{"id":88726,"date":"2023-05-11T12:19:35","date_gmt":"2023-05-11T17:49:35","guid":{"rendered":"https:\/\/www.whizlabs.com\/blog\/?p=88726"},"modified":"2023-05-14T22:49:32","modified_gmt":"2023-05-15T04:19:32","slug":"microsoft-azure-dp-100-exam-questions","status":"publish","type":"post","link":"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/","title":{"rendered":"Free DP-100 Exam Questions: Designing and Implementing a Data Science Solution on Azure"},"content":{"rendered":"<p>In today&#8217;s world, there are a lot of raw data generated every day in almost every IT Industries, so there is an need of a dedicated team who can be to evaluate and plot this data to make inferences and imply the Machine Learning algorithm to make the predictions.\u00a0 Hence there is a huge demand and gap for <strong id=\"DP_100\">Data Scientists<\/strong>.<\/p>\n<p><strong>Microsoft Azure Data Scientist \u00a0<a href=\"https:\/\/www.whizlabs.com\/microsoft-azure-certification-dp-100\/\" target=\"_blank\" rel=\"noopener\">DP-100 Certification<\/a><\/strong> helps to assess the individual knowledge on data science and machine learning to deploy and run machine learning workloads on Microsoft Azure with the usage of Azure Machine Learning Service.<\/p>\n<p>If you are preparing for Microsoft Azure Data Scientist Certification (DP-100) Exam, then you have to check your readiness by taking these exam questions and answers.<\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_76 ez-toc-wrap-left counter-hierarchy ez-toc-counter ez-toc-custom ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" 
style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #ea7e02;color:#ea7e02\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #ea7e02;color:#ea7e02\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Demand_for_Data_Scientists\" >Demand for Data Scientists<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Roles_and_Responsibilities_of_Data_Scientists\" >Roles and Responsibilities of Data Scientists<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Top_20_Free_DP-100_Exam_Questions\" >Top 20 Free DP-100 Exam Questions<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Design_and_prepare_a_machine_learning_solution\" >Domain: Design and prepare a machine learning 
solution<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Explore_data_and_train_models\" >Domain: Explore data and train models<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Deploy_and_retrain_a_model\" >Domain: Deploy and retrain a model<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-7\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Design_and_prepare_a_machine_learning_solution-2\" >Domain: Design and prepare a machine learning solution<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-8\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Deploy_and_retrain_a_model-2\" >Domain: Deploy and retrain a model<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-9\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Design_and_prepare_a_machine_learning_solution-3\" >Domain: Design and prepare a machine learning solution<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-10\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Design_and_prepare_a_machine_learning_solution-4\" >Domain: Design and prepare a machine learning solution<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-11\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Explore_data_and_train_models-2\" >Domain: Explore data and train models<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a 
class=\"ez-toc-link ez-toc-heading-12\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Deploy_and_retrain_a_model-3\" >Domain: Deploy and retrain a model<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-13\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Design_and_prepare_a_machine_learning_solution-5\" >Domain: Design and prepare a machine learning solution<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-14\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Design_and_prepare_a_machine_learning_solution-6\" >Domain: Design and prepare a machine learning solution<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-15\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Explore_data_and_train_models-3\" >Domain: Explore data and train models<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-16\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Prepare_a_model_for_deployment\" >Domain: Prepare a model for deployment<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-17\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Deploy_and_retrain_a_model-4\" >Domain: Deploy and retrain a model<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-18\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Deploy_and_retrain_a_model-5\" >Domain: Deploy and retrain a model<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-19\" 
href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Prepare_a_model_for_deployment-2\" >Domain: Prepare a model for deployment<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-20\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Design_and_prepare_a_machine_learning_solution-7\" >Domain: Design and prepare a machine learning solution<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-21\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Explore_data_and_train_models-4\" >Domain: Explore data and train models<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-22\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Deploy_and_retrain_a_model-6\" >Domain: Deploy and retrain a model<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-23\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Domain_Deploy_and_retrain_a_model-7\" >Domain: Deploy and retrain a model<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-24\" href=\"https:\/\/www.whizlabs.com\/blog\/microsoft-azure-dp-100-exam-questions\/#Summary\" >Summary<\/a><\/li><\/ul><\/nav><\/div>\n<h3><span class=\"ez-toc-section\" id=\"Demand_for_Data_Scientists\"><\/span>Demand for Data Scientists<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>According to a report by IBM, the demand for data scientists gets level up by<b>\u00a028%<\/b> in the year 2024 and beyond, and thus making it one of the ever-growing careers in the future.<\/p>\n<figure id=\"attachment_88756\" aria-describedby=\"caption-attachment-88756\" style=\"width: 1043px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" 
class=\"wp-image-88756 size-full\" src=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions.webp\" alt=\"DP-100 exam questions\" width=\"1043\" height=\"718\" srcset=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions.webp 1043w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-300x207.webp 300w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-1024x705.webp 1024w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-768x529.webp 768w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-150x103.webp 150w\" sizes=\"(max-width: 1043px) 100vw, 1043px\" \/><figcaption id=\"caption-attachment-88756\" class=\"wp-caption-text\">indeed.com<\/figcaption><\/figure>\n<h3><span class=\"ez-toc-section\" id=\"Roles_and_Responsibilities_of_Data_Scientists\"><\/span>Roles and Responsibilities of Data Scientists<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<ul>\n<li>Transforming vast amounts of organized and unstructured data into informative data.<\/li>\n<li>Finding the data analytics solutions with the greatest potential to advance business.<\/li>\n<li>Finding hidden patterns and trends using data analysis methods like text analytics, machine learning, and deep learning.<\/li>\n<li>Data cleansing and validation to increase data accuracy and efficacy.<\/li>\n<\/ul>\n<h3><span class=\"ez-toc-section\" id=\"Top_20_Free_DP-100_Exam_Questions\"><\/span>Top 20 Free DP-100 Exam Questions<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Here&#8217;s an compiled list of free DP-100 exam questions and answeres framed by our experts and by taking this certification can really helps you to get thorough on every concepts required to clear the real examination.<\/p>\n<blockquote><p>Also Read: <span style=\"font-size: 16px;\">How to Prepare for the Exam <a 
href=\"https:\/\/www.whizlabs.com\/blog\/dp-100-exam-preparation\/\" target=\"_blank\" rel=\"noopener\">DP-100<\/a>: Designing and Implementing a Data Science Solution on Azure?<\/span><\/p><\/blockquote>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Design_and_prepare_a_machine_learning_solution\"><\/span><b>Domain: Design and prepare a machine learning solution<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 1:<\/b><span style=\"font-weight: 400;\"> Which of the following statements are Not True in the usage of Execute Python Script component of Azure Machine Learning designer?<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The script must contain a function named azureml_main<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The entry point function must have two input arguments<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">A zip file should always be connected to the third input port<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Run settings section of Execute Python Script component is used to attach a compute target to execute the step<\/span><\/li>\n<\/ol>\n<p><b>Correct Answer: C<\/b><\/p>\n<p><b>Explanation:\u00a0\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400;\">We use <\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/algorithm-module-reference\/execute-python-script\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Execute Python Script<\/span><\/a><span style=\"font-weight: 400;\"> component to add custom logic to Azure Machine Learning designer where we can bring and use our custom-written code.<\/span><\/p>\n<p><b>Option A is incorrect<\/b><span style=\"font-weight: 400;\"> because a function named <\/span><b>azureml_main<\/b><span style=\"font-weight: 400;\"> is 
mandatory for the azure ML Designer framework to execute the step.<\/span><\/p>\n<p><b>Option B is incorrect<\/b><span style=\"font-weight: 400;\"> because the entry point to the function must have two input arguments which must be pandas\u2019 data frames.<\/span><\/p>\n<p><b>Option C is correct<\/b><span style=\"font-weight: 400;\"> because it is <\/span><b>not mandatory<\/b><span style=\"font-weight: 400;\"> to add any zip file to the step, it is only needed if we want to use code from our custom python modules\/packages.<\/span><\/p>\n<p><b>Option D is incorrect<\/b><span style=\"font-weight: 400;\"> because a default compute target is attached to the pipeline that runs this step and if we want to use a different compute machine <\/span><b>Run settings<\/b><span style=\"font-weight: 400;\"> have options to detach and attach compute machines.<\/span><\/p>\n<p><b>References:<\/b><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/component-reference\/execute-python-script\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/component-reference\/execute-python-script<\/span><\/a><span style=\"font-weight: 400;\"> ,<\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/training\/modules\/create-classification-model-azure-machine-learning-designer\/\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">https:\/\/learn.microsoft.com\/en-us\/training\/modules\/create-classification-model-azure-machine-learning-designer\/<\/span><\/a><\/p>\n<p><b>\u00a0<\/b><\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Explore_data_and_train_models\"><\/span><b>Domain: Explore data and train models<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 2:<\/b><span style=\"font-weight: 400;\"> You are conducting multiple experiments trying to find the best Machine Learning algorithm that fits your data for a regression problem, and you are 
using MLFlow to keep track of the experiments. You want to log the regression metrics for each experiment using the following dictionary -&gt; metrics = {\u201cr2\u201d: 0.1, &#8220;mse&#8221;: 2500.00, &#8220;rmse&#8221;: 50.00}. Which of the following code line can fulfill the task?<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">mlflow.log_metric(metrics)<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">mlflow.log_params(metrics)<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">mlflow.log_artifacts(metrics)<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">mlflow.log_metrics(metrics)<\/span><\/li>\n<\/ol>\n<p><b>Correct Answer: D<\/b><\/p>\n<p><b>Explanation:\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The MLflow Tracking component is an API and UI for logging parameters, code versions, metrics, and output files when running your machine learning code and for later visualizing the results.<\/span><\/p>\n<p><b>Option A is incorrect<\/b><span style=\"font-weight: 400;\"> because mlflow.log_metric() is used to log a single metric. For example: &#8211; mlflow.log_metric(&#8220;mse&#8221;, 2500.00)<\/span><\/p>\n<p><b>Option B is incorrect<\/b><span style=\"font-weight: 400;\"> because mlflow.log_params() is a batch of parameters for the current run, parameters are different configuration variables that are internal to the machine learning models that can change independently as it learns the patterns in the data.<\/span><\/p>\n<p><b>Option C is incorrect<\/b><span style=\"font-weight: 400;\"> because mlflow.log_artifacts() is generally used to Log all the contents of a local directory as artifacts of the run. 
Artifacts could be plot files, model output files, or any other file we would like to store for later reference.<\/span><\/p>\n<p><b>Option D is correct<\/b><span style=\"font-weight: 400;\"> because a mlflow.log_metrics() is used to Log multiple metrics for the current run.<\/span><\/p>\n<p><b>References:<\/b><a href=\"https:\/\/learn.microsoft.com\/en-us\/training\/modules\/use-mlflow-to-track-experiments-azure-databricks\/5-exercise-experiment\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">https:\/\/learn.microsoft.com\/en-us\/training\/modules\/use-mlflow-to-track-experiments-azure-databricks\/5-exercise-experiment<\/span><\/a><span style=\"font-weight: 400;\">,<\/span><a href=\"https:\/\/www.mlflow.org\/docs\/latest\/python_api\/mlflow.html#mlflow.log_metrics\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">https:\/\/www.mlflow.org\/docs\/latest\/python_api\/mlflow.html#mlflow.log_metrics<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Deploy_and_retrain_a_model\"><\/span><b>Domain: Deploy and retrain a model<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 3:<\/b><span style=\"font-weight: 400;\"> You are building a training pipeline in Azure machine learning studio workspace using AzureML python SDK library <\/span><span style=\"font-weight: 400;\">azureml.pipeline.core<\/span><span style=\"font-weight: 400;\">. You also want to Automate the process of retraining after the deployment whenever you receive enough new data. Assume that you are planning to upload the new data at a Blob storage location, and you wanted the pipeline to get triggered as soon as the upload to Blob storage happens. Select all the libraries under <\/span><span style=\"font-weight: 400;\">azureml.pipeline.core<\/span><span style=\"font-weight: 400;\"> that are needed to accomplish this task. 
(SELECT TWO)<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">from<\/span><span style=\"font-weight: 400;\"> azureml.pipeline.core <\/span><span style=\"font-weight: 400;\">import<\/span><span style=\"font-weight: 400;\"> Schedule<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">from<\/span><span style=\"font-weight: 400;\"> azureml.core.datastore <\/span><span style=\"font-weight: 400;\">import<\/span><span style=\"font-weight: 400;\"> Datastore<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">from<\/span><span style=\"font-weight: 400;\"> azureml.pipeline.core <\/span><span style=\"font-weight: 400;\">import<\/span><span style=\"font-weight: 400;\"> ScheduleRecurrence<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">from<\/span><span style=\"font-weight: 400;\"> azureml.core<\/span><span style=\"font-weight: 400;\"> import<\/span><span style=\"font-weight: 400;\"> Environment<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">from<\/span><span style=\"font-weight: 400;\"> azureml.pipeline.core <\/span><span style=\"font-weight: 400;\">import<\/span><span style=\"font-weight: 400;\"> PipelineRun<\/span><\/li>\n<\/ol>\n<p><b>Correct Answers: A and B<\/b><\/p>\n<p><b>Explanation:<\/b><\/p>\n<p><span style=\"font-weight: 400;\">azureml.pipeline.core<\/span><span style=\"font-weight: 400;\"> contains core functionality to work with Azure Machine Learning pipelines, which are configurable machine learning workflows (i,e. Pipelines). 
Azure Machine Learning pipelines allow you to create re-usable machine learning workflows that can be used as a template for your machine learning scenarios such as retraining, model- deployment, data processing, etc.<\/span><\/p>\n<p><b>Option A is correct<\/b><span style=\"font-weight: 400;\"> because azureml.pipeline.core.Schedule Class is designed to monitor changes to the blob container or location in the blob storage container and create a trigger event that starts the training pipeline with pre-set input parameters.<\/span><\/p>\n<p><b>Option B is correct<\/b><span style=\"font-weight: 400;\"> because azureml.pipeline.core.Datastore is needed to create an object that refers to the Blob storage location where our input files are placed and this object is used as input to the Schedule object which monitors for any file additions.<\/span><\/p>\n<p><b>Option C is incorrect<\/b><span style=\"font-weight: 400;\"> because azureml.pipeline.core.ScheduleRecurrence Class is used to schedule a pipeline to run at periodic intervals (i,e. after 15 days, weeks, hours, etc). An instance of this class is used as an input to the Schedule object but when triggering a pipeline based on monitoring a blob storage location we don\u2019t need to use this class.<\/span><\/p>\n<p><b>Option D is incorrect<\/b><span style=\"font-weight: 400;\"> because azureml.core.Experiment Class is used to Fetch the environment to run a pipeline. This is attached to pipeline and used when we submit the pipeline to run.<\/span><\/p>\n<p><b>Option E is incorrect<\/b><span style=\"font-weight: 400;\"> because azureml.Pipeline.core.PipelineRun Class is used to run the built pipeline. 
This class doesn\u2019t help to build a pipeline rather this is used after the development activity.<\/span><\/p>\n<p><b>Reference:<\/b><a href=\"https:\/\/learn.microsoft.com\/en-us\/training\/modules\/create-pipelines-in-aml\/\" target=\"_blank\" rel=\"noopener\"><b>\u00a0<\/b><\/a><\/p>\n<p><a href=\"https:\/\/learn.microsoft.com\/en-us\/training\/modules\/create-pipelines-in-aml\/\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">https:\/\/learn.microsoft.com\/en-us\/training\/modules\/create-pipelines-in-aml\/<\/span><\/a><span style=\"font-weight: 400;\">\u00a0<\/span><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Design_and_prepare_a_machine_learning_solution-2\"><\/span><b>Domain: Design and prepare a machine learning solution<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 4: <\/b><span style=\"font-weight: 400;\">You are working on a binary classification problem and tried all classical machine learning algorithms but none of them resulted in a satisfactory output. So you have turned towards building a deep learning model and you have multiple optimizers to choose from. 
Identify the option that is not an optimization algorithm of a Deep Neural Network.<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Stochastic Gradient Descent (SGD)<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Adaptive Learning Rate (ADADELTA)<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Adaptive Momentum Estimation (Adam)<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Gradient Clipping (GC)<\/span><\/li>\n<\/ol>\n<p><b>Correct Answer: D<\/b><\/p>\n<p><b>Explanation:\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400;\">An Optimizer modifies\/updates the attributes of the neural network which is called learning, such as weights, bias and learning rate, etc. Different algorithms were proposed to do these activities each having its own advantages and disadvantages.<\/span><\/p>\n<p><b>Option A is incorrect<\/b><span style=\"font-weight: 400;\"> because it is one of the basic optimizers which calculates loss and updates weight in the back-propagation.<\/span><\/p>\n<p><b>Option B is incorrect<\/b><span style=\"font-weight: 400;\"> because it is an advanced version of SGD which also adjusts the learning rate during backpropagation.<\/span><\/p>\n<p><b>Option C is incorrect<\/b><span style=\"font-weight: 400;\"> because it is also an extension of SGD combined with the momentum algorithm.<\/span><\/p>\n<p><b>Option D is correct<\/b><span style=\"font-weight: 400;\"> because Gradient Clipping is not an optimizer algorithm rather it is a technique where the error derivative in the backpropagation is clipped which is an effective way to tackle Exploding Gradients that occur during Deep learning models training.<\/span><\/p>\n<p><b>Reference:<\/b><a href=\"http:\/\/learn.microsoft.com\/en-us\/training\/modules\/train-evaluate-deep-learn-models\/\" target=\"_blank\" 
rel=\"noopener\"><b>\u00a0<\/b><\/a><\/p>\n<p><a href=\"http:\/\/learn.microsoft.com\/en-us\/training\/modules\/train-evaluate-deep-learn-models\/\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">http:\/\/learn.microsoft.com\/en-us\/training\/modules\/train-evaluate-deep-learn-models<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Deploy_and_retrain_a_model-2\"><\/span><b>Domain: Deploy and retrain a model<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 5: <\/b><span style=\"font-weight: 400;\">You have created an Azure Pipeline to test the deployment script and hosted a repository containing the deployment pipeline on a private GitHub repository. Now you want to set up a CI\/CD Pipeline from GitHub that executes the pipeline in Azure whenever a change is made to the deployment script. The First step in setting up the CI\/CD pipeline is to set up\u00a0 Authentication with Azure Pipelines<\/span> <span style=\"font-weight: 400;\">for GitHub. Below are the listed steps required to set up authentication in GitHub. 
Please select the correct sequence to achieve the goal.<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Add a Personal Access Token as a secret of your repository<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Generate a Personal Access token, in your DevOps Organization with an expiry date<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Open your GitHub repository and go to Security Settings<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Sign in to your Azure DevOps organization<\/span><b>\u00a0<\/b><\/li>\n<\/ol>\n<p><b>Correct Answer: D, B, C and A<\/b><\/p>\n<p><b>Explanation:<\/b><a href=\"https:\/\/docs.github.com\/en\/free-pro-team@latest\/actions\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">\u00a0<\/span><\/a><\/p>\n<p><a href=\"https:\/\/docs.github.com\/en\/free-pro-team@latest\/actions\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">GitHub Action<\/span><\/a><span style=\"font-weight: 400;\">s helps us to automate software development workflows such as build, test, package, release, or deploy any project on GitHub with a workflow from within GitHub. With GitHub Workflows for Azure, you can create workflows that you can set up in your repository to build, test, package, release, and deploy to Azure. 
To set up a workflow that can trigger an azure pipeline, GitHub needs to authenticate with Azure DevOps, and using Personal Access Token is one of the simplest ways of doing it which connects your GitHub account to Azure DevOps in the following steps<\/span><b>:<\/b><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Sign in to your Azure DevOps organization.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Add a Personal Access Token as a secret of your repository.<\/span><\/li>\n<\/ol>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Open your GitHub repository and go to Security Settings.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Add a Personal Access Token as a secret of your repository.<\/span><\/li>\n<\/ol>\n<p><b>Reference:<\/b><\/p>\n<p><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/developer\/github\/github-actions\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">https:\/\/learn.microsoft.com\/en-us\/azure\/developer\/github\/github-actions<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Design_and_prepare_a_machine_learning_solution-3\"><\/span><b>Domain: Design and prepare a machine learning solution<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 6:<\/b><span style=\"font-weight: 400;\"> You are designing a machine learning solution using azure Machine learning studio. 
Which of the following is not true in regard to using Azure Machine Learning Studio?<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The workspace is the top-level resource for Azure Machine Learning<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">AzureMl Pipelines are reusable workflows for training and retraining your model<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">When you create the workspace, you also need to create an associated workspace such as Azure Storage Account, Azure Container Registry, etc. before starting to use the workspace<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">You can interact with the workspace using AzureML Python SDK from any python environment remotely<\/span><\/li>\n<\/ol>\n<p><b>Correct Answer: C<\/b><\/p>\n<p><b>Explanation:<\/b><\/p>\n<p><b>Option A is incorrect<\/b><span style=\"font-weight: 400;\"> because the workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning.<\/span><\/p>\n<p><b>Option B is incorrect<\/b><span style=\"font-weight: 400;\"> because an Azure Machine Learning pipeline is an independently executable workflow of a complete machine learning task that can be executed on a variety of compute machines from AzureML Studio Workspace.<\/span><\/p>\n<p><b>Option C is correct<\/b><span style=\"font-weight: 400;\"> because when you create the workspace, associated resources are also created for you. 
You may choose not to use these default resources and create your own; nonetheless, default resources are always created for you to start experimenting within the workspace.<\/span><\/p>\n<p><b>Option D is incorrect<\/b><span style=\"font-weight: 400;\"> because Azure provides us with the open-source Azure libraries for Python to simplify provisioning, managing, and using Azure resources from any remote Python environment after proper authorization.<\/span><\/p>\n<p><b>Reference:<\/b><\/p>\n<p><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/concept-workspace\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/concept-workspace<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Design_and_prepare_a_machine_learning_solution-4\"><\/span><b>Domain: Design and prepare a machine learning solution<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 7: <\/b><span style=\"font-weight: 400;\">You are tasked with creating a binary prediction model as soon as possible and are provided with a .csv file of size 400 MB. After quick thinking, you have decided to use Azure ML Designer as it provides a quick out-of-the-box solution. You have to use this .csv file as input to your model. 
Which is the fastest way, and the one recommended by Microsoft, to ingest data into your solution in AzureML?<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Use the Import Data component in AzureML Designer to read data from the local machine using a path on your local computer<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Register the .csv file as a Dataset and access it in the AzureML Designer<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Upload your file to an HTTPS server like GitHub. Use the Import Data component in AzureML Designer to read the data using a URL on that HTTP\/S server<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Upload your data to Azure Blob Storage and access it from the AzureML Designer<\/span><\/li>\n<\/ol>\n<p><b>Correct Answer: B<\/b><\/p>\n<p><b>Explanation:<\/b><\/p>\n<p><b>Option A is incorrect<\/b><span style=\"font-weight: 400;\"> because the Import Data component of AzureML Designer doesn\u2019t support reading data directly from a local computer.<\/span><\/p>\n<p><b>Option B is correct<\/b><span style=\"font-weight: 400;\"> because datasets are part of the AzureML workspace, are managed by Azure, and are the approach recommended by Microsoft when working with AzureML Studio. 
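<\/span><\/p>
<p><span style=\"font-weight: 400;\">A hedged sketch of that registration flow with the v1 azureml-core SDK (the datastore path and names below are hypothetical):<\/span><\/p>

```python
# Illustrative sketch (azureml-core v1 SDK); file names and paths are hypothetical.
def register_csv_as_dataset(ws, local_csv="data.csv", dataset_name="my-dataset"):
    """Upload a local .csv to the workspace's default datastore and register it."""
    from azureml.core import Dataset  # requires the azureml-core package

    datastore = ws.get_default_datastore()
    datastore.upload_files([local_csv], target_path="data", overwrite=True)
    dataset = Dataset.Tabular.from_delimited_files(path=(datastore, "data/" + local_csv))
    # A registered dataset shows up in Designer under Asset library > Data.
    return dataset.register(workspace=ws, name=dataset_name, create_new_version=True)
```

<p><span style=\"font-weight: 400;\">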
We can also take full advantage of advanced data features like<\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/v1\/how-to-version-track-datasets\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">versioning and tracking<\/span><\/a><span style=\"font-weight: 400;\"> and<\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/v1\/how-to-monitor-datasets\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">data monitoring<\/span><\/a><span style=\"font-weight: 400;\"> using datasets. Apart from datasets, we can also use datastores by pointing them toward our files in the Azure blob container. Once you register the file as a dataset, it appears under AzureML Designer &gt; Asset library &gt; Data, where you can select it and use it inside the Designer.<\/span><\/p>\n<p><b>Option C is incorrect<\/b><span style=\"font-weight: 400;\"> because uploading files to servers causes security issues, as anyone with a link can access them if not properly managed, so it is not a preferred approach when dealing with confidential data.<\/span><\/p>\n<p><b>Option D is incorrect<\/b><span style=\"font-weight: 400;\"> because files from Azure Blob Storage cannot be accessed directly from the Designer. 
You need to create a datastore connecting the Azure Blob Storage container with Azure ML Studio, and then use this datastore to import data into the AzureML Designer.<\/span><\/p>\n<p><b>References:<\/b><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/concept-azure-machine-learning-v2?tabs=cli#data\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/concept-azure-machine-learning-v2?tabs=cli#data<\/span><\/a><b>,<\/b><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/component-reference\/import-data\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/component-reference\/import-data<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Explore_data_and_train_models-2\"><\/span><b>Domain: Explore data and train models<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 8: <\/b><span style=\"font-weight: 400;\">You are writing a training script and you wish to run it on a remote compute target with a compute disk size of 30 GB and 4 GB of RAM. Your input data, 40 GB in size, is registered as a file dataset. 
Which mode should you use to access input data in your training script?<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">as_mount()<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">as_download()<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">as_upload()<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">as_hdfs()<\/span><\/li>\n<\/ol>\n<p><b>Correct Answer: A<\/b><\/p>\n<p><b>Explanation:<\/b><\/p>\n<p><b>Option A is correct<\/b><span style=\"font-weight: 400;\"> because the data size exceeds the compute disk size, so downloading isn&#8217;t possible. For this scenario, mounting is recommended, since only the data files used by your script are loaded at the time of processing.<\/span><\/p>\n<p><b>Option B is incorrect<\/b><span style=\"font-weight: 400;\"> because the dataset cannot be downloaded due to the size restriction.<\/span><\/p>\n<p><b>Option C is incorrect<\/b><span style=\"font-weight: 400;\"> because the as_upload() method is used to upload model outputs and other artifacts to Azure, not to read data into your script.<\/span><\/p>\n<p><b>Option D is incorrect<\/b><span style=\"font-weight: 400;\"> because as_hdfs() is used with Azure Synapse. HDFS (Hadoop Distributed File System) is normally used to process large data sets on compute clusters with multiple nodes for parallel processing. 
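<\/span><\/p>
<p><span style=\"font-weight: 400;\">To restate the size reasoning behind the correct option as a toy helper (pure Python; the real v1 SDK calls are dataset.as_mount() and dataset.as_download(), not this function):<\/span><\/p>

```python
# Toy illustration of the mount-vs-download decision; not an AzureML API.
def pick_access_mode(data_size_gb, compute_disk_gb):
    """Mount when the dataset cannot fit on the compute disk, else download."""
    if data_size_gb > compute_disk_gb:
        return "as_mount"    # stream only the files the script actually touches
    return "as_download"     # faster repeated local reads once copied
```

<p><span style=\"font-weight: 400;\">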
This mode is used for experiment runs dealing with problems whose solution involves both Azure Synapse and Azure Machine Learning.<\/span><\/p>\n<p><b>References:<\/b><\/p>\n<p><a href=\"https:\/\/learn.microsoft.com\/en-us\/training\/paths\/build-ai-solutions-with-azure-ml-service\/\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">https:\/\/learn.microsoft.com\/en-us\/training\/paths\/build-ai-solutions-with-azure-ml-service\/<\/span><\/a><span style=\"font-weight: 400;\">,<\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/v1\/how-to-train-with-datasets\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/v1\/how-to-train-with-datasets<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Deploy_and_retrain_a_model-3\"><\/span><b>Domain: Deploy and retrain a model<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 9: <\/b><span style=\"font-weight: 400;\">You have successfully trained and validated a machine learning model using the AzureML Python SDK. You are using the Visual Studio Code editor, as it has extensions, such as VS Code AzureML, which help you access and control Azure services from your local machine. You want to deploy the trained model to a compute target as a web service for some further testing. 
Which of the following cannot fulfill the requirement?<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Azure Container Instance<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Azure Kubernetes Service<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Local development environment<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Azure Virtual Machine<\/span><\/li>\n<\/ol>\n<p><b>Correct Answer: D<\/b><\/p>\n<p><b>Explanation:<\/b><\/p>\n<p><b>Option A is incorrect<\/b><span style=\"font-weight: 400;\"> because Azure Container Instances supports deploying Azure Machine Learning models and is generally a preferred option during testing and development.<\/span><\/p>\n<p><b>Option B is incorrect<\/b><span style=\"font-weight: 400;\"> because Azure Kubernetes Service also supports the deployment of models, and Kubernetes is more suitable for production workloads.<\/span><\/p>\n<p><b>Option C is incorrect<\/b><span style=\"font-weight: 400;\"> because we can deploy models to the local development environment during testing and development. This method is preferred when debugging errors.<\/span><\/p>\n<p><b>Option D is correct <\/b><span style=\"font-weight: 400;\">because we cannot deploy a model directly to the virtual machine. 
It is just a compute machine created on the cloud and doesn\u2019t necessarily have all the properties and support needed to deploy a machine learning model directly and start consuming the created endpoint.<\/span><\/p>\n<p><b>References:<\/b><\/p>\n<p><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/v1\/concept-model-management-and-deployment?source=recommendations\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/v1\/concept-model-management-and-deployment?source=recommendations<\/span><\/a><span style=\"font-weight: 400;\">,<\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/training\/paths\/build-ai-solutions-with-azure-ml-service\/\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">https:\/\/learn.microsoft.com\/en-us\/training\/paths\/build-ai-solutions-with-azure-ml-service\/<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Design_and_prepare_a_machine_learning_solution-5\"><\/span><b>Domain: Design and prepare a machine learning solution<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 10:<\/b><span style=\"font-weight: 400;\"> You have trained a model using a dataset registered in AzureML Studio which contains historical data from the past 2 years. As time progresses, you will collect new data, and you fear that over time there may be trends that change the profile of the data. So you have set up data drift monitoring in Azure Machine Learning Studio to notify the data science team whenever data drift is detected. 
From the following, select the one which is not true regarding the setup of data drift monitoring in AzureML.<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">To monitor data drift using registered datasets, you need to register two datasets<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The data drift monitor gets triggered for every small change in the data and cannot be customized to trigger only on changes that we think are significant<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">You can create dataset monitors using the visual interface in Azure Machine Learning studio<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">An alert notification by email can be configured while defining data drift to notify team members of the data drift<\/span><\/li>\n<\/ol>\n<p><b>Correct Answer: B<\/b><\/p>\n<p><b>Explanation:<\/b><\/p>\n<p><b>Option A is incorrect<\/b><span style=\"font-weight: 400;\"> because, to monitor data drift using registered datasets, you need to register two datasets: 1) a baseline dataset &#8211; usually the original training data, and 2) a target dataset that will be compared to the baseline at set time intervals. The target dataset requires a column for each feature you want to compare, and a timestamp column so the rate of data drift can be measured.<\/span><\/p>\n<p><b>Option B is correct<\/b><span style=\"font-weight: 400;\"> because data drift is measured using a calculated magnitude of change in the statistical distribution of feature values over time. You can expect some natural random variation between the baseline and target datasets, but you should monitor for large changes that might indicate significant data drift, not for small changes. It is possible to detect only the large changes that we are interested in. 
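<\/span><\/p>
<p><span style=\"font-weight: 400;\">For intuition only, a toy sketch of thresholding a drift magnitude (pure Python; the real AzureML monitor computes drift over whole feature distributions, not just means):<\/span><\/p>

```python
# Toy illustration of a drift threshold; not the AzureML DataDriftDetector API.
def drift_magnitude(baseline, target):
    """Percent change in the mean of a feature between baseline and target samples."""
    base_mean = sum(baseline) / len(baseline)
    target_mean = sum(target) / len(target)
    return abs(target_mean - base_mean) / abs(base_mean) * 100.0

def should_alert(baseline, target, threshold_pct=25.0):
    # Small natural variation stays below the threshold; only large drift alerts.
    return drift_magnitude(baseline, target) > threshold_pct
```

<p><span style=\"font-weight: 400;\">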
This can be achieved by defining a threshold for the data drift magnitude above which you want to be notified.<\/span><\/p>\n<p><b>Option C is incorrect<\/b><span style=\"font-weight: 400;\"> because you can create dataset monitors using the visual interface in Azure Machine Learning studio: go to the Datasets section and select the Dataset monitors option to set up data drift monitoring.<\/span><\/p>\n<p><b>Option D is incorrect <\/b><span style=\"font-weight: 400;\">because we can use the alert_configuration setting while defining the DataDriftDetector to send an email notification when drift is detected.<\/span><\/p>\n<p><b>Reference:<\/b><\/p>\n<p><a href=\"https:\/\/learn.microsoft.com\/en-us\/training\/modules\/monitor-data-drift-with-azure-machine-learning\/3-schedules-alerts\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">https:\/\/learn.microsoft.com\/en-us\/training\/modules\/monitor-data-drift-with-azure-machine-learning\/3-schedules-alerts<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Design_and_prepare_a_machine_learning_solution-6\"><\/span><b>Domain: Design and prepare a machine learning solution<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 11: <\/b><span style=\"font-weight: 400;\">You have developed a training pipeline using Jupyter notebooks directly in your workspace in Azure Machine Learning studio. 
You have used the following command in a cell to start the training pipeline.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">run = experiment.submit(ScriptRunConfigObject)<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Which of the following commands lets you retrieve all the metrics logged during the run?<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">RunDetails(run).show()<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">run.get_detailed_status()<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">run.get_metrics()<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">run.wait_for_completion()<\/span><\/li>\n<\/ol>\n<p><b>Correct Answer: C<\/b><\/p>\n<p><b>Explanation:<\/b><\/p>\n<p><b>Option A is incorrect <\/b><span style=\"font-weight: 400;\">because RunDetails(run).show() is used to monitor the run once the training job is submitted. We can view the live messages being logged and the live status of the pipeline run in real time.<\/span><\/p>\n<p><b>Option B is incorrect <\/b><span style=\"font-weight: 400;\">because run.get_detailed_status() fetches the latest status of the run. 
Based on the current status, it will also fetch some additional logs.<\/span><\/p>\n<p><b>Option C is correct <\/b><span style=\"font-weight: 400;\">because<\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/python\/api\/azureml-core\/azureml.core.run(class)?view=azure-ml-py#azureml-core-run-get-metrics\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">get_metrics<\/span><\/a><span style=\"font-weight: 400;\">() lets us fetch all the metrics logged for the run while analyzing the training pipeline run\/job results.<\/span><\/p>\n<p><b>Option D is incorrect <\/b><span style=\"font-weight: 400;\">because wait_for_completion() waits for the completion of the run and returns the status object after the wait.<\/span><\/p>\n<p><b>References:<\/b><\/p>\n<p><a href=\"https:\/\/learn.microsoft.com\/en-us\/python\/api\/azureml-core\/azureml.core.run(class)?view=azure-ml-py#azureml-core-run-get-metrics\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">azureml.core.Run class &#8211; Azure Machine Learning Python | Microsoft Learn<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/python\/api\/azureml-core\/azureml.core.run(class)?view=azure-ml-py#azureml-core-run-get-detailed-status\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">azureml.core.Run class &#8211; Azure Machine Learning Python | Microsoft Learn<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/github.com\/Azure\/MachineLearningNotebooks\/blob\/master\/how-to-use-azureml\/work-with-data\/datasets-tutorial\/train-with-datasets\/train-with-datasets.ipynb\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">MLNotebooks\/train-with-datasets.ipynb at master \u00b7 Azure\/MachineLearningNotebooks \u00b7 GitHub<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Explore_data_and_train_models-3\"><\/span><b>Domain: Explore data and 
train models<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 12: <\/b><span style=\"font-weight: 400;\">A teammate of yours has used Azure AutoML to choose the right algorithm for a forecasting problem, and your team has found the model that gives the best forecast. So they implemented it in the final solution, but they are wondering whether they have followed the Responsible Artificial Intelligence (Responsible AI) guidelines. Help your teammate by finding where the Responsible AI guidelines haven\u2019t been followed.<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Creation of a<\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/concept-responsible-ai-dashboard\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">Responsible<\/span><\/a> <span style=\"font-weight: 400;\">AI Dashboard for the final model created<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The<\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/concept-error-analysis\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">error analysis<\/span><\/a><span style=\"font-weight: 400;\"> component of the<\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/concept-responsible-ai-dashboard\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">Responsible AI dashboard shows<\/span><\/a><span style=\"font-weight: 400;\"> that the error rate is uniform across different groups of the categorical input features<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The final solution is designed in a way where we can explain why a forecast is higher or lower<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span 
style=\"font-weight: 400;\">Entire Repository is Committed to your company&#8217;s GitHub channel with the config file that has secrets, such as database connection strings or passwords<\/span><\/li>\n<\/ol>\n<p><b>Correct Answer : D<\/b><\/p>\n<p><b>Explanation:<\/b><\/p>\n<p><b>Option A is incorrect <\/b><span style=\"font-weight: 400;\">because the creation of a<\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/concept-responsible-ai-dashboard\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">Responsible<\/span><\/a> <span style=\"font-weight: 400;\">AI Dashboard helps us to check one of the principles of the Responsible AI Guidelines Fairness and inclusiveness which says that AI systems should treat everyone fairly and avoid affecting similarly situated groups of people in different ways. So here Responsible AI Guidelines are upheld.<\/span><\/p>\n<p><b>Option B is incorrect <\/b><span style=\"font-weight: 400;\">because the error must be uniform across all groups\/regions of input features which represents another principle of the Responsible AI Guidelines,\u00a0 Reliability, and safety. For example, Let\u2019s say we have a higher rate for a group of individuals less than 25 years of age, We are essentially introducing bias in the model for people below 25 years of age as the results for them are not reliable when compared to others. So here Responsible AI Guidelines are upheld.<\/span><\/p>\n<p><b>Option C is incorrect <\/b><span style=\"font-weight: 400;\">because based on the forecast organisation takes key decisions such as ordering inventory, hiring employees, making delivery promises, etc so it&#8217;s essential to understand why the output is the way it is. This is also another principle of the Responsible AI Guidelines called Transparency. 
So here the Responsible AI guidelines are upheld.<\/span><\/p>\n<p><b>Option D is correct <\/b><span style=\"font-weight: 400;\">because storing secrets in source code is impractical and is a huge security risk. It violates another principle of the Responsible AI guidelines, privacy and security. One of the best alternatives recommended by Microsoft to storing secrets in source code is to make them available in the application environment by using Azure Key Vault, an Azure service that provides secure storage of generic secrets for applications.<\/span><\/p>\n<p><b>References:<\/b><\/p>\n<p><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/concept-responsible-ai\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">What is Responsible AI &#8211; Azure Machine Learning | Microsoft Learn<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/concept-responsible-ai#privacy-and-security\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">What is Responsible AI &#8211; Azure Machine Learning | Microsoft Learn<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/how-to-use-secrets-in-runs\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Authentication secrets &#8211; Azure Machine Learning | Microsoft Learn<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Prepare_a_model_for_deployment\"><\/span><b>Domain: Prepare a model for deployment<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 13: <\/b><span style=\"font-weight: 400;\">You have developed a pipeline using the AzureML Python SDK, which has three steps -&gt; Step-1: Read and preprocess input data, Step-2: Train the model on preprocessed data, Step-3: Deploy the trained model to an endpoint. 
You have successfully published the pipeline, and it is triggered every month to retrain the model with newly collected input data. Suddenly, the pipeline retraining run failed. After investigation, you found the problem and made changes to the training script used in Step-2. Choose the code snippet that republishes the pipeline.<\/span><\/p>\n<ol>\n<li><span style=\"font-weight: 400;\"> \u00a0 \u00a0 <\/span> <span style=\"font-weight: 400;\">from azureml.pipeline.core import Pipeline<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">pipeline = Pipeline(workspace=yourmlworkspace, steps=[Step-1, Step-2, Step-3])<\/span><\/p>\n<p><span style=\"font-weight: 400;\">pipeline.publish(name=pipeline_name, version=new_version_no)\u00a0<\/span><\/p>\n<ol>\n<li><span style=\"font-weight: 400;\"> \u00a0 \u00a0 <\/span> <span style=\"font-weight: 400;\">from azureml.pipeline.core import Pipeline<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">pipeline = Pipeline(workspace=yourmlworkspace)<\/span><\/p>\n<p><span style=\"font-weight: 400;\">pipeline_run = experiment.submit(pipeline)<\/span><\/p>\n<p><span style=\"font-weight: 400;\">pipeline.publish(name=pipeline_name, version=new_version_no)<\/span><\/p>\n<ol>\n<li><span style=\"font-weight: 400;\"> \u00a0 <\/span> <span style=\"font-weight: 400;\">from azureml.pipeline.core import Pipeline<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">pipeline = Pipeline(workspace=yourmlworkspace, steps=[Step-1, Step-2, Step-3])<\/span><\/p>\n<p><span style=\"font-weight: 400;\">pipeline_run = experiment.submit(pipeline)<\/span><\/p>\n<p><span style=\"font-weight: 400;\">pipeline.publish(name=pipeline_name, version=new_version_no)<\/span><\/p>\n<ol>\n<li><span style=\"font-weight: 400;\"> \u00a0 <\/span> <span style=\"font-weight: 400;\">from azureml.pipeline.core import Pipeline<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">pipeline = 
Pipeline(workspace=yourmlworkspace, steps=[Step-2])<\/span><\/p>\n<p><span style=\"font-weight: 400;\">pipeline_run = experiment.submit(pipeline)<\/span><\/p>\n<p><span style=\"font-weight: 400;\">pipeline.publish(name=pipeline_name, version=new_version_no)<\/span><\/p>\n<p><b>Correct Answer: C<\/b><\/p>\n<p><b>Explanation:\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Using the Python SDK for AzureML, we can interact with AzureML services. One such service is Azure Machine Learning pipelines. We can develop pipelines for data processing, training, inferencing, etc. Generally, a machine learning solution contains three components: 1) data processing, 2) training, and 3) deployment. Here the pipeline is defined with one pipeline step for each of the discussed components. We need to publish a pipeline every time a change is made to it.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Since we have made changes to the training step (Step-2), we need to create a pipeline object with all its steps -&gt; pipeline = Pipeline(workspace=yourmlworkspace, steps=[Step-1, Step-2, Step-3]). Then we need to submit the pipeline as a job using the Azure ML Experiment class to check that the pipeline functions properly -&gt; pipeline_run = experiment.submit(pipeline). 
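<\/span><\/p>
<p><span style=\"font-weight: 400;\">Condensing the calls above into one hedged sketch (v1 SDK; the experiment and pipeline names are hypothetical, and the step objects come from the scenario):<\/span><\/p>

```python
# Sketch of the republish flow (azureml v1 SDK); names are hypothetical.
def republish_pipeline(ws, steps, name="retrain-pipeline", version="2"):
    from azureml.core import Experiment
    from azureml.pipeline.core import Pipeline

    pipeline = Pipeline(workspace=ws, steps=steps)          # include ALL steps, not just Step-2
    run = Experiment(ws, "retrain-check").submit(pipeline)  # verify the fix by running it
    run.wait_for_completion()
    return pipeline.publish(name=name, version=version)     # then republish
```

<p><span style=\"font-weight: 400;\">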
If we are satisfied with the results, we then publish it -&gt; pipeline.publish(name=pipeline_name, version=new_version_no).<\/span><\/p>\n<p><b>Option A is incorrect <\/b><span style=\"font-weight: 400;\">because the pipeline is not submitted before publishing to verify the changes made.<\/span><\/p>\n<p><b>Option B is incorrect <\/b><span style=\"font-weight: 400;\">because while creating the pipeline we did not specify the pipeline steps that are part of it.<\/span><\/p>\n<p><b>Option C is correct <\/b><span style=\"font-weight: 400;\">because it shows all the steps: create the pipeline, run it, and then publish it.<\/span><\/p>\n<p><b>Option D is incorrect <\/b><span style=\"font-weight: 400;\">because only Step-2 is included, leaving out all the other steps while creating the pipeline.<\/span><\/p>\n<p><b>References:<\/b><\/p>\n<p><a href=\"https:\/\/learn.microsoft.com\/en-us\/python\/api\/azureml-pipeline-core\/azureml.pipeline.core.pipeline.pipeline?view=azure-ml-py#azureml-pipeline-core-pipeline-pipeline-publish\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Azureml.pipeline.core.pipeline.Pipeline class &#8211; Azure Machine Learning Python | Microsoft Learn<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/v1\/how-to-create-machine-learning-pipelines\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Create and run ML pipelines &#8211; Azure Machine Learning | Microsoft Learn<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Deploy_and_retrain_a_model-4\"><\/span><b>Domain: Deploy and retrain a model<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 14: <\/b><span style=\"font-weight: 400;\">An ML team is planning to deploy a model built using Azure services. 
You have been asked to identify a compute target to deploy the model for inference for various cases. Which of the following should you recommend for each requirement? Drag the appropriate service to the correct answer area.<\/span><\/p>\n<p><b>Incorrect matching:<\/b><\/p>\n<table>\n<tbody>\n<tr>\n<td><span style=\"font-weight: 400;\">Service Name<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Answer Area<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Local web service<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Use for low-scale CPU-based workloads that require less than 48 GB of RAM. Doesn&#8217;t require you to manage a cluster<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Azure Machine Learning endpoints<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Run inferencing workloads on on-premises, cloud, and edge Kubernetes clusters<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Azure Machine Learning Kubernetes<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Fully managed compute for real-time (managed online endpoints) and batch scoring (batch endpoints) on serverless compute<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Azure Container Instances<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Use for limited testing and troubleshooting. Hardware acceleration depends on the use of libraries in the local system<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><b>Correct Answer: 1-D, 2-C, 3-B and 4-A<\/b><\/p>\n<table>\n<tbody>\n<tr>\n<td><b>Service Name<\/b><\/td>\n<td><b>Answer Area<\/b><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Local web service<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Use for limited testing and troubleshooting. 
Hardware acceleration depends on the use of libraries in the local system<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Azure Machine Learning endpoints<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Fully managed compute for real-time (managed online endpoints) and batch scoring (batch endpoints) on serverless compute<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Azure Machine Learning Kubernetes<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Run inferencing workloads on on-premises, cloud, and edge Kubernetes clusters<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Azure Container Instances<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Use for low-scale CPU-based workloads that require less than 48 GB of RAM. Doesn&#8217;t require you to manage a cluster<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><b>Explanation:<\/b><\/p>\n<p><b>Local web service<\/b><\/p>\n<p><span style=\"font-weight: 400;\">This is a solution for limited testing and troubleshooting. 
Here the model is deployed on the local machine you are working on, so hardware acceleration depends on the libraries available in the local system.<\/span><\/p>\n<p><b>Azure Machine Learning endpoints<\/b><\/p>\n<p><span style=\"font-weight: 400;\">This solution is the Microsoft-recommended way to deploy models for inference, since it provides fully managed compute (managed by Azure) for real-time (managed online endpoints) and batch scoring (batch endpoints) on serverless compute.<\/span><span style=\"font-weight: 400;\">\u00a0<\/span><\/p>\n<p><b>Azure Machine Learning Kubernetes<\/b><\/p>\n<p><span style=\"font-weight: 400;\">This is the traditional option for deploying models in a production environment, where we can run inferencing workloads on on-premises, cloud, and edge Kubernetes clusters.<\/span><\/p>\n<p><b>Azure Container Instances<\/b><\/p>\n<p><span style=\"font-weight: 400;\">This solution can be used for low-scale CPU-based workloads that require less than 48 GB of RAM. It doesn&#8217;t require you to manage a cluster (Azure manages it). 
It supports real-time inference only and is recommended for dev\/test purposes.<\/span><\/p>\n<p><b>References:<\/b><\/p>\n<p><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/v1\/how-to-deploy-and-where?tabs=azcli\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Deploy machine learning models &#8211; Azure Machine Learning | Microsoft Learn<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/v1\/how-to-deploy-azure-kubernetes-service?tabs=python\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Deploy ML models to Kubernetes Service with v1 &#8211; Azure Machine Learning | Microsoft Learn<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/v1\/how-to-deploy-azure-container-instance\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">How to deploy models to Azure Container Instances with CLI (v1) &#8211; Azure Machine Learning | Microsoft Learn<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/how-to-deploy-managed-online-endpoints?tabs=azure-cli\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Deploy machine learning models to online endpoints &#8211; Azure Machine Learning | Microsoft Learn<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Deploy_and_retrain_a_model-5\"><\/span><b>Domain: Deploy and retrain a model<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 15<\/b><b>: <\/b><span style=\"font-weight: 400;\">Your colleagues developed an AzureML pipeline using PythonScriptStep, where your team wrote custom code for training and subsequent deployment using the AzureML SDK to manage AzureML resources. The deployment target is an Azure Container Instance. 
Below is the code snippet used for the deployment. Now they want to verify the deployment status. If the deployment fails, they want to know the reason for the failure.<\/span><span style=\"font-weight: 400;\">\u00a0<\/span><\/p>\n<p><b>code snippet used for deployment:<\/b><\/p>\n<p><span style=\"font-weight: 400;\">from azureml.core.webservice import AciWebservice<\/span><\/p>\n<p><span style=\"font-weight: 400;\">deployment_config=AciWebservice.deploy_configuration(cpu_cores=0.5,memory_gb=1,auth_enabled=True)<\/span><\/p>\n<p><span style=\"font-weight: 400;\">service = Model.deploy(<\/span><\/p>\n<p><span style=\"font-weight: 400;\">ws,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;YourACIServiceName&#8221;,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">[model],<\/span><\/p>\n<p><span style=\"font-weight: 400;\">inference_config,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">deployment_config,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">overwrite=True,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">)<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<\/span> <span style=\"font-weight: 400;\">service.wait_for_deployment()<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Which of the following can be utilized to achieve the goal? 
[SELECT TWO]<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Use the following print statement after you deploy -&gt;\u00a0 print(service.get_logs())<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">In AzureML Studio, use the web UI to get deployment logs<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Go to the pipeline job, and get logs from the outputs and logs section, which are printed automatically when a deployment fails<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">If deployment is not successful, re-deploy the model locally and debug the problem<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">If a deployment fails, raise a Microsoft support request to get the necessary help<\/span><span style=\"font-weight: 400;\">\u00a0<\/span><\/li>\n<\/ol>\n<p><b>Correct Answers: A and B<\/b><\/p>\n<p><b>Explanation:\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The following methods can be used to look at the status and logs of the deployment:<\/span><\/p>\n<p><b>By Using Azure ML SDK<\/b><\/p>\n<p><span style=\"font-weight: 400;\">&#8211;<\/span><span style=\"font-weight: 400;\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span><span style=\"font-weight: 400;\">use the following print statement after you deploy -&gt;\u00a0 <\/span><b>print(service.get_logs())<\/b><\/p>\n<p><span style=\"font-weight: 400;\">&#8211;<\/span><span style=\"font-weight: 400;\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span><span style=\"font-weight: 400;\">service.get_logs() prints all the logs that were recorded by AzureML SDK packages as well as logs written by your deployment scripts, so you would know exactly what went wrong with the deployment. 
These logs can be found in the output logs of the Job where the pipeline ran.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u00a0<\/span><b>Code example:-<\/b><\/p>\n<p><span style=\"font-weight: 400;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<\/span> <span style=\"font-weight: 400;\">from azureml.core.webservice import AciWebservice<\/span><\/p>\n<p><span style=\"font-weight: 400;\">deployment_config=AciWebservice.deploy_configuration(cpu_cores=0.5,memory_gb=1,auth_enabled=True)<\/span><\/p>\n<p><span style=\"font-weight: 400;\">service = Model.deploy(<\/span><\/p>\n<p><span style=\"font-weight: 400;\">ws,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;YourACIServiceName&#8221;,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">[model],<\/span><\/p>\n<p><span style=\"font-weight: 400;\">inference_config,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">deployment_config,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">overwrite=True,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">)<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<\/span> <span style=\"font-weight: 400;\">service.wait_for_deployment()<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<\/span> <span style=\"font-weight: 400;\">service.get_logs()<\/span><\/p>\n<p><b>By Using Azure ML Web Studio UI<\/b><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Under the assets section of the AzureML Web Studio &gt; <\/span><b>inside endpoints,<\/b><span style=\"font-weight: 400;\"> you can look at the <\/span><b>Deployment state<\/b><span style=\"font-weight: 400;\"> and logs of the service you have selected to <\/span><b>deploy<\/b><span style=\"font-weight: 400;\">.<\/span><\/li>\n<\/ul>\n<p><b>Option C is incorrect <\/b><span style=\"font-weight: 400;\">because No Logs will be printed about the deployment except for the exception 
message saying that the deployment has failed.<\/span><\/p>\n<p><b>Option D is incorrect<\/b><span style=\"font-weight: 400;\"> because deploying the model locally is a way to troubleshoot a failure, which is mostly caused by runtime errors. Some issues cannot be found with this method, as there are a lot of elements to a real-time service deployment, including the trained model, the runtime environment configuration, the scoring script, the container image, and the container host. We cannot exactly replicate everything by deploying locally, even though a majority of issues can be found this way. This method is used to troubleshoot the cause of an error and is not helpful for looking at the deployment status or immediate error information.<\/span><\/p>\n<p><b>Option E is incorrect <\/b><span style=\"font-weight: 400;\">because Microsoft support requests are raised when a problem is out of the scope\/capacity of the developer; they are not the right way to deal with deployment issues.<\/span><\/p>\n<p><b>Reference:<\/b><\/p>\n<p><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/v1\/how-to-deploy-and-where?tabs=python\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Deploy machine learning models &#8211; Azure Machine Learning | Microsoft Learn<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Prepare_a_model_for_deployment-2\"><\/span><b>Domain: Prepare a model for deployment<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 16: <\/b><span style=\"font-weight: 400;\">You have been given a task to quickly check the feasibility of building an image classification model and were given 100k images as input data. You have decided to quickly train multiple models and see if the performance of any model is acceptable. 
You decide to use Azure Machine Learning (AzureML) experiments to do the training and want to use MLflow Tracking to record and compare parameters and results. You do not have enough time to write code that logs\/records results for all the models, so you decide to use MLflow&#8217;s autolog() functionality. Among the following machine learning libraries, choose the library that doesn&#8217;t support autologging with MLflow.<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Theano<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Pytorch<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">TensorFlow<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Keras<\/span><\/li>\n<\/ol>\n<p><b>Correct Answer: A<\/b><\/p>\n<p><b>Explanation:<\/b><\/p>\n<p><b>Option A is correct <\/b><span style=\"font-weight: 400;\">because <\/span><span style=\"font-weight: 400;\">Theano is a Python library that <\/span><b>allows us to evaluate mathematical operations, and it doesn\u2019t support <\/b><b>autologging with MLflow.<\/b><\/p>\n<p><b>Option B is incorrect<\/b><span style=\"font-weight: 400;\"> because we can either use mlflow.autolog() or explicitly call mlflow.pytorch.autolog()<\/span><\/p>\n<p><b>Option C is incorrect <\/b><span style=\"font-weight: 400;\">\u00a0because we can either use mlflow.autolog() or explicitly call mlflow.tensorflow.autolog()<\/span><\/p>\n<p><b>Option D is incorrect<\/b><span style=\"font-weight: 400;\"> because we can either use mlflow.autolog() or explicitly call\u00a0 mlflow.keras.autolog()<\/span><\/p>\n<p><b>References:<\/b><\/p>\n<p><a href=\"https:\/\/azuremarketplace.microsoft.com\/en-us\/marketplace\/apps\/apps-4-rent.theano-on-windows2016\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Theano &#8211; 
Azure Marketplace<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/how-to-log-mlflow-models?tabs=wrapper\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Logging MLflow models &#8211; Azure Machine Learning | Microsoft Learn<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/v1\/how-to-use-mlflow?tabs=azuremlsdk\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">MLflow Tracking for models &#8211; Azure Machine Learning | Microsoft Learn<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Design_and_prepare_a_machine_learning_solution-7\"><\/span><b>Domain: Design and prepare a machine learning solution<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 17: <\/b><span style=\"font-weight: 400;\">In<\/span><a href=\"https:\/\/ml.azure.com\/\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">Azure Machine Learning studio<\/span><\/a><span style=\"font-weight: 400;\">, you can manage the compute targets for your data science activities. There are four kinds of compute resources you can create and use. What is the purpose of having four kinds of compute resources? 
Drag the appropriate service to the correct answer area.<\/span><span style=\"font-weight: 400;\">\u00a0 <\/span><span style=\"font-weight: 400;\">\u00a0<\/span><\/p>\n<p><b>\u00a0<\/b><b>Incorrect Match<\/b><\/p>\n<table>\n<tbody>\n<tr>\n<td><b>Compute Target<\/b><\/td>\n<td><b>Answer Area<\/b><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\"> Compute Instances<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Links to existing Azure compute resources, such as Virtual Machines or Azure Databricks clusters.<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\"> Compute Clusters<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Deployment targets for predictive services that use your trained models<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\"> Inference Clusters<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Scalable clusters of virtual machines for on-demand processing of experiment code.<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\"> Attached Compute<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Development workstations that data scientists can use to work with data and models.<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><b>Correct Answers:\u00a0 1-D, 2-C, 3-B and 4-A <\/b><span style=\"font-weight: 400;\">\u00a0<\/span><\/p>\n<table>\n<tbody>\n<tr>\n<td><span style=\"font-weight: 400;\"> Compute Instances<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Development workstations that data scientists can use to work with data and models<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\"> Compute Clusters<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Scalable clusters of virtual machines for on-demand processing of experiment code<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\"> Inference Clusters<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Deployment targets for predictive services that use your trained 
models<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\"> Attached Compute<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Links to existing Azure compute resources, such as Virtual Machines or Azure Databricks clusters<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><b>Explanation:<\/b><\/p>\n<p><b>Compute Instances<\/b><\/p>\n<p><span style=\"font-weight: 400;\">can be used as a<\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/concept-compute-target#training-compute-targets\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\"> training compute target<\/span><\/a><span style=\"font-weight: 400;\"> similar to Azure Machine Learning<\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/how-to-create-attach-compute-cluster\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\"> compute training clusters<\/span><\/a><span style=\"font-weight: 400;\">. But a compute instance has only a single node, while a compute cluster can have more nodes. So these are generally used as <\/span><span style=\"font-weight: 400;\">Development workstations that data scientists can use to work with data and models.<\/span><span style=\"font-weight: 400;\">\u00a0<\/span><\/p>\n<p><b>Compute Clusters<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Since Compute clusters can have multiple nodes they are\u00a0 Scalable clusters of virtual machines for the on-demand processing of experiment code. They are generally used to run all kinds of Machine learning Jobs and can also be shared with other users in your workspace.<\/span><\/p>\n<p><b>Inference Clusters<\/b><\/p>\n<p><span style=\"font-weight: 400;\">These are Deployment targets for predictive services that use your trained models. 
These are generally\u00a0 Azure Kubernetes Service (AKS) clusters.<\/span><\/p>\n<p><b>Attached Compute<\/b><\/p>\n<p><span style=\"font-weight: 400;\">These compute targets are not managed by Azure Machine Learning. You create this type of compute target outside Azure Machine Learning and then attach it to your workspace. These compute resources can require additional steps for you to maintain or to improve performance for machine learning workloads.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Azure Machine Learning supports the following compute types: Remote virtual machines, Azure HDInsight, Azure Databricks, Azure Data Lake Analytics, etc.<\/span><\/p>\n<p><b>References:<\/b><\/p>\n<p><a href=\"https:\/\/learn.microsoft.com\/en-in\/azure\/machine-learning\/concept-compute-target\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">What are compute targets &#8211; Azure Machine Learning | Microsoft Learn<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/training\/paths\/create-no-code-predictive-models-azure-machine-learning\/\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Microsoft Azure AI Fundamentals: Explore visual tools &#8211; machine learning\u00a0 | Microsoft Learn<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Explore_data_and_train_models-4\"><\/span><b>Domain: Explore data and train models<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 18: <\/b><span style=\"font-weight: 400;\">You are dealing with an NLP Task where you have to predict the genres of a movie.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Your input dataset(.txt file) looks like this;<\/span><\/p>\n<table>\n<tbody>\n<tr>\n<td><span style=\"font-weight: 400;\">text\u00a0\u00a0<\/span><\/td>\n<td><span style=\"font-weight: 400;\"> labels<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">&#8220;Avengers End 
Game&#8221;<\/span><\/td>\n<td><span style=\"font-weight: 400;\">&#8220;Action, Adventure&#8221;<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">&#8220;Exorcist&#8221;<\/span><\/td>\n<td><span style=\"font-weight: 400;\"> &#8220;Horror&#8221;<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">&#8220;RRR&#8221;<\/span><\/td>\n<td><span style=\"font-weight: 400;\">&#8220;Action, Historical Fiction&#8221;<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><span style=\"font-weight: 400;\">You have decided to use Azure AutoML capabilities. Which of the following can we use to train a natural language processing model using AutoML?<\/span><\/p>\n<ol>\n<li><span style=\"font-weight: 400;\">A) job = automl.classification(<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">compute=my_compute_name,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">experiment_name=my_exp_name,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">training_data=my_training_data_input,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">target_column_name=&#8221;labels&#8221;,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">primary_metric=&#8221;accuracy&#8221;,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">n_cross_validations=5,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">enable_model_explainability=True)<\/span><\/p>\n<ol>\n<li><span style=\"font-weight: 400;\">B) job = automl.text_classification(<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">compute=compute_name,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">experiment_name=exp_name,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">training_data=my_training_data_input,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">validation_data=my_validation_data_input,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">target_column_name=&#8221;labels&#8221;,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">primary_metric=&#8221;accuracy&#8221;)<\/span><\/p>\n<ol>\n<li><span 
style=\"font-weight: 400;\">C) job = automl.text_classification_multilabel(<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">compute=compute_name,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">experiment_name=exp_name,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">training_data=my_training_data_input,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">validation_data=my_validation_data_input,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">target_column_name=&#8221;labels&#8221;,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">primary_metric=&#8221;accuracy&#8221;)<\/span><\/p>\n<ol>\n<li><span style=\"font-weight: 400;\">D) text_ner_job = automl.text_ner(<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">compute=compute_name,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">experiment_name=exp_name,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">training_data=my_training_data_input,<\/span><\/p>\n<p><span style=\"font-weight: 400;\">validation_data=my_validation_data_input)<\/span><\/p>\n<p><b>Correct Answer: C<\/b><\/p>\n<p><b>Explanation:<\/b><\/p>\n<p><b>Option A is incorrect <\/b><span style=\"font-weight: 400;\">because automl.classification is used to solve normal classification (non-NLP) problems but not text classification problems. Here we are dealing with text data and we are trying to generate an NLP Model.<\/span><\/p>\n<p><b>Option B is incorrect<\/b><span style=\"font-weight: 400;\"> because automl.text_classification is used to solve multi-class classification problems. multi-class classification -&gt; if there are multiple possible classes and each sample (data point) can be classified as exactly one class. The task is to predict the correct class for each sample. For example, classifying a movie as &#8220;Comedy&#8221; or &#8220;Romantic&#8221;.<\/span><\/p>\n<p><b>Option C is correct <\/b><span style=\"font-weight: 400;\">\u00a0because we are dealing with a Multi-Label Classification problem. 
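<\/span><\/p>\n<p><span style=\"font-weight: 400;\">As a quick illustration (a minimal plain-Python sketch using hypothetical label strings in the style of the question&#8217;s dataset, independent of the AzureML SDK), multi-label targets can be binarized into one column per distinct label:<\/span><\/p>

```python
# Hypothetical label strings in the style of the question's input dataset.
raw_labels = ["Action, Adventure", "Horror", "Action, Historical Fiction"]

# Split each comma-separated string into a set of labels; in multi-label
# classification a sample may carry any number of classes.
label_sets = [{part.strip() for part in row.split(",")} for row in raw_labels]

# Build a binary indicator matrix with one column per distinct label.
classes = sorted(set().union(*label_sets))
indicator = [[int(c in row) for c in classes] for row in label_sets]

print(classes)    # ['Action', 'Adventure', 'Historical Fiction', 'Horror']
print(indicator)  # [[1, 1, 0, 0], [0, 0, 0, 1], [1, 0, 1, 0]]
```

<p><span style=\"font-weight: 400;\">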
If you look at the input data, each data point can have more than one label associated with it. Multi-label classification -&gt; there are multiple possible classes and each sample can be assigned any number of classes. The task is to predict all the classes for each sample. For example, classifying a movie as &#8220;Comedy&#8221;, &#8220;Romantic&#8221;, or &#8220;Comedy and Romantic&#8221;.<\/span><\/p>\n<p><b>Option D is incorrect<\/b><span style=\"font-weight: 400;\"> because automl.text_ner is used for problems related to Named Entity Recognition (NER). It involves the identification of key information in the text and its classification into a set of predefined categories. An entity is basically the thing that is consistently talked about or referred to in the text. At its core, NER is just a two-step process: detecting the entities in the text, then classifying them into different categories. The format (CoNLL format) of the input data required for this task is also different from what is provided.<\/span><\/p>\n<p><b>References:<\/b><\/p>\n<p><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/how-to-auto-train-nlp-models?tabs=python\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Set up AutoML for NLP &#8211; Azure Machine Learning | Microsoft Learn<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/github.com\/Azure\/azureml-examples\/blob\/main\/sdk\/python\/jobs\/automl-standalone-jobs\/automl-nlp-text-classification-multilabel-task-paper-categorization\/automl-nlp-text-classification-multilabel-task-paper-cat.ipynb\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">azureml-examples\/automl-nlp-text-classification-multilabel-task-paper-cat.ipynb at main \u00b7 Azure\/azureml-examples \u00b7 GitHub<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Deploy_and_retrain_a_model-6\"><\/span><b>Domain: Deploy and 
retrain a model<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 19: <\/b><span style=\"font-weight: 400;\">You are designing an event-driven workflow where you want to automatically trigger a re-training pipeline published in your AzureML workspace whenever you detect data drift in the training data. You have set up a dataset monitor to<\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/v1\/how-to-monitor-datasets\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">detect data drift<\/span><\/a><span style=\"font-weight: 400;\"> in a workspace.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u00a0Arrange the following steps in the right order to achieve the workflow.<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Subscribe to the event type &#8211;&gt; Microsoft.MachineLearningServices.DatasetDriftDetected<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Create an Azure Logic App from your Azure Machine Learning Subscription page and select MachineLearningServices.DatasetDriftDetected event(s) to be notified for<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Use Http Step in Azure Logic Apps to trigger the Retraining pipeline<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Go to your Azure Machine Learning Subscription page, and select Create Event Subscription<\/span><\/li>\n<\/ol>\n<p><b>Correct Answer: D, A, B and C<\/b><\/p>\n<p><b>Explanation:<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Microsoft&#8217;s Azure cloud platform provides a service called Azure Event Grid, which lets us create workflows based on events: Event Grid monitors events and triggers workflows we design. 
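<\/span><\/p>\n<p><span style=\"font-weight: 400;\">For context on the HTTP step in option C, the sketch below shows in plain Python, with hypothetical experiment and parameter names, the JSON body that a published AzureML pipeline endpoint expects in such a POST request; in a real workflow the Logic App would also attach an Azure Active Directory bearer token in the Authorization header:<\/span><\/p>

```python
import json

# Hypothetical names: a real workflow takes the endpoint URL from the
# published pipeline object and an auth token from Azure Active Directory.
payload = {
    "ExperimentName": "retrain-on-drift",            # run appears under this experiment
    "ParameterAssignments": {"model_name": "demo"},  # optional pipeline parameter overrides
}

# The Logic App HTTP step POSTs this body to the pipeline endpoint.
body = json.dumps(payload)
print(body)
```

<p><span style=\"font-weight: 400;\">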
Azure also provides a service called Azure Logic Apps, which lets us create and orchestrate workflows that are generally triggered by Event Grid.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">One of the ways to create a retraining workflow based on data drift is by using Azure Event Grid and Azure Logic Apps. The first step is to set up an Event Grid subscription so that Event Grid receives data drift events from our ML workspace. Next, we create a Logic App that listens for data drift events coming from our ML workspace, and we use an HTTP step inside the Logic App to send a POST request to the pipeline endpoint of the re-training pipeline (all published pipelines have endpoints that we can use to trigger them).<\/span><\/p>\n<p><b>So the order is -&gt;<\/b><\/p>\n<ol>\n<li><b>D) Go to your Azure Machine Learning Subscription page,\u00a0 select Create Event Subscription<\/b><\/li>\n<li><b>A) Subscribe to the event type \u2192 Microsoft.MachineLearningServices.DatasetDriftDetected<\/b><\/li>\n<li><b>B) Create an Azure Logic App from your Azure Machine Learning Subscription page and select MachineLearningServices.DatasetDriftDetected event(s) to be notified for<\/b><\/li>\n<li><b>C) Use Http Step in Azure Logic Apps to trigger the Retraining pipeline<\/b><\/li>\n<\/ol>\n<p><b>References:<\/b><\/p>\n<p><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/how-to-use-event-grid\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Trigger events in ML workflows (preview) &#8211; Azure Machine Learning | Microsoft Learn<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/v1\/how-to-trigger-published-pipeline\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Trigger Azure Machine Learning pipelines &#8211; Azure 
Machine Learning | Microsoft Learn<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Domain_Deploy_and_retrain_a_model-7\"><\/span><b>Domain: Deploy and retrain a model<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><b>Question 20: <\/b><span style=\"font-weight: 400;\">In which of the following scenarios should you choose Real-time endpoints over Batch endpoints?<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">When you have expensive models that require a longer time to run inference<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">When you need to perform inference over large amounts of data, distributed in multiple files<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">When your prediction calls to the deployment model are not frequent<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">When you have low latency requirements<\/span><\/li>\n<\/ol>\n<p><b>Correct Answer: D<\/b><\/p>\n<p><b>Explanation:\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400;\">After you train a machine learning model, you need to deploy the model so that others can use it to do inferencing. In Azure Machine Learning, you can use endpoints and deployments to do so. An endpoint is an HTTPS endpoint that clients can call to receive the inferencing (scoring) output of a trained model. There are two kinds of endpoints 1) online and 2) Batch<\/span><\/p>\n<p><b>Option A is incorrect <\/b><span style=\"font-weight: 400;\">because<\/span> <span style=\"font-weight: 400;\">online endpoints are designed for faster response times and when we know that the prediction takes longer time, we should go for the Batch endpoint.<\/span><\/p>\n<p><b>Option B is incorrect <\/b><span style=\"font-weight: 400;\">because It takes time to process large volumes of data. 
A batch endpoint suits this scenario, as it is not only cheaper but also more efficient because we can use parallelization.<\/span><\/p>\n<p><b>Option C is incorrect <\/b><span style=\"font-weight: 400;\">because the endpoint would sit unused for most of the time, which results in higher cost and wasted resources. Here we should use a batch endpoint.<\/span><\/p>\n<p><b>Option D is correct <\/b><span style=\"font-weight: 400;\">because online endpoints are endpoints that are used for online (real-time) inferencing. Compared to batch endpoints, online endpoints contain deployments that are ready to receive data from clients and can send responses back in real time. So online endpoints are significantly faster, translating to lower latency.<\/span><\/p>\n<p><b>References:<\/b><\/p>\n<p><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/how-to-use-batch-endpoint?tabs=azure-cli\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Use batch endpoints for batch scoring &#8211; Azure Machine Learning | Microsoft Learn<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/concept-endpoints#what-are-online-endpoints\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">What are endpoints? &#8211; Azure Machine Learning | Microsoft Learn<\/span><\/a><\/p>\n<h3><span class=\"ez-toc-section\" id=\"Summary\"><\/span>Summary<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>To prepare for the DP-100 exam, you can start by reviewing the exam objectives and Azure documentation related to data science solutions. You can also use practice exams and sample questions to test your knowledge and identify areas that require further study.<\/p>\n<p>In addition, it&#8217;s essential to have hands-on experience with Azure services related to data science. 
Whizlabs offers <a href=\"https:\/\/www.whizlabs.com\/labs\/library\" target=\"_blank\" rel=\"noopener\">hands-on labs<\/a> and <a href=\"https:\/\/www.whizlabs.com\/labs\/sandbox\" target=\"_blank\" rel=\"noopener\">sandboxes<\/a> to help you get familiar with real-world environments.<\/p>\n<p>To pass the exam, practice consistently and get certified.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In today&#8217;s world, a vast amount of raw data is generated every day across almost every IT industry, so there is a need for a dedicated team to evaluate and plot this data, draw inferences, and apply Machine Learning algorithms to make predictions.\u00a0 Hence there is a huge demand for Data Scientists and a wide skills gap. The Microsoft Azure Data Scientist DP-100 Certification assesses an individual&#8217;s knowledge of data science and machine learning and the ability to deploy and run machine learning workloads on Microsoft Azure using the Azure Machine Learning service. 
If you are preparing [&hellip;]<\/p>\n","protected":false},"author":223,"featured_media":88758,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_uag_custom_page_level_css":"","site-sidebar-layout":"default","site-content-layout":"default","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"default","adv-header-id-meta":"","stick-header-meta":"default","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[10],"tags":[],"class_list":["post-88726","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-cloud-computing-certifications"],"uagb_featured_image_src":{"full":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-Free.webp",1280,720,false],"thumbnail":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-Free-150x150.webp",150,150,true],"medium":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-Free-300x169.webp",300,169,true],"medium_large":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-Free-768x432.webp",768,432,true],"large":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-Free-1024x576.webp",1024,576,true],"1536x1536":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-Free.webp",1280,720,false],"2048x2048":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-Free.webp",1280,720,false],"profile_24":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-Free.webp",24,14,false],"profile_48":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-Free.webp",48,27,false],"profile_96":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-Free.webp",96,54,false],"profile_150":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-Free.webp",150,84,false],"profile_300":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-Free.webp",300,169,false],"tptn_thumbnail":["https:\/\/www.whi
zlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-Free-250x250.webp",250,250,true],"web-stories-poster-portrait":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-Free-640x720.webp",640,720,true],"web-stories-publisher-logo":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-Free-96x96.webp",96,96,true],"web-stories-thumbnail":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2023\/05\/DP-100-exam-questions-Free-150x84.webp",150,84,true]},"uagb_author_info":{"display_name":"Dharmendra Digari","author_link":"https:\/\/www.whizlabs.com\/blog\/author\/dharmendrawhizlabs-com\/"},"uagb_comment_info":2,"uagb_excerpt":"In today&#8217;s world, a vast amount of raw data is generated every day across almost every IT industry, so there is a need for a dedicated team to evaluate and plot this data, draw inferences, and apply Machine Learning algorithms to make predictions.\u00a0 Hence there is a huge 
demand&hellip;","_links":{"self":[{"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/posts\/88726","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/users\/223"}],"replies":[{"embeddable":true,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/comments?post=88726"}],"version-history":[{"count":9,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/posts\/88726\/revisions"}],"predecessor-version":[{"id":88766,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/posts\/88726\/revisions\/88766"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/media\/88758"}],"wp:attachment":[{"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/media?parent=88726"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/categories?post=88726"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/tags?post=88726"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}