AI-102 practice exam

Free Questions for the Microsoft Azure AI Solution (AI-102) Certification Exam

Are you looking for free AI-102 exam questions and answers to prepare for the Designing and Implementing a Microsoft Azure AI Solution certification exam?

Here are our newly updated 25+ free questions on the Microsoft Azure AI Solution certification exam, which closely resemble both the practice test and the real exam.

Why do we provide Microsoft Azure AI Solution certification AI-102 exam questions for free?

We provide the Microsoft Azure AI Solution certification AI-102 exam questions for free because we want to help people learn about artificial intelligence and how it can be used to solve real-world problems. Azure AI is a powerful tool that can help businesses automate tasks, improve efficiency, and make better decisions. With the right training, businesses can harness the power of Azure AI to transform their operations.

AI-102 exam

Also, note that the exam syllabus covers questions from the following domains:

  • Plan and manage an Azure AI solution (15–20%)
  • Implement decision support solutions (10–15%)
  • Implement computer vision solutions (15–20%)
  • Implement natural language processing solutions (30–35%)
  • Implement knowledge mining and document intelligence solutions (10–15%)
  • Implement generative AI solutions (10–15%)

Let’s get started!

Domain : Implement computer vision solutions

Question 1 : The Custom Vision Service offers the capability to export classifiers so that you can use them locally in your application for real-time classification. Since the Custom Vision Service only exports compact domains, you plan to convert a standard domain to a compact domain to run the classifier locally for real-time classification.

Review the scenario given above and select the answer choice where the sequence of steps to convert a standard domain to a compact domain is correctly maintained:

A. Select & save the new domain -> Retrain the model -> Select the project -> Export the model 
B. Select the project -> Select & save the new domain -> Export the model -> Retrain the model
C. Select & save the new domain -> Select the project -> Export the model -> Retrain the model
D. Select the project -> Select & save the new domain -> Retrain the model ->  Export the model

Correct Answer: D

Explanation 

Option A is incorrect because you would need to select the project first and change it to the compact domain. The next step would be to retrain the model and export it in the appropriate format. 
Option B is incorrect because you would need to retrain the model before exporting it in the appropriate format.
Option C is incorrect because you would need to select the project first and change it to the compact domain. The next step would be to retrain the model and export it in the appropriate format.
Option D is correct because following this sequence, you can export the model to execute locally for real-time classification.
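
For reference, here is a minimal C# sketch of the option D sequence using the Custom Vision training SDK. It is only an illustration of the flow, not part of the question: the key, endpoint, project ID and the "General (compact)" domain name are placeholder assumptions, and waiting for training to complete is omitted.

using System;
using System.Linq;
using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training;
using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training.Models;

var trainingClient = new CustomVisionTrainingClient(new ApiKeyServiceClientCredentials("<training-key>"))
{
    Endpoint = "<endpoint>"
};
Guid projectId = Guid.Parse("<project-id>");

// 1. Select the project.
Project project = trainingClient.GetProject(projectId);

// 2. Select & save the new (compact) domain.
Domain compactDomain = trainingClient.GetDomains().First(d => d.Name == "General (compact)");
project.Settings.DomainId = compactDomain.Id;
trainingClient.UpdateProject(projectId, project);

// 3. Retrain the model so an iteration exists on the compact domain.
Iteration iteration = trainingClient.TrainProject(projectId);

// 4. Export the retrained iteration (for example as ONNX) to run locally in real time.
Export export = trainingClient.ExportIteration(projectId, iteration.Id, "ONNX");
Console.WriteLine(export.DownloadUri);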

Reference: To learn more about Custom Vision Service export options, use the link given below: https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/export-your-model

Domain : Implement knowledge mining and document intelligence solutions

Question 2 : You granted Reader access to a group of users to allow them to perform search service operations, such as index management and querying search data. However, users provided feedback that they are unable to perform the intended functions.
Which action will you perform to address the concern?

A. Use API keys to grant access for content operations on the service
B. Grant Contributor Role to the group of users through the IAM page of resource group in Azure Portal
C. Use a Service Principal to grant access for content operations on the service
D. Grant Owner Role to the group of users through the IAM page of resource group in Azure Portal

Correct Answer: A

Explanation 

Option A is correct because the API key is the sole mechanism for authenticating inbound requests to your search service endpoint and is required on every request.
Option B is incorrect because Contributor Role provides access to create or delete the service. However, it does not grant access rights to the service endpoint. Search service operations are controlled through API keys.
Option C is incorrect because the service principal can gain access to portal resources through RBAC. Search service operations are controlled through API keys.
Option D is incorrect because the Owner Role provides access to create or delete the service. However, it does not grant access rights to the service endpoint. Search service operations are controlled through API keys.
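
For reference, here is a minimal C# sketch of what option A looks like in practice: content operations are authorized by passing the key in the api-key header. The service name, index name and key below are placeholder assumptions, not values from the question.

using System;
using System.Net.Http;
using System.Threading.Tasks;

var http = new HttpClient();
// Query and index operations on Azure Cognitive Search are authorized with the api-key header,
// not with the Reader role assigned in the portal.
http.DefaultRequestHeaders.Add("api-key", "<query-or-admin-key>");

string url = "https://<search-service>.search.windows.net/indexes/<index-name>/docs" +
             "?api-version=2020-06-30&search=hotel";
Console.WriteLine(await http.GetStringAsync(url));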

Reference: To learn more about authorizing access through Azure roles in Azure Cognitive Search, use the link given below: https://docs.microsoft.com/en-us/azure/search/search-security-api-keys

Domain : Plan and manage an Azure Cognitive Services solution

Question 3: You may use role-based access control to grant access to a Custom Vision cognitive services resource. Review the table below and map the Cognitive Services role with the level of permission associated with that role.

Role (R1) | Permission (P1)
Cognitive Services Custom Vision Reader (R11) | Ability to view projects and export/publish the trained models (P11)
Cognitive Services Custom Vision Deployment (R12) | Ability to view projects, delete training images and add tags (P12)
Cognitive Services Custom Vision Trainer (R13) | Ability to edit projects and train models but cannot create/delete projects (P13)
Cognitive Services Custom Vision Labeler (R14) | Ability to view projects but cannot make any changes to projects or models (P14)

 

A. R11 -> P14; R12 -> P12 ; R13 -> P13; R14 -> P11 
B. R11 -> P11; R12 -> P13; R13 -> P12; R14 -> P14
C. R11 -> P14; R12 -> P11; R13 -> P13; R14 -> P12
D. R11 -> P14; R12 -> P13; R13 -> P12; R14 -> P11

Correct Answer: C

Explanation 

Here is the correct mapping for the role and the permissions associated with it.

Role (R1) | Permission (P1)
Cognitive Services Custom Vision Reader (R11) | Ability to view projects but cannot make any changes to projects or models (P14)
Cognitive Services Custom Vision Deployment (R12) | Ability to view projects and export/publish the trained models (P11)
Cognitive Services Custom Vision Trainer (R13) | Ability to edit projects and train models but cannot create/delete projects (P13)
Cognitive Services Custom Vision Labeler (R14) | Ability to view projects, delete training images and add tags (P12)

 

Option A is incorrect because Cognitive Services Custom Vision Reader can view projects but cannot make any changes.
Option B is incorrect because using role Cognitive Services Custom Vision Deployment, models can be published or exported. Deployment resources can also view the project. However, they cannot make changes to the projects.
Option C is correct because Cognitive Services Custom Vision Trainer has the ability to make changes to projects. However, they cannot create or delete a project. The trainer role is suited for activities like training, publishing and exporting models.
Option D is incorrect because Cognitive Services Custom Vision Labeler has the ability to upload, edit, or delete training images. Labelers can view projects and make changes to tags/images, but they cannot update anything else in the projects.

Reference: To learn more about Custom Vision Cognitive Services RBAC access, use the link given below: https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/role-based-access-control

Domain : Implement computer vision solutions

Question 4 : To extract text from text-heavy images in a set of documents, you plan to use the Read API of Computer Vision cognitive service in Azure. Review the statements given below and choose the ones that are true regarding the Read API. (select three options)

A. Get Read Result operation fetches text from documents in a single step
B. The Read API supports around 23 languages for printed text
C. Supported file formats by the Read API are PDF, BMP, JPEG, PNG and TIFF
D. Up to 2000 pages can be processed for PDF file format while using the Read API
E. The Read API can only support dimensions up to 10000×10000 pixels

Correct Answers: C, D and E

Explanation 

Option A is incorrect. The Read operation is asynchronous and is a two-step process, as the sketch below illustrates. In the first step, the Operation-Location is returned in the response. In the second step, the Get Read Result operation fetches the text through the JSON response. 
Option B is incorrect. The Read API supports more than 70 languages for printed text. Refer to the supported languages list for reference.
Option C is correct. The Read API supports the PDF, BMP, JPEG, PNG and TIFF file formats.
Option D is correct. Up to 2000 pages can be processed for the PDF file format while using the Read API, provided you are not using the free tier.
Option E is correct. The Read API can support a minimum dimension of 50×50 pixels and a maximum of 10000×10000 pixels.
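
To make the two-step flow concrete, here is a minimal C# sketch using the Computer Vision SDK; the endpoint, key and document URL are placeholder assumptions.

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

var client = new ComputerVisionClient(new ApiKeyServiceClientCredentials("<key>"))
{
    Endpoint = "<endpoint>"
};

// Step 1: submit the asynchronous Read operation; the Operation-Location header comes back in the response.
var readHeaders = await client.ReadAsync("https://example.com/scanned-page.pdf");
string operationId = readHeaders.OperationLocation.Split('/').Last();

// Step 2: poll Get Read Result until the operation completes, then read the extracted text from the JSON result.
ReadOperationResult result;
do
{
    await Task.Delay(1000);
    result = await client.GetReadResultAsync(Guid.Parse(operationId));
} while (result.Status == OperationStatusCodes.Running || result.Status == OperationStatusCodes.NotStarted);

foreach (var page in result.AnalyzeResult.ReadResults)
    foreach (var line in page.Lines)
        Console.WriteLine(line.Text);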

Reference: To learn more about the OCR and Read API requirements, use the link given below: https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview-ocr

Domain : Implement natural language processing solutions

Question 5 : You are tasked to use the Language Detection feature of the Azure Cognitive Language Service. You provide the JSON inputs and receive two outputs, as given in the code snippets below. The first output (Output 1) provides a confidence score of 1, whereas the second output (Output 2) returns a confidence score of 0.
Review the scenario given above and complete the following code snippets. (Choose two options)

Output 1:

   "documents":[
        {
            "detectedLanguage":{
                "confidenceScore":1.0,
                "iso6391Name":"fr",
                "name":"……………………………….."
            },
            "id":"1",
            "warnings":[
            ]
        }
]
Output 2:
   "documents":[
        {
            "detectedLanguage":{
                "confidenceScore":0.0,
                "iso6391Name":"(Unknown)",
                "name":"…………………………….."
            },
            "id":"1",
            "warnings":[
            ]
        }
]

A. Fr
B. French
C. countryHint
D. (Unknown)

Correct Answers: B and D

Explanation 

Option A is incorrect because "fr" is the ISO 639-1 code for the French language. The name of the language is French.
Option B is correct. Since the confidence score for ISO code "fr" is 1, the French language is returned as the name. Here is the completed code for Output 1.
    "documents":[
        {
            "detectedLanguage":{
                "confidenceScore":1.0,
                "iso6391Name":"fr",
                "name":"French"
            },
            "id":"1",
            "warnings":[ ]
        }
]

Option C is incorrect because countryHint provides the hint for a region or country in case of an ambiguity in the language or if there are mixed languages in a sentence.
Option D is correct. Since the confidence score is 0, language is unknown. Here is the completed code for Output 2.
    "documents":[
        {
            "detectedLanguage":{
                "confidenceScore":0.0,
                "iso6391Name":"(Unknown)",
                "name":"(Unknown)"
            },
            "id":"1",
            "warnings":[ ]
        }
]
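
For reference, the JSON above is what the service returns over REST; a minimal C# sketch of calling language detection with the Azure.AI.TextAnalytics client (endpoint and key are placeholders) would look like this:

using System;
using Azure;
using Azure.AI.TextAnalytics;

var client = new TextAnalyticsClient(new Uri("<endpoint>"), new AzureKeyCredential("<key>"));

DetectedLanguage language = client.DetectLanguage("Bonjour tout le monde");
// A confidently detected language returns Name "French", Iso6391Name "fr" and a score near 1.0;
// an undetectable input returns "(Unknown)" with a confidence score of 0.
Console.WriteLine($"{language.Name} ({language.Iso6391Name}) score {language.ConfidenceScore}");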

Reference: To learn more about language detection, use the link given below: https://docs.microsoft.com/en-us/azure/cognitive-services/language-service/language-detection/overview

Domain : Implement natural language processing solutions

Question 6 : Language Understanding (LUIS) authoring authentication has changed. Now it uses Azure resources instead of an email account. With this change, you would need to migrate to an authoring resource as well. Review the statements regarding this migration process and select two statements that are true.

A. If you own the application, it will automatically migrate with you
B. Application owners get the choice to migrate a subset of the application
C. Collaborators are automatically added to the Azure authoring resource
D. If you are a collaborator, the application will automatically migrate with you
E. Before migration, coauthors are called collaborators; after migration, they are called contributors

Correct Answers: A and E

Explanation 

Option A is correct. If you own the application, it will automatically migrate with you.
Option B is incorrect. Owners do not get a choice to migrate part of the application.
Option C is incorrect. An application owner adds collaborators as contributors using IAM. Collaborators are not automatically added to the authoring resource.
Option D is incorrect. If you are a collaborator, you will be prompted to migrate the application. However, it will not migrate automatically with you.
Option E is correct. Before migration, coauthors are called collaborators and after migration, they are called contributors.

Reference: To learn more about migrating to an authoring resource, use the links given below: https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-migration-authoring

Domain : Implement natural language processing solutions

Question 7 : Review the sentiment conditions in the matrix given below and choose the correct document that meets both conditions.

Sentiment Condition 1 | Sentiment Condition 2 | Returned Document Level
At least one sentence is negative | At least one sentence is positive | DocLevel1
At least one sentence is negative | Rest of the sentences are neutral | DocLevel2
At least one sentence is positive | Rest of the sentences are neutral | DocLevel3

 

A. DocLevel1 -> Neutral; DocLevel2 -> Negative; DocLevel3 -> Positive
B. DocLevel1 -> Neutral; DocLevel2 -> Neutral; DocLevel3 -> Neutral
C. DocLevel1 -> Mixed; DocLevel2 -> Negative; DocLevel3 -> Positive
D. DocLevel1 -> Mixed; DocLevel2 -> Neutral; DocLevel3 -> Neutral

Correct Answer: C

Explanation 

Option A is incorrect because DocLevel1 would be Mixed, since one of the sentence sentiments is positive and one of the sentence sentiments is negative, as the sketch below illustrates.
Option B is incorrect because DocLevel1 would be Mixed, since one of the sentence sentiments is positive and one of the sentence sentiments is negative. The DocLevel2 sentiment would be Negative since one of the sentence sentiments is negative while the other sentences returned a neutral sentiment. The DocLevel3 sentiment would be Positive since one of the sentence sentiments is positive while the other sentences returned a neutral sentiment.
Option C is correct because the document level for DocLevel1, DocLevel2 and DocLevel3 is mapped correctly.
Option D is incorrect because the DocLevel2 sentiment would be Negative since one of the sentence sentiments is negative while the other sentences returned a neutral sentiment. The DocLevel3 sentiment would be Positive since one of the sentence sentiments is positive while the other sentences returned a neutral sentiment.
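
As a quick illustration of how a Mixed document-level label (DocLevel1) arises, here is a minimal C# sketch with the Azure.AI.TextAnalytics client; the endpoint, key and sample text are placeholder assumptions.

using System;
using Azure;
using Azure.AI.TextAnalytics;

var client = new TextAnalyticsClient(new Uri("<endpoint>"), new AzureKeyCredential("<key>"));

// One positive sentence and one negative sentence, so the document sentiment comes back Mixed (DocLevel1).
DocumentSentiment doc = client.AnalyzeSentiment("The room was lovely. The service was terrible.");
Console.WriteLine($"Document sentiment: {doc.Sentiment}");
foreach (SentenceSentiment sentence in doc.Sentences)
    Console.WriteLine($"  \"{sentence.Text}\" -> {sentence.Sentiment}");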

Reference: To learn more about sentiment analysis, use the link given below: https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-sentiment-analysis?tabs=version-3-1#view-the-results

Domain : Implement generative AI solutions

Question 8 : In your chat application, you decide to use multimedia attachments to enhance user interaction. In the first interaction with the user, your bot uses a basic card with a single large image, one button and a few lines of plain text. The image displays a product or category, the button offers a link to your website, and the text displays the title text for the product or category. In the second interaction, your bot uses a combination of text, speech, images and input fields. It renders a JSON object response into your chat application for an engaging experience.
Given the scenario above, which card would you choose to add to your bot for the first interaction (Interaction1) and the second interaction (Interaction2)?

A. Interaction1 -> Hero Card; Interaction2 -> Adaptive Card
B. Interaction1  -> Thumbnail Card; Interaction2 -> Adaptive Card
C. Interaction1 -> Adaptive Card; Interaction2  -> Thumbnail Card
D. Interaction1 -> Adaptive Card; Interaction2 -> Hero Card

Correct Answer: A

Explanation 

Option A is correct because a large image, a button, and some text are displayed through the multimedia attachment in the first interaction. The Hero card is suited for that purpose. A JSON object provides a flexible and rich experience to the user through an Adaptive card in the second interaction.
Option B is incorrect because the image used in the first interaction is large. However, the Thumbnail card uses a thumbnail image.
Option C is incorrect because Interaction 1 is suited to use the Hero Card, and Interaction 2 is suited to use an Adaptive card.
Option D is incorrect because Interaction 1 is suited to use the Hero Card, and Interaction 2 is suited to use an Adaptive card.
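
For reference, here is a minimal C# sketch of building the Interaction1 Hero card as an attachment with the Bot Framework SDK; the product name, image URL and website URL are illustrative assumptions.

using System.Collections.Generic;
using Microsoft.Bot.Schema;

var heroCard = new HeroCard
{
    Title = "Contoso Trail Backpack",                                                     // title text for the product
    Images = new List<CardImage> { new CardImage("https://example.com/backpack.jpg") },   // single large image
    Buttons = new List<CardAction>
    {
        new CardAction(ActionTypes.OpenUrl, "View on our website", value: "https://example.com/products/backpack")
    }
};

Attachment attachment = heroCard.ToAttachment();
// The attachment can then be added to an Activity's Attachments collection and sent to the user.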

Reference: To learn more about responding with the card in a bot framework, use the link given below: https://docs.microsoft.com/en-us/composer/how-to-send-cards?tabs=v2x

Domain : Implement natural language processing solutions

Question 9 : Review the JSON code snippet given below and complete it by selecting the entity type and subtype for a prebuilt entity in a Language Understanding (LUIS) application:
 "documents": [{
    "id": "1",
    "entities": [{
      "name": "last week",
      "matches": [{
        "entityTypeScore": 0.8,
        "text": "last week",
        "offset": 34,
        "length": 9
      }],
      "type": "……………………..",
      "subType": "…………………….."
    }]
  }],

A. DateTime
B. DateRange
C. Event
D. Location
E. Duration

Correct Answers: A and B

Explanation 

Here is the completed code for reference:

 "documents": [{
    "id": "1",
    "entities": [{
      "name": "last week",
      "matches": [{
        "entityTypeScore": 0.8,
        "text": "last week",
        "offset": 34,
        "length": 9
      }],
      "type": "DateTime",
      "subType": "DateRange"
    }]
  }],

Option A is correct. Named entity recognition provides the ability to recognize and identify items in the text. In this code snippet, the prebuilt entity type DateTime is used for categorization.
Option B is correct. While not all entity types have subtypes, the DateTime entity type offers the DateRange subtype, among other subtypes, which is correct for this example.
Option C is incorrect because the Event entity type is used for a historical, naturally occurring or social event.
Option D is incorrect because the Location entity type is used for a location such as a landmark or a geographical feature.
Option E is incorrect because the Duration subtype is measured in units such as seconds.

Reference: To learn more about DateTime prebuilt entity type in Language Understanding (LUIS) applications, use the link given below: https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-reference-prebuilt-datetimev2?tabs=1-1%2C2-1%2C3-1%2C4-1%2C5-2%2C6-1

Domain : Plan and manage an Azure AI solution

Question 10 : A start-up working on building an app to design memes wants to:
1- Scan text content from images.
2- Extract text from images.
Which API should be used?

A. Text Moderation API
B. Content Moderator Review tool
C. Custom term API
D. Image Moderation API

Correct Answer: D

Explanation 

Option A is INCORRECT because Text Moderation API cannot scan images for text.
Option B is INCORRECT. The Content Moderator Review tool provides services that combine human review with machine learning content moderation. It is not an API.
Option C is INCORRECT. The Custom term API helps create custom term lists to be used with the Text Moderation API.
Option D is CORRECT. Image Moderation API can perform both the listed activities.

References: https://docs.microsoft.com/en-us/azure/cognitive-services/content-moderator/text-moderation-api, https://docs.microsoft.com/en-us/azure/cognitive-services/content-moderator/review-tool-user-guide/human-in-the-loop, https://docs.microsoft.com/en-us/azure/cognitive-services/content-moderator/try-terms-list-api, https://docs.microsoft.com/en-us/azure/cognitive-services/content-moderator/image-moderation-api

Domain : Plan and manage an Azure Cognitive Services solution

Question 11 : Which of the Cognitive Service APIs would help in:
–  Scanning text to identify personal data
–  Using a custom list in-line with content policies to block or allow content. 

A. Computer Vision
B. Language service
C. Text Protector
D. Content Moderator

Correct Answer: D

Explanation 

Option A is INCORRECT. Computer Vision service provides capabilities for image processing and capturing required associated information.
Option B is INCORRECT. Azure Cognitive Service for Language performs sentiment analysis, language detection, and key phrase extraction over text by NLP.
Option C is INCORRECT. Text Protector is an invalid service.
Option D is CORRECT. The Moderation API is included in the Content Moderator service and helps check content for material that is potentially inappropriate or objectionable. There are various types of moderation APIs, including text moderation, custom terms list, image moderation, custom image list, and video moderation.

References: https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/, https://docs.microsoft.com/en-us/azure/cognitive-services/language-service/, https://docs.microsoft.com/en-us/azure/cognitive-services/content-moderator/overview

Domain : Implement computer vision solutions

Question 12 : You have a requirement in your application to group similar-looking faces. Assume that you have initialized the variables, authorized the API call, created the PersonGroup and created the persons for the PersonGroup. Review the code snippet given below for adding faces to the persons and complete it by choosing the most appropriate answer choice:

Parallel.For(0, PersonCount, async i =>
{
    Guid personId = persons[i].PersonId;
    string personImageDir = @"/path/to/person/i/images";
    foreach (string imagePath in Directory.GetFiles(personImageDir, "*.jpg"))
    {
        await WaitCallLimitPerSecondAsync();
        using (Stream stream = File.OpenRead(imagePath))
        {
            await faceClient………………………………………………………….(personGroupId, personId, stream);
        }
    }
});

A. PersonGroupPerson.AddFaceFromUrlAsync
B. PersonGroupPerson.CreateAsync
C. PersonGroupPerson.UpdateFaceAsync
D. PersonGroupPerson.AddFaceFromStreamAsync

Correct Answer: D

Explanation 

Option A is incorrect because in order to use the AddFaceFromUrlAsync method, you would need to provide image URLs, such as string imageUrl = "https://<path to jpg file>";
Option B is incorrect because the CreateAsync method is used for creating the person. Here is a quick example: persons[i] = await faceClient.PersonGroupPerson.CreateAsync(personGroupId, personName)
Option C is incorrect because the UpdateFaceAsync method is used to update a person's persisted face data. The requirement here is to add new faces instead.
Option D is correct because the requirement here is to add faces to the PersonGroup using a stream input. Here is the completed C# code:

Parallel.For(0, PersonCount, async i =>
{
    Guid personId = persons[i].PersonId;
    string personImageDir = @"/path/to/person/i/images";
    foreach (string imagePath in Directory.GetFiles(personImageDir, "*.jpg"))
    {
        await WaitCallLimitPerSecondAsync();
        using (Stream stream = File.OpenRead(imagePath))
        {
            await faceClient.PersonGroupPerson.AddFaceFromStreamAsync(personGroupId, personId, stream);
        }
    }
});

Reference: To learn more about adding faces to a group, use the link given below: https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/how-to-add-faces

Domain : Implement computer vision solutions

Question 13 : A developer is working on building a "Deep Search AI solution" for a video library company. The solution requires extracting insights from video to improve the user's video search experience.
The solution also requires enabling users to "create content for social media" based on the insights from their videos.
Which Azure cognitive service should the developer use?

A. Face API
B. Computer vision
C. Video Indexer
D. Bing Video Search

Correct Answer: C

Explanation 

Option A is INCORRECT. The Face API facilitates searching, identifying, and matching faces in a private repository. These actions can be done for up to 1 million people.
Option B is INCORRECT. Computer Vision helps with real-time video analysis and automated text extraction, but it will not be useful for the given scenario.
Option C is CORRECT. Video indexer can be used to extract insights from videos and these insights could be used for:
–  Deep Search
–  Content Creation
–  Accessibility
–  Monetization
–  Recommendations
–  Content Moderations
Option D is INCORRECT. Bing Video Search provides functionality to search video across the web and cannot help in building an AI solution as detailed in the scenario.

References: https://docs.microsoft.com/en-us/azure/media-services/video-indexer/video-indexer-overview, https://azure.microsoft.com/en-us/services/cognitive-services/face/, https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision/, https://azure.microsoft.com/en-in/services/cognitive-services/bing-video-search-api/

Domain : Implement natural language processing solutions

Question 14 : You have created example utterances and intents for your Language Understanding (LUIS) application. You train your application and, while you review the dashboard, you identify certain issues. Your top intent and next intent scores are close enough that they may flip with the next training. To address the issue, you decide to delete a number of utterances amongst different intents. This changes the quantity of example utterances significantly. With this change, you review the dashboard analysis again. What kind of issue would you expect on the dashboard with the new change?

A. Incorrect predictions
B. Unclear predictions
C. Data imbalance
D. None of the above

Correct Answer: C

Explanation 

Option A is incorrect because incorrect predictions occur when example utterance is not predicted for the labeled intent. Instead, it is predicted for a different intent. To remediate this issue, you would need to edit the utterances to be more specific and train the model again.
Option B is incorrect because unclear predictions occur when top intent and next intent scores are close enough to flip the results with the next model training. To remediate this issue, you would need to combine the intents or edit the utterances and train the model again.
Option C is correct because data imbalance occurs when the quantity of example utterances varies significantly. To remediate this issue, you would need to add more utterances to the intent and train the model again.

Reference: To learn more about issues that can be fixed using dashboard analysis, use the link given below: https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-how-to-use-dashboard

Domain : Plan and manage an Azure Cognitive Services solution

Question 15 : Which of the below cognitive services is NOT a Language cognitive service offering from Azure?

A. Immersive reader
B. Language Understanding
C. QnA Maker
D. Azure Cognitive Service for Language
E. Speech Service
F. Translator

Correct Answer:  E

Explanation 

Option A is INCORRECT. Immersive Reader is a language cognitive service.
Option B is INCORRECT. Language Understanding is a language cognitive service.
Option C is INCORRECT. QnA Maker is a language cognitive service.
Option D is INCORRECT. Azure Cognitive Service for Language is a language cognitive service.
Option E is CORRECT. Speech Service is NOT a language cognitive service. Speech Service is a Speech cognitive service.
Option F is INCORRECT. Translator is a language cognitive service.

Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/

Domain : Implement generative AI solutions

Question 16 : You are working on creating a bot that will eventually replace users' calls to the library reception to book products available in the library. The bot is expected to process the customer's request (text and voice) via its integration with backend services. The flow is finalized as below.
Step 1: The customer uses a mobile app or website.
Step 2: User is authenticated.
Step 3: The user requests the information in natural language.
Step 4: Natural language request is processed by cognitive service.
Step 5: User reviews the response and reframes the query if required.
Step 6: Runtime telemetry is gathered to monitor bot performance.
Identify the correct service from the below for step 2 “User Authentication.”

A. Cognitive service: LUIS
B. Azure Active directory
C. Application Insights
D. Azure Bot Authenticator service

Correct Answer: B

Explanation 

Option A is INCORRECT. LUIS helps us build conversational applications that support natural language for communication with users. LUIS can extract information from conversations as per the requirement.
Option B is CORRECT. Azure Active Directory is the Azure service that provides identity and access management. Hence, it performs the user validation.
Option C is INCORRECT. Application Insights is a feature of Azure Monitor that helps monitor application performance. Among many other features, Application Insights facilitates anomaly detection.
Option D is INCORRECT. Azure Bot Authenticator service is not a valid service. We need to use Azure Active Directory for the authentication.

References: https://docs.microsoft.com/en-us/azure/cognitive-services/luis/, https://docs.microsoft.com/en-us/azure/active-directory/, https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview

Domain : Implement generative AI solutions

Question 17 : Identify the statements that are True.
Statement 1: While using Azure Monitor Log Analytics, the customer is charged for Data Ingestion.
Statement 2: While using Azure Monitor Log Analytics, the customer is charged for Data retention.
(Select 2)

A. Statement 1 –> True
B. Statement 1 –> False
C. Statement 2 –> True
D. Statement 2 –> False

Correct Answers: A and C

Explanation 

Option A is CORRECT because while using Azure Monitor Log Analytics, the customer is charged for Data Ingestion. 
Option B is INCORRECT because while using Azure Monitor Log Analytics, the customer is charged for Data Ingestion.
Option C is CORRECT because while using Azure Monitor Log Analytics, the customer is charged for Data retention.
Option D is INCORRECT because while using Azure Monitor Log Analytics, the customer is charged for Data retention.

Reference: https://azure.microsoft.com/en-us/pricing/details/monitor/

Domain : Implement knowledge mining and document intelligence solutions

Question 18 : You are enabling users to perform image search from domains and websites specified in your custom search instance.
Which Azure API will be an ideal fit for this?

A. Bing Entity Search API
B. Bing Visual Search API
C. Bing Custom Search API
D. Bing Image Search API

Correct Answer:  C

Explanation 

Option A is INCORRECT. Bing Entity Search is not the correct choice because this API helps in searching for entities and places, including restaurants, hotels, local businesses, etc.
Option B is INCORRECT. Bing Visual Search lets us upload an image or use a URL to perform a search and get information about it, but it does not restrict the search to the domains and websites specified in a custom search instance.
Option C is CORRECT. The Bing Custom Search API facilitates image search from the domains and websites specified in the custom search instance.
Option D is INCORRECT. The Bing Image Search API helps us search and find images (static and animated).

References: https://docs.microsoft.com/en-us/azure/cognitive-services/bing-entities-search/overview, https://docs.microsoft.com/en-us/azure/cognitive-services/bing-visual-search/overview, https://docs.microsoft.com/en-us/azure/cognitive-services/bing-custom-search/overview, https://docs.microsoft.com/en-us/azure/cognitive-services/bing-image-search/overview

Domain : Implement natural language processing solutions

Question 19 : You have a requirement to analyze sentiments in a set of documents. You use sentiment analysis with opinion mining. As a result, you get the document id, confidence score and document-level sentiment. Given the requirement, review the code snippet below and complete it by choosing the option flag to pass to the analyzeSentiment method.
async function sentimentAnalysisWithOpinionMining(client){
  const sentimentInput = [
    {
      text: "The food was good but the service was poor",
      id: "0",
      language: "en"
    }
  ];
  const results = await client.analyzeSentiment(sentimentInput, { ………………………………………. });

A. includeOpinionMining: true
B. includeOpinionMining: enable
C. getConfidenceScores: true
D. getConfidenceScores: enable

Correct Answer: A

Explanation 

Option A is correct because includeOpinionMining: true is the correct option flag for the requirement. Here is the completed code snippet:
async function sentimentAnalysisWithOpinionMining(client){
  const sentimentInput = [
    {
      text: "The food was good but the service was poor",
      id: "0",
      language: "en"
    }
  ];
  const results = await client.analyzeSentiment(sentimentInput, { includeOpinionMining: true });
Option B is incorrect because the includeOpinionMining: enable is not the correct option flag.
Option C is incorrect because the getConfidenceScores: true is not the correct option flag.
Option D is incorrect because the getConfidenceScores: enable is not the correct option flag.

Reference: To learn more about opinion mining in sentiment analysis, use the link given below: https://docs.microsoft.com/en-in/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-sentiment-analysis?tabs=version-3-1

Domain : Implement natural language processing solutions

Question 20 : You use the Speech service in Azure to transcribe large amounts of audio. Further, you plan to apply analytics on the transcribed text to get insights or facilitate action. Since the amount of audio is large, you use batch transcription. In this process, you perform the following operations:
Operation A (O11): Create a new transcription
Operation B (O12): Get the transcription
Operation C (O13): Update the details of the transcription
Operation D (O14): Delete the transcription
To perform these operations, you make use of the REST API calls and methods, as given in the table below. Review the operations, methods and API calls and map them in the correct order.
API Call A (A21): GET speechtotext/v3.0/transcriptions/{id}
API Call B (A22): POST speechtotext/v3.0/transcriptions
API Call C (A23): DELETE speechtotext/v3.0/transcriptions/{id}
API Call D (A24): PATCH speechtotext/v3.0/transcriptions/{id}

A. O11 -> A21; O12 -> A22; O13 -> A24; O14 -> A23
B. O11 -> A21; O12 -> A22; O13 -> A23; O14 -> A24
C. O11 -> A22; O12 -> A21; O13 -> A24; O14 -> A23
D. O11 -> A22; O12 -> A21; O13 -> A23; O14 -> A24

Correct Answer: C

Explanation 

Here is the correct mapping of the Batch Transcription Operation and the REST API Call:

Batch Transcription Operation | REST API Call
Create a new transcription | POST speechtotext/v3.0/transcriptions
Get the transcription | GET speechtotext/v3.0/transcriptions/{id}
Update the details of the transcription | PATCH speechtotext/v3.0/transcriptions/{id}
Delete the transcription | DELETE speechtotext/v3.0/transcriptions/{id}

Given the mapping table above, Option A is incorrect. Use the POST method to create a transcription.
Given the mapping table above, Option B is incorrect as well. Use the POST method to create a transcription. Use the PATCH method to update the transcription and DELETE method to delete the transcription.
Given the mapping table above, Option C is correct.
Given the mapping table above, Option D is incorrect.
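
For reference, here is a minimal C# sketch of operation O11 mapped to API call A22, creating a new transcription with a POST request; the key, region and audio URL are placeholder assumptions.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

var http = new HttpClient();
http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<speech-key>");

var body = @"{
  ""displayName"": ""My batch transcription"",
  ""locale"": ""en-US"",
  ""contentUrls"": [ ""https://example.com/audio/interview.wav"" ]
}";

var response = await http.PostAsync(
    "https://<region>.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions",
    new StringContent(body, Encoding.UTF8, "application/json"));

// The returned transcription resource is what the later GET, PATCH and DELETE calls in the
// mapping table operate on.
Console.WriteLine(await response.Content.ReadAsStringAsync());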

Reference: To learn more about batch transcription, use the link given below: https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/batch-transcription

Domain : Implement knowledge mining and document intelligence solutions

Question 21 : As you review your incident ticket volume, you realize that there is an upward trend in incidents regarding Azure Cognitive Search queries. Following best practices, you can identify and address the performance concerns. Review the answer choices below and choose the options that you would prefer to improve Cognitive Search query performance while keeping operational costs low. (select two answer choices)

A. Reduce the content to maintain smaller indexes
B. Select all properties of the fields while creating a search index
C. Provide additional storage by adding disks
D. Mapping a complex data type to a simpler type field
E. Upgrade to standard S2 storage and add additional search units

Correct Answers: A and D

Explanation 

Option A is correct because smaller indexes would bring in faster query results. As data grows, the index becomes large, thereby slowing the search queries.
Option B is incorrect because selecting fewer properties or only the properties that you need would improve query performance.
Option C is incorrect because adding additional disks would not improve query performance. It would also increase your operational cost.
Option D is correct because complex data types require additional storage and hence increase your operational cost. However, if there is an opportunity to map a complex data type to a simpler type field, you can save additional storage and resource cost while maintaining the search query performance (see the sketch below).
Option E is incorrect because upgrading to standard S2 storage and adding additional search units will increase the operational costs.
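
For reference, here is a minimal C# sketch of keeping an index lean along the lines of options A and D: enable only the field attributes you need and use a simple field where a complex type is not required. The service name, key, index and field names are illustrative assumptions.

using System;
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

var indexClient = new SearchIndexClient(new Uri("https://<search-service>.search.windows.net"),
                                        new AzureKeyCredential("<admin-key>"));

var index = new SearchIndex("products")
{
    Fields =
    {
        new SimpleField("id", SearchFieldDataType.String) { IsKey = true },
        new SearchableField("name"),   // searchable only; no extra attributes enabled
        // A flattened, simple field instead of a complex "category" object keeps storage small.
        new SimpleField("categoryName", SearchFieldDataType.String) { IsFilterable = true }
    }
};

indexClient.CreateIndex(index);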

Reference: To learn more about performance tips of cognitive search index, use the link given below: https://docs.microsoft.com/en-us/azure/search/search-performance-tips

Domain : Implement generative AI solutions

Question 22 : You have existing Azure resources such as Bot channels, Cosmos DB, Blob storage, Language Understanding (LUIS), and QnA service. You plan to publish your bot to Azure Web App by importing existing Azure resources. Review the steps below and sequence them in the correct order of execution:

Step A: Select “Manage profiles” in the “Publish target” section
Step B: Select the publish tab in the “Publish your bot” pane
Step C: In the Composer menu, select Publish
Step D: Select import existing Azure resources
Step E: Add profile by providing profile name and profile target as “Publish bot to Azure” 
Step F: Select “Publish selected bots” by choosing “bot” and “Publish target”
Step G: Select the bot that you want to publish

A. Step G -> Step B -> Step A -> Step E -> Step D -> Step C -> Step F
B. Step G -> Step B -> Step D -> Step A -> Step E -> Step C -> Step F
C. Step C -> Step B -> Step G -> Step E -> Step A -> Step D -> Step F
D. Step C -> Step B -> Step D -> Step G -> Step E -> Step A -> Step F

Correct Answer: C

Explanation 

 Here is the correct order of the execution steps:

  • In the Composer menu, select Publish
  • Select the publish tab in “Publish your bot” pane
  • Select the bot that you want to publish
  • Add profile by providing profile name and profile target as “Publish bot to Azure” 
  • Select “Manage profiles” in the “Publish target” section
  • Select import existing Azure resources
  • Select “Publish selected bots” by choosing “bot” and “Publish target”

Given the order of steps in the explanation above, Option A is incorrect.
Given the order of steps in the explanation above, Option B is incorrect.
Given the order of steps in the explanation above, Option C is correct.
Given the order of steps in the explanation above, Option D is incorrect.

Reference: To learn more about publishing your bot to Azure, use the link given below: https://docs.microsoft.com/en-us/composer/how-to-publish-bot?tabs=v2x

Domain : Implement natural language processing solutions

Question 23 : To recognize specific phrases and intent in recorded speech, you use the Speech and Language Understanding services in Azure. For this configuration, you create a SpeechConfig and an IntentRecognizer object in your Language Understanding (LUIS) application. Review the code snippet below and choose which SubscriptionKey and ServiceRegion you would use to create the SpeechConfig object. (Choose two options)
var config = SpeechConfig.FromSubscription("SubscriptionKey", "ServiceRegion");

A. Speech service key
B. Language Understanding (LUIS) primary key
C. Language Understanding (LUIS) location
D. Speech service location

Correct Answers: B and C

Explanation 

Option A is incorrect because the Speech subscription key is used to create a Speech service. However in the Language Understanding (LUIS) application, for intent recognition in a given speech, Language Understanding (LUIS) primary key is used.
Option B is correct because in the Language Understanding (LUIS) application, for intent recognition in a given speech, Language Understanding (LUIS) primary key is used. To get that you go to the Language Understanding (LUIS) prediction resource section of Azure resource under the Manage blade of the Language Understanding (LUIS) portal.
Option C is correct because you need to provide the Language Understanding (LUIS) service location or Azure region where Language Understanding (LUIS) service is provisioned. 
Option D is incorrect because the location for the Language Understanding (LUIS) service hosting region needs to be provided instead.
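
For reference, here is a minimal C# sketch of the surrounding intent-recognition code; the LUIS prediction key, region, app ID and intent names are placeholder assumptions.

using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Intent;

// The LUIS prediction key and LUIS hosting region are used here, not the Speech key/location.
var config = SpeechConfig.FromSubscription("<luis-prediction-key>", "<luis-region>");

using var recognizer = new IntentRecognizer(config);
var model = LanguageUnderstandingModel.FromAppId("<luis-app-id>");
recognizer.AddIntent(model, "BookItem", "book-item");

IntentRecognitionResult result = await recognizer.RecognizeOnceAsync();
Console.WriteLine($"Recognized: \"{result.Text}\" with intent id: {result.IntentId}");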

Reference: To learn more about SpeechConfig object creation for intent recognition, use the link given below: https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-intent-recognition?pivots=programming-language-csharp

Domain : Implement natural language processing solutions

Question 24 : In your application, you facilitate the identification of different entities in a given text. Using the Text Analytics API in Azure, you categorize these entities into predefined classes. Review the endpoints given below and choose the one that is used for Named Entity Recognition.

A. https://uksouth.cognitiveservices.azure.com/text/analytics/v3.1/entities/recognition/general
B. https://uksouth.cognitiveservices.azure.com/text/analytics/v3.1/entities/recognition/pii
C. https://uksouth.cognitiveservices.azure.com/text/analytics/v3.1/entities/recognition/pii?domain=phi
D. https://uksouth.cognitiveservices.azure.com/text/analytics/v3.1/entities/linking

Correct Answer: A

Explanation 

Option A is correct because named entity recognition uses general entities to categorize them into classes such as person, organization and product. The JSON response that you will receive will provide you with values such as the text, category, length, offset and confidence score.
Option B is incorrect because endpoint text/analytics/v3.1/entities/recognition/pii is used for personally identifiable information such as the email address and phone number.
Option C is incorrect because endpoint  text/analytics/v3.1/entities/recognition/pii?domain=phi is used to detect the health information.
Option D is incorrect because endpoint text/analytics/v3.1/entities/linking is used for entity linking. In the response you get the datasource and the url for the matches.
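
For reference, here is a minimal C# sketch of calling the option A endpoint directly over HTTP; the key and sample text are placeholder assumptions.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

var http = new HttpClient();
http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<text-analytics-key>");

var body = @"{ ""documents"": [ { ""id"": ""1"", ""language"": ""en"",
                ""text"": ""Microsoft was founded by Bill Gates and Paul Allen."" } ] }";

var response = await http.PostAsync(
    "https://uksouth.cognitiveservices.azure.com/text/analytics/v3.1/entities/recognition/general",
    new StringContent(body, Encoding.UTF8, "application/json"));

// The JSON response categorizes each entity (Person, Organization, etc.) with its text,
// category, offset, length and confidence score.
Console.WriteLine(await response.Content.ReadAsStringAsync());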

Reference: To learn more about Named Entity Recognition in Text Analytics, use the link given below: https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-entity-linking?tabs=version-3-1

Domain : Plan and manage an Azure AI solution

Question 25 : You use Computer Vision OCR containers to extract printed text from a set of PDF documents. To configure the docker runtime environment, you use the following docker run command:

docker run --rm -it -p 5000:5000 \
--memory 2g --cpus 1 \
--mount type=bind,src=/home/<user>/output,target=/output \
<registry-location>/<image-name> \
Eula=accept \
Billing=<endpoint> \
ApiKey=<api-key> \
HTTP_PROXY=<proxy-url> \
HTTP_PROXY_CREDS=<proxy-user>:<proxy-password> \
Logging:Disk:Format=json
In this command, multiple configuration settings are used. Review the answer choices below and choose settings that are optional. (choose two answer choices)

A. ApiKey
B. Logging
C. HTTP_PROXY
D. Eula
E. Billing

Correct Answers: B and C

Explanation 

Option A is incorrect because the ApiKey setting is a required setting to track the container's billing information. You can find the ApiKey in the Azure Portal, under the management blade of the Cognitive Services resource.
Option B is correct because Logging is an optional setting. It offers ASP.NET Core logging support for the containers. You can also use this setting for debugging purposes.
Option C is correct because HTTP_PROXY is also an optional setting. If you have a requirement to configure a proxy for outbound calls, use this setting in your docker run command.
Option D is incorrect because Eula is a required setting that states that you have accepted the license agreement for the container.
Option E is incorrect because Billing is a required setting as it provides the billing endpoint url for the cognitive service. You can find this value in the Azure Portal, under the overview blade of the Cognitive service resource.

Reference: To learn more about configuring the Read OCR Docker containers, use the link given below: https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/computer-vision-resource-container-config?tabs=version-3-2

Summary

So, here we have covered the top 25+ Microsoft Azure AI Solutions AI-102 practice exam questions and answers. We hope the above list of AI-102 exam questions on Microsoft Azure AI Solutions is helpful for you as you prepare. The key to success in the AI-102 exam is going through as many questions as you can. The more questions you go through, the better your chances of clearing the certification exam.

So, we wish you luck with your Designing and Implementing a Microsoft Azure AI Solution AI-102 exam!

Have any questions or concerns? Just write them down in the comment section and we will get back to you.

About Pavan Gumaste

Pavan Rao is a programmer/developer by profession and a cloud computing professional by choice, with in-depth knowledge of AWS, Azure, and Google Cloud Platform. He helps organisations figure out what to build, ensure successful delivery, and incorporate user learning to improve the strategy and product further.
