DP-420 exam questions

Top free exam questions on DP-420 Designing and Implementing Cloud-Native Applications

Are you looking for DP-420 exam questions and answers to prepare for the Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB certification exam?

Here are our newly updated 25+ free DP-420 exam questions, which closely resemble the DP-420 practice test as well as the real exam.

Who is this exam intended for?

The DP-420 Microsoft Azure Cosmos DB exam is intended for software professionals who want to demonstrate their skills and knowledge in designing and implementing solutions that use Azure Cosmos DB. Candidates taking this exam should have experience working with Azure Cosmos DB, including the Azure portal, Azure Resource Manager, the Cosmos DB SQL API, and the Cosmos DB SDKs.

Domains Covered in DP-420 exam questions

These DP-420 exam questions cover the following five domains:

Domain Weightage
Design and Implement Data Models 35-40%
Design and Implement Data Distribution 5-10%
Integrate an Azure Cosmos DB Solution 5-10%
Optimize an Azure Cosmos DB Solution 15-20%
Maintain an Azure Cosmos DB Solution 25-30%

 

The DP-420 practice exam covers a range of topics, including how to provision and configure Cosmos DB databases, how to manage and monitor them, and how to troubleshoot common issues.

What skills will you gain from the DP-420 Designing and Implementing Cloud-Native Applications certification exam?

The DP-420 exam is designed to validate the skills needed to design, build, and deploy cloud-native applications that use an Azure Cosmos DB database. By preparing for this exam, you will gain the skills needed to:

  • Create indexing policies
  • Manage and provision resources
  • Perform common operations with the SDK
  • Design and deploy a cloud-native application
  • Monitor and troubleshoot a cloud-native application
  • Apply microservices architecture
  • Practice continuous integration and delivery

Let’s start Learning!

Free DP-420 Exam Questions on Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB

Domain: Design and Implement Data Models

Subdomain: Design and implement a non-relational data model for Azure Cosmos DB Core API

Question 1. You are working on an app that will save the device metrics produced every minute by various IoT devices. You decide to collect the data for two entities: the devices and the device metrics produced by each device. You were about to create two containers for these identified entities, but your data engineer suggests placing both entities in a single container rather than in two different containers. What would you do?

  1. Create a document with the deviceid property and other device data, and add a property called 'type' with the value 'device'. Also, create another document for each metrics record collected, using the devicemetricsid property.
  2. Create a document with the deviceid property and other device data. Then embed each metrics collection into the document with the devicemetricsid property and all the metrics data.
  3. Create a document with the deviceid property and other device data, and add a property called 'type' with the value 'device'. Create another document for each metrics record using the devicemetricsid and deviceid properties, and add a property called 'type' with the value 'devicemetrics'.
  4. None of these.

Correct Answer: C

Explanation: If you create two different types of documents and include the deviceid property on both, it becomes easy to reference either entity inside the same container.
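For illustration, the two document shapes described in option C might look like this (property names come from the question; the values are hypothetical):

```json
[
  {
    "id": "d-100",
    "deviceid": "d-100",
    "type": "device",
    "model": "temp-sensor"
  },
  {
    "id": "dm-5001",
    "devicemetricsid": "dm-5001",
    "deviceid": "d-100",
    "type": "devicemetrics",
    "temperature": 21.5,
    "collectedAt": "2022-06-01T10:00:00Z"
  }
]
```

Queries can then filter on the type property to read devices and their metrics separately from the same container, while the shared deviceid links each metrics document back to its device.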

Option A is incorrect. Creating two types of documents but not including deviceid in the device metrics document would make it impossible to tell which device the collected metrics belong to.

Option B is incorrect. Embedding every metric reading collected each minute into the device document would cause the document to grow without bound and generate a high volume of requests.

Option C is correct. Creating two different types of documents that both include the deviceid property makes it easy to reference either entity inside the container.

Option D is incorrect. Option C is correct.

Reference: To know more about storing multiple entities in a single Cosmos DB collection, please visit the below-given link:

https://anthonychu.ca/post/cosmos-db-mongoose-discriminators/

 

Domain: Design and Implement Data Models

Subdomain: Design and implement a non-relational data model for Azure Cosmos DB Core API

Question 2. The below diagram illustrates the configuration settings for a container in an Azure Cosmos DB Core (SQL) API account.

Which of the following statements rightly describes the container’s configuration?

  1. All the items will be deleted after 1 hour.
  2. All the items will be deleted after 1 month.
  3. Items will expire only if they have a time to live value.
  4. Items stored in the container will always be retained, regardless of their time to live value.

Correct Answer: C

Explanation: Time to live can be set on a container or an item within the container. The following figure illustrates the properties:
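As a sketch of how the settings interact: when the container's Time to Live is set to On (no default), i.e. a DefaultTimeToLive of -1, only items that carry their own ttl property expire. A hypothetical item that would be deleted one hour after its last write:

```json
{
  "id": "reading-001",
  "deviceid": "d-100",
  "temperature": 21.5,
  "ttl": 3600
}
```

Under that container setting, items without a ttl property are retained indefinitely.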

Option A is incorrect. The deletion of the items depends upon the time to live value set on the item.

Option B is incorrect. The deletion of the items depends upon the time to live value set on the item.

Option C is correct. Items will expire only if they have a time to live value.

Option D is incorrect. Items will expire if they have a time to live value, so they are not always retained.

Reference: To know more about Time to Live in Azure Cosmos DB, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/cosmos-db/sql/time-to-live

 

Domain: Design and Implement Data Models

Subdomain: Design a data partitioning strategy for Azure Cosmos DB Core API

Question 3. In a shared throughput database, the containers share the throughput (Request Units per second) allocated to that database. With manually provisioned throughput, it is possible to have up to ………… containers with a minimum of ……………. Request Units per second on the database.

  1. 15 and 400
  2. 25 and 400
  3. 15 and 4000
  4. 25 and 4000

Correct Answer: B

Explanation: With manually provisioned throughput, there can be up to 25 (twenty-five) containers with a minimum of 400 Request Units per second on the database. On the other hand, with autoscale provisioned throughput, there can be up to twenty-five containers with an autoscale maximum of 4000 Request Units per second (scaling between 400 and 4000 Request Units per second).

Option A is incorrect. You can have up to 25 containers, not only 15.

Option B is correct. 25 containers with a minimum of 400 RU/s is the right answer.

Option C is incorrect. You can have up to 25 containers with a minimum of 400 RU/s.

Option D is incorrect. You can have up to 25 containers with a minimum of 400 RU/s, not 4000.

Reference: To know more about provisioned throughput in Azure Cosmos DB, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/cosmos-db/set-throughput

 

Domain: Design and Implement Data Models

Subdomain: Plan and implement sizing and scaling for a database created with Azure Cosmos DB

Question 4. Storage1 is an Azure Cosmos DB Core (SQL) API account that uses the provisioned throughput capacity mode.

The following table shows the databases contained by this account:

And the below table demonstrates the containers contained by the databases:

Here are two statements for the given scenario:

Statement 1: You can add a new container that uses database throughput to db2.

Statement 2: The maximum throughput that can be consumed by cn11 is 400 RU/s.

Which of the above statements are correct?

  1. Only statement 1
  2. Only statement 2
  3. Both statement 1 and statement 2
  4. None of the statements

Correct Answer: A

Explanation: For db2, the maximum throughput is 8000 request units per second, and the database contains 8 containers (cn11 to cn18), as can be seen from the above table. Therefore, the maximum throughput that can be consumed by each container is 1000 RU/s, not 400 RU/s. Moreover, because db2 uses database-level throughput, a new container that shares this throughput can be added.

Option A is correct. A new container can be added to db2 that will use the database throughput.

Option B is incorrect. The maximum throughput that can be consumed by cn11 is not 400 RU/s.

Option C is incorrect. Statement 2 is incorrect.

Option D is incorrect. Statement 1 is correct.

Reference: To know more about Azure Cosmos DB pricing, please visit the below-given link:

https://azure.microsoft.com/en-us/pricing/details/cosmos-db/

Domain: Design and Implement Data Models

Subdomain: Implement client connectivity options in the Azure Cosmos DB SDK

Question 5. While working in .NET SDK v3, you need to enable multi-region writes in your app that uses Azure Cosmos DB. WestUS2 is the region where your application would be deployed and Cosmos DB is replicated.

Which of the following attributes would you set to WestUS2 to achieve the goal?

  1. ApplicationRegion
  2. SetCurrentLocation
  3. setPreferredLocations
  4. connection_policy.PreferredLocations

Correct Answer: A

Explanation: In .NET SDK v3, if you want to enable multi-region writes in your application, set ApplicationRegion to the region where the application is being deployed and Cosmos DB is replicated.

Option A is correct. ApplicationRegion is the correct answer.

Option B is incorrect. SetCurrentLocation is used in .NET SDK v2.

Option C is incorrect. setPreferredLocations is used in the Async Java V2 SDK.

Option D is incorrect. connection_policy.PreferredLocations is used in the Python SDK.

Reference: To know more about multi-region writes in your applications, please visit the below-given link:

https://docs.microsoft.com/bs-latn-ba/azure/cosmos-db/sql/how-to-multi-master

 

Domain: Design and Implement Data Models

Subdomain: Implement client connectivity options in the Azure Cosmos DB SDK

Question 6. While chairing a team session, you are telling the team members about the Azure Cosmos DB Emulator. Which of the following statements is not true about the Azure Cosmos DB Emulator?

  1. The Azure Cosmos DB Emulator offers an emulated environment that runs on the local developer workstation.
  2. The emulator supports only a single fixed account and a well-known primary key. You can even regenerate the key while using the Azure Cosmos DB Emulator.
  3. The emulator doesn’t offer multi-region replication.
  4. The emulator doesn’t offer the various Azure Cosmos DB consistency levels offered by the cloud service.

Correct Answer: B

Explanation: The following diagram states the differences in functionality between the emulator and an Azure Cosmos account in the cloud:

Option A is incorrect. The Azure Cosmos DB Emulator does offer an emulated environment that runs on the local developer workstation.

Option B is correct. The emulator supports only a single fixed account and a well-known primary key, but you can’t regenerate the key while using the Azure Cosmos DB Emulator.

Option C is incorrect. The statement “The emulator doesn’t offer multi-region replication” is true.

Option D is incorrect.  The statement “The emulator doesn’t offer various Azure Cosmos DB consistency levels as offered by the cloud service” is also true.

Reference: To know more about Installing and using the Azure Cosmos DB Emulator for local development and testing, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/cosmos-db/local-emulator

 

Domain: Design and Implement Data Models

Subdomain: Implement data access by using the Azure Cosmos DB SQL language

Question 7. You have an indexing policy as shown in the following diagram:

You need to create a SQL query with an ORDER BY clause. Which of the following ORDER BY clauses will execute successfully in the query?

  1. ORDER BY c.age ASC, c.name ASC
  2. ORDER BY c.age DESC, c.name DESC
  3. ORDER BY c.name ASC, c.age DESC
  4. ORDER BY c.name DESC, c.age ASC
  5. ORDER BY c.name DESC, c.age DESC

Correct Answer: E

Explanation: SQL queries with an ORDER BY clause on two or more properties require a composite index. When using a composite index for such queries, keep the following considerations in mind:

  • A composite index won’t support the query if the composite index paths don’t match the sequence of the properties in the ORDER BY clause.
  • The order of the composite index paths (descending or ascending) must also match the order in the ORDER BY clause.
  • A composite index also supports an ORDER BY clause with the opposite order on all paths.
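As a sketch consistent with these rules, a composite index that supports option E (ORDER BY c.name DESC, c.age DESC), and by the opposite-order rule also ORDER BY c.name ASC, c.age ASC, could be defined in the indexing policy like this (the actual policy is the one in the question’s diagram; this one is illustrative):

```json
{
  "compositeIndexes": [
    [
      { "path": "/name", "order": "descending" },
      { "path": "/age", "order": "descending" }
    ]
  ]
}
```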

Option A is incorrect. As per the above consideration, the given statement won’t run successfully. 

Option B is incorrect. Due to the non-matching of the sequence of properties, the given statement won’t run successfully.

Option C is incorrect. The order needs to be the same. 

Option D is incorrect. The order needs to be the same.

Option E is correct. The given statement meets all the above considerations and therefore will run successfully.

Reference: To know more about index policy, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/cosmos-db/index-policy

 

Domain: Design and Implement Data Models

Subdomain: Implement server-side programming in Azure Cosmos DB Core API by using JavaScript

Question 8. Your development team has written validation logic in JavaScript to ensure that items are in the expected format before they are committed to a container.

One of your friends suggests using a post-trigger as the server-side programming construct for this task.

Will the suggested solution meet the goal?

  1. Yes
  2. No

Correct Answer: B

Explanation: A post-trigger runs its logic only after the item has already been committed to the container, which is too late for validation. Therefore, using a post-trigger won’t work. Instead, a pre-trigger should be used: a pre-trigger runs its logic before the item is committed to the container, and any validation logic can be executed at that point.
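As an illustration, a pre-trigger’s validation logic might look like the sketch below. On the server, Cosmos DB supplies getContext() with getRequest(), getBody(), and setBody(); the makeContext stub and the item property names here are hypothetical, included only so the logic can be exercised locally.

```javascript
// Hypothetical stand-in for the server-side getContext() object, so the
// trigger logic below can be run outside Cosmos DB. Inside Cosmos DB the
// runtime provides this context.
function makeContext(body) {
  return {
    getRequest() {
      return {
        getBody() { return body; },
        setBody(newBody) { body = newBody; },
      };
    },
    // Local-only helper (not part of the real server-side API).
    currentBody() { return body; },
  };
}

// Pre-trigger logic: runs BEFORE the item is committed, so it can reject
// or amend the item. Throwing aborts the write entirely.
function validateItemPreTrigger(context) {
  const request = context.getRequest();
  const item = request.getBody();
  if (!item.type) {
    throw new Error("Item must contain a 'type' property.");
  }
  if (!item.createdAt) {
    item.createdAt = new Date().toISOString();
  }
  request.setBody(item);
}

const ctx = makeContext({ id: "1", type: "device" });
validateItemPreTrigger(ctx);
console.log(ctx.currentBody().createdAt !== undefined); // true
```

Because the logic runs before the commit, an invalid item never reaches the container; a post-trigger could only observe it after the fact.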

Reference: To know more about triggers in Azure Cosmos DB, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/cosmos-db/sql/how-to-write-stored-procedures-triggers-udfs

 

Domain: Design and Implement Data Distribution

Subdomain: Design and implement a replication strategy for Azure Cosmos DB

Question 9. You need to configure the consistency level on a per-request basis. Which of the following C# classes would you use in the .NET SDK for the Azure Cosmos DB SQL API?

  1. CosmosClientOptions
  2. CosmosConfigOptions
  3. ItemRequestOptions
  4. Container

Correct Answer: C

Explanation: The ItemRequestOptions class contains various session token and consistency level configuration properties on a per-request basis.

Option A is incorrect. CosmosClientOptions class configures the overall client used in the SDK in several operations.

Option B is incorrect. CosmosConfigOptions isn’t a valid class.

Option C is correct. The ItemRequestOptions class includes several properties to configure per-request options.

Option D is incorrect. The Container class has methods for performing operations on a container but does not provide any way to configure per-request options.

Reference: To know more about how to configure consistency models, please visit the below-given link:

https://microsoftlearning.github.io/dp-420-cosmos-db-dev/instructions/21-sdk-consistency-model.html

 

Domain: Integrate an Azure Cosmos DB Solution

Subdomain: Enable Azure Cosmos DB analytical workloads

Question 10. Which of the following functions in Spark SQL separates an array’s elements into multiple rows with positions, using the column names ‘pos’ for position and ‘col’ for the elements of the array?

  1. explode()
  2. posexplode()
  3. preexplode()
  4. Separate()

Correct Answer: B

Explanation: posexplode() is a function in Spark SQL that separates the array’s elements into multiple rows with positions and utilizes the column names pos for position and col for elements of the array.
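A minimal Spark SQL sketch of the two functions side by side:

```sql
-- posexplode(): one row per element, with 'pos' and 'col' output columns
SELECT posexplode(array('a', 'b', 'c'));

-- explode(): one row per element, a 'col' column only (no position)
SELECT explode(array('a', 'b', 'c'));
```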

Option A is incorrect. The explode() function separates the array’s elements into multiple rows and uses the default column name col for the elements, without a position column.

Option B is correct. posexplode() is a function in Spark SQL that separates the array’s elements into multiple rows with positions and utilizes the column names pos for position and col for elements of the array.

Option C is incorrect. preexplode() is not a valid Spark SQL function.

Option D is incorrect. separate() is not a valid Spark SQL function.

Reference: To know more about performing complex queries with JSON data, please visit the below-given link:

https://docs.microsoft.com/en-us/learn/modules/query-azure-cosmos-db-with-apache-spark-for-azure-synapse-analytics/5-perform-complex-queries

 

Domain: Optimize an Azure Cosmos DB Solution

Subdomain: Optimize query performance in Azure Cosmos DB Core API

Question 11. An Azure Cosmos DB Core (SQL) API account has a container that stores telemetry data from IoT devices. The container uses telemetryId as the partition key and has a throughput of 1000 RU/s (request units per second). Approximately 5000 IoT devices submit data every 5 minutes, all using the same telemetryId value.

You have an app that does analytics on the data and reads telemetry data frequently for a single IoT device to have the trend analysis.

The below figure demonstrates the sample of a document in the container:

What would you do to reduce the number of RUs consumed by the analytics app?

  1. Increase the offerThroughput value for the container.
  2. Decrease the offerThroughput value for the container.
  3. Move the data to a new container that utilizes a partition key of date.
  4. Move the data to a new container that contains a partition key of deviceId.

Correct Answer: D

Explanation: The partition key determines how data is routed in the different partitions by Cosmos DB and should make sense in the context of the specified scenario. For IoT applications, DeviceId is the typical “natural” partition key.
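For reference, the new container’s partition key definition (shown here as a sketch in the JSON resource shape) would point at deviceId:

```json
{
  "partitionKey": {
    "paths": ["/deviceId"],
    "kind": "Hash"
  }
}
```

With this key, all telemetry for one device lands in the same logical partition, so the analytics app’s per-device reads stay within a single partition instead of fanning out.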

Option A is incorrect. Increasing the offerThroughput value is not the right answer.

Option B is incorrect. Decreasing the offerThroughput value won’t meet the goal.

Option C is incorrect. A partition key of date doesn’t match the app’s per-device read pattern; deviceId is the typical “natural” partition key for IoT applications.

Option D is correct. Moving the data to a new container that contains a partition key of deviceId will result in reducing the number of RUs consumed by the analytics app.

Reference: To know more about Azure Cosmos DB in IoT workloads, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/iot-using-cosmos-db

 

Domain: Optimize an Azure Cosmos DB Solution

Subdomain: Optimize query performance in Azure Cosmos DB Core API

Question 12. A team member who recently joined the team wants to manually adjust the time for which items will remain in the cache. He asks you which property of the ItemRequestOptions class he should configure. What would you suggest?

  1. ConsistencyLevel
  2. SessionToken
  3. MaxIntegratedCacheStaleness
  4. None of these

Correct Answer: C

Explanation: MaxIntegratedCacheStaleness indicates the maximum acceptable staleness for cached point reads and queries, irrespective of the selected consistency. This property is set to configure a TimeSpan that limits how long items will remain in the cache.

Option A is incorrect. Setting the consistency level correctly is a part of enabling the integrated cache, but it doesn’t affect the time items remain in the cache.

Option B is incorrect. Getting the session token won’t affect the time items remain in the cache.

Option C is correct. MaxIntegratedCacheStaleness property is set to configure a TimeSpan that will be used to limit how long items will remain in the cache.

Option D is incorrect. MaxIntegratedCacheStaleness property is the right property to set for the said purpose.

Reference: To know about MaxIntegratedCacheStaleness in detail, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/cosmos-db/integrated-cache

 

Domain: Optimize an Azure Cosmos DB Solution

Subdomain: Design and implement change feeds for an Azure Cosmos DB Core API

Question 13. You have an Azure Cosmos DB Core (SQL) API account with three containers, as shown in the below table.

Fn1, Fn2, and Fn3 are three Azure functions that read the change feed from cn1, cn2, and cn3 respectively.

Now, you perform the below three actions:

  • Delete item item1 from container cn1.
  • Update item item2 in container cn2.
  • For item item3 in container cn3, set the item’s TTL (time to live) to 3,600 seconds.

Consider three statements for the given scenario:

Statement 1: Fn1 will receive an item1 from the change feed.

Statement 2: Fn2 can check the _etag of item2 to see whether the change is an insert or an update.

Statement 3: Fn3 will receive item3 from the change feed.

Which of the above statements are true?

  1. Only statement 1
  2. Only statement 2
  3. Only statement 3
  4. Only statement 1 and statement 2
  5. Only statement 2 and statement 3
  6. All of the statements

Correct Answer: C

Explanation: The change feed doesn’t capture deletes. If an item is deleted from the container, it is also removed from the change feed. The easiest way to handle this is to add a soft-delete marker to items that are being deleted: instead of deleting, add a property such as “deleted” and set its value to true. This document update will show up in the change feed, and a TTL can be set on the item so that it is automatically deleted later.
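For illustration, a soft-deleted item following this pattern might look like the sketch below (the property names besides ttl are hypothetical). The change feed sees the update, and the TTL removes the document an hour later:

```json
{
  "id": "item1",
  "deviceid": "d-100",
  "deleted": true,
  "ttl": 3600
}
```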

Option A is incorrect. Statement 1 is incorrect, as the change feed doesn’t capture deletes.

Option B is incorrect. Statement 2 is also incorrect, as the _etag format is internal and you shouldn’t depend on it, since it can change at any time.

Option C is correct. Statement 3 is the only correct statement, as the change feed listens to an Azure Cosmos container for any changes or modifications, and updating an item’s TTL is an update.

Option D is incorrect. Both statements 1 and 2 are incorrect.

Option E is incorrect. Only statement 3 is correct.

Option F is incorrect. Only statement 3 is correct.

Reference: To know more about change feed in Azure Cosmos DB, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/cosmos-db/change-feed

https://docs.microsoft.com/en-us/azure/cosmos-db/sql/change-feed-design-patterns

 

Domain: Optimize an Azure Cosmos DB Solution

Subdomain: Design and implement change feeds for an Azure Cosmos DB Core API

Question 14. Which of the following methods of the ChangeFeedProcessor class is invoked to start consuming changes from the change feed?

  1. StartAsync
  2. GetChangeFeedProcessorBuilder<>
  3. Build
  4. None of these

Correct Answer: A

Explanation: StartAsync is a method in the ChangeFeedProcessor class and is invoked to start consuming changes from the change feed.

Option A is correct. StartAsync is a method in the ChangeFeedProcessor class and is invoked to start consuming changes from the change feed.

Option B is incorrect. GetChangeFeedProcessorBuilder<> is a method of the Container class that creates the builder used to eventually build a change feed processor.

Option C is incorrect. The Build method is invoked at the end of creating a change feed processor (or estimator) and isn’t a method of the ChangeFeedProcessor class.

Option D is incorrect. StartAsync is the correct answer.

Reference: To know more about the change feed processor in Azure Cosmos DB, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/cosmos-db/sql/change-feed-processor

 

Domain: Optimize an Azure Cosmos DB Solution

Subdomain: Define and implement an indexing strategy for an Azure Cosmos DB Core API

Question 15. While defining a custom indexing policy, which of the following path expressions can be used to define an included path that includes all possible properties from the root of any JSON document?

  1. /[]
  2. /?
  3. /*
  4. /()

Correct Answer: C

Explanation: A property path can be defined with the following additions:

  • A path that leads to a scalar value (number or string) ends with /?
  • Elements from an array are addressed together through the /[] notation (instead of /0, /1, etc.)
  • The wildcard /* can be used to match any elements below the referenced node.
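For example, a custom indexing policy combining these notations might look like the following sketch (the excluded /metadata path is hypothetical):

```json
{
  "indexingMode": "consistent",
  "includedPaths": [
    { "path": "/*" }
  ],
  "excludedPaths": [
    { "path": "/metadata/*" }
  ]
}
```

Here /* on the included path indexes every property from the root, while the excluded path carves out one subtree.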

Option A is incorrect. The array operator is exclusively used to address multiple elements together.

Option B is incorrect. The scalar operator is used strictly for the number or string values at the current node.

Option C is correct. The wildcard operator is used to match any elements below the referenced node.

Option D is incorrect. /() is not a valid path expression.

Reference: To know more about property paths, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/cosmos-db/index-policy

 

Domain: Maintain an Azure Cosmos DB Solution

Subdomain: Monitor and troubleshoot an Azure Cosmos DB solution

Question 16. In the Azure portal, which of the following tabs inside the Insights pane shows the percentage (%) of successful requests out of the total requests per hour?

  1. Storage
  2. Requests
  3. System
  4. Availability

Correct Answer: D

Explanation: The Availability tab shows the percentage of successful requests out of the total requests per hour, where the success rate is defined by the Azure Cosmos DB SLAs.

Option A is incorrect. The storage tab demonstrates the size of data and index usage over the specific time period.

Option B is incorrect. The requests tab demonstrates the total number of requests processed by operation type, by status code, and the count of failed requests.

Option C is incorrect. The System tab shows the number of metadata requests served by the primary partition.

Option D is correct. The availability tab demonstrates the % of successful requests out of the total requests per hour.

Reference: To know more about how to monitor and debug with insights in Azure Cosmos DB, please visit the below-given link:

https://docs.microsoft.com/hi-in/azure/cosmos-db/use-metrics

 

Domain: Maintain an Azure Cosmos DB Solution

Subdomain: Implement backup and restore for an Azure Cosmos DB solution

Question 17. There is a database in an Azure Cosmos DB Core (SQL) API account which is backed up every two hours.

You have been asked to implement a solution that will support point-in-time restore. Which of the following would you do first?

  1. Configuring the backup and restore settings for the account
  2. Enabling the continuous backup for the account
  3. Configuring the Point In Time Restore settings for the account
  4. Creating a new account having a periodic backup policy

Correct Answer: B

Explanation: While creating a new Azure Cosmos DB account, go to the Backup policy tab and select continuous mode to enable the point-in-time restore feature for the new account. With this restore functionality, data is restored to a new account; currently, restoring data into an existing account is not supported.

Option A is incorrect. First, you need to enable continuous backup for the account.

Option B is correct. Enabling the continuous backup for the account is the first thing you need to perform to meet the goal. 

Option C is incorrect. To enable the Point In Time Restore settings for the account, you need to first go to the Backup policy tab and select continuous mode. 

Option D is incorrect. You need to create a new account with the continuous backup policy.

Reference: To know more about the point in time restore, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/cosmos-db/provision-account-continuous-backup

 

Domain: Maintain an Azure Cosmos DB Solution

Subdomain: Implement security for an Azure Cosmos DB solution

Question 18. Your Azure Cosmos DB Core (SQL) API account contains a database. You need to create an Azure function that accesses the database to retrieve records based on a variable known as the account number.

You need to ensure that the provided solution protects against SQL injection attacks.

How would you write the command statement in the Azure function?

  1. cmd = "SELECT * FROM Employees e where e.accountnumber = 'accountnumber'"
  2. cmd = "SELECT * FROM Employees e where e.accountnumber = @accountnumber"
  3. cmd = "SELECT * FROM Employees e where e.accountnumber = LIKE @accountnumber"
  4. cmd = "SELECT * FROM Employees e where e.accountnumber = '" + accountnumber + "'"

Correct Answer: B

Explanation: Azure Cosmos DB supports writing parameterized queries using the @ notation. Parameterized SQL provides robust handling and escaping of user input and prevents accidental exposure of data through SQL injection.
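A sketch of what the parameterized form looks like: the Cosmos DB SDKs accept a query spec whose parameters list is bound to @ placeholders (the Employees names come from the question; the helper function is hypothetical).

```javascript
// Build a parameterized Cosmos DB query spec. The user-supplied value is
// bound to the @accountnumber placeholder rather than concatenated into
// the query text, so it cannot alter the query's structure.
function buildAccountQuery(accountNumber) {
  return {
    query: "SELECT * FROM Employees e WHERE e.accountnumber = @accountnumber",
    parameters: [{ name: "@accountnumber", value: accountNumber }],
  };
}

// Even a hostile input stays an inert parameter value:
const spec = buildAccountQuery("123' OR '1'='1");
console.log(spec.query.includes("OR '1'='1")); // false
```

Compare this with option D, where string concatenation places the raw input directly inside the query text.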

Option A is incorrect. The given query doesn’t protect from SQL injection attacks.

Option B is correct. Parameterized SQL protects against SQL injection attacks.

Option C is incorrect. The given query won’t protect from SQL injection attacks.

Option D is incorrect. Parameterized SQL queries with @ notation protect against SQL injection.

Reference: To know more about parameterized queries in Azure Cosmos DB, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/cosmos-db/sql/sql-query-parameterized-queries

 

Domain: Maintain an Azure Cosmos DB Solution

Subdomain: Implement data movement for an Azure Cosmos DB solution

Question 19. The Azure Cosmos DB (SQL API) connector supports various authentication types like Key authentication, Service principal authentication, System-assigned managed identity authentication and User-assigned managed identity authentication.

Consider the following statements regarding these types:

Statement 1: The service principal authentication is currently supported in the data flow.

Statement 2: The system-assigned managed identity authentication is currently supported in the data flow.

Statement 3: The user-assigned managed identity authentication isn’t currently supported in the data flow.

Which of the above statements are true?

  1. Statement 1 only
  2. Statement 2 only
  3. Statement 3 only
  4. Statement 1 and statement 2 only
  5. Statement 2 and statement 3 only
  6. All the three statements

Correct Answer: C

Explanation: Currently, none of the three authentication types, i.e., service principal authentication, system-assigned managed identity authentication, and user-assigned managed identity authentication, is supported in the data flow.

Option A is incorrect. Statement 1 is incorrect.

Option B is incorrect. Statement 2 is not true.

Option C is correct. Out of the given statements, only statement 3 is true.

Option D is incorrect. Both statements 1 and 2 are not true.

Option E is incorrect. Only statement 3 is correct. Statement 2 is not true.

Option F is incorrect. Only statement 3 is true.

Reference: To know more about Copy and transforming data in Azure Cosmos DB, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-cosmos-db

 

Domain: Maintain an Azure Cosmos DB Solution

Subdomain: Implement a DevOps process for an Azure Cosmos DB solution

Question 20. You want to disable all indexing for a container in Azure Cosmos DB. Which of the following properties of the indexing policy would help you meet the goal?

  1. excludedPaths
  2. includedPath
  3. automatic 
  4. indexingMode

Correct Answer: D

Explanation: None and consistent are two indexing modes supported by Azure Cosmos DB. If you set the indexingMode property to none, it disables all indexing.
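A minimal sketch of an indexing policy that disables all indexing for the container:

```json
{
  "indexingMode": "none"
}
```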

Option A is incorrect. The excludedPaths property specifies the paths to exclude from the index; it does not disable indexing altogether.

Option B is incorrect. The includedPaths property specifies the paths to include in the index; it does not disable indexing altogether.

Option C is incorrect. Setting the automatic property to false only stops items from being indexed automatically; it does not disable all indexing for the container.

Option D is correct. Setting the indexing mode property to none disables all indexing.
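As a minimal sketch (property names as documented in the Core (SQL) API indexing-policy schema), disabling all indexing comes down to a one-line policy. The Python dicts below simply mirror the JSON you would supply when creating the container; they are not tied to any SDK call:

```python
# Sketch only: these dicts mirror the JSON indexing policies described
# in the Cosmos DB docs; no SDK is involved here.
import json

# Setting indexingMode to "none" disables all indexing for the container.
no_index_policy = {"indexingMode": "none"}

# For comparison, the default policy indexes every path consistently.
default_policy = {
    "indexingMode": "consistent",
    "automatic": True,
    "includedPaths": [{"path": "/*"}],
    "excludedPaths": [{"path": "/\"_etag\"/?"}],
}

print(json.dumps(no_index_policy))  # {"indexingMode": "none"}
```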

Reference: To know more about the various indexing properties in Azure Cosmos DB, please visit the below-given link:

https://azurecosmosdb.github.io/labs/dotnet/labs/04-indexing_in_cosmosdb.html

 

Domain: Design and Implement Data Models

Subdomain: Design and implement a non-relational data model for Azure Cosmos DB Core API

Question 21. There is a database in an Azure Cosmos DB Core (SQL) API account. You have been asked to create a container that will store employee data for 2500 small businesses. Every business is supposed to have up to 30 employees and every employee will have an email_Address value.

You will have to ensure the uniqueness of the email_Address value for every employee within the same company.

Which of the following fields would you set as the partition key and the unique key respectively?

  1. Company_ID and Email_Address
  2. Email_Address and Company_ID
  3. Employee_ID and Email_Address
  4. Email_Address and Employee_ID

Correct Answer: A

Explanation: Once a container is created with a unique key policy, the constraint prevents any insert or update that would create a duplicate unique key value within a logical partition. Combining the partition key with the unique key therefore guarantees that an item is unique within the scope of the container.

Option A is correct. For an Azure Cosmos container with Company_ID as the partition key and Email_address as the unique key constraint, when the email address of a user is configured using a unique key, every item will have a unique email_address within the specific Company_ID. You can’t create two items with the same email addresses using the same Company_ID.

Option B is incorrect. Company_ID must be the partition key whereas Email_Address must be the unique key.

Option C is incorrect. With Employee_ID as the partition key, every employee would sit in their own logical partition, so the unique key could not enforce email uniqueness within a company.

Option D is incorrect. Email_Address cannot be the partition key for the same reason: uniqueness must be enforced per company, so Company_ID must be the partition key.
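To make the scoping concrete, here is a toy Python simulation (not the Cosmos DB SDK; the field names are taken from the question) of a unique key enforced per logical partition:

```python
# Toy simulation of a unique key scoped to a logical partition.
# This is NOT the Cosmos DB SDK; it only illustrates the semantics:
# the unique key (Email_Address) must be unique within each
# partition key value (Company_ID), not across the whole container.

class Container:
    def __init__(self):
        # maps partition key value -> set of unique key values seen
        self._partitions = {}

    def insert(self, item):
        seen = self._partitions.setdefault(item["Company_ID"], set())
        if item["Email_Address"] in seen:
            raise ValueError("unique key violation within logical partition")
        seen.add(item["Email_Address"])

c = Container()
c.insert({"Company_ID": "A", "Email_Address": "pat@example.com"})
c.insert({"Company_ID": "B", "Email_Address": "pat@example.com"})  # OK: different partition
try:
    c.insert({"Company_ID": "A", "Email_Address": "pat@example.com"})
except ValueError as e:
    print(e)  # duplicate within the same Company_ID is rejected
```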

Reference: To know more about unique keys and partition keys, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/cosmos-db/unique-keys

 

Domain: Design and Implement Data Models

Subdomain: Design and implement a non-relational data model for Azure Cosmos DB Core API

Question 22. Your development team already has an existing Azure Cosmos DB account, database, and container. One of the senior executives asks you to configure Azure Cosmos DB for another team with unique and different needs for regional replication and default consistency levels.

Which of the following new resources would you create?

  1. Database
  2. Container
  3. Account
  4. There is no need to create a new resource

Correct Answer: C

Explanation: The replication settings and default consistency level configured on an Azure Cosmos DB account apply to all databases and containers under that account. As per the scenario, the other team has unique needs for regional replication and default consistency levels. To support different behavior for replication and default consistency levels, you need to create a new account.

Option A is incorrect. If you create a new database, the global replication settings and default consistency level will remain the same between both teams.

Option B is incorrect. If you create a new container, the global replication settings and default consistency level will remain the same between both teams.

Option C is correct. To support different behavior for replication and default consistency levels, you must create a new account.

Option D is incorrect. You will have to create a new account.

Reference:  To know more about Consistency levels in Azure Cosmos DB, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels

 

Domain: Design and Implement Data Models

Subdomain: Design a data partitioning strategy for Azure Cosmos DB Core API

Question 23. You need to design an Azure Cosmos DB SQL API solution for storing the data from various IoT (Internet of Things) devices. Writes from these devices occur every second.

Below is a data sample.

You are required to choose a partition key which will meet all the below-mentioned writes requirements:

  • Avoids hot partitions
  • Avoids capacity limits
  • Decreases the partition skew

What would you prefer?

  1. Creating a new synthetic key containing deviceId and a random number
  2. Creating a new synthetic key containing deviceId and a devicemanufacturer
  3. Creating a new synthetic key containing deviceId and sensor1value
  4. Use timestamp as the partition key

Correct Answer: A

Explanation: Appending a random number to the partition key value distributes the workload more evenly across partitions. When items are distributed in this way, write operations can be performed in parallel across partitions.

Option A is correct. Creating a new synthetic key containing deviceId and a random number is the right answer.

Option B is incorrect. All the devices might have the same manufacturer. Therefore, creating a new synthetic key that contains deviceId and a devicemanufacturer is not the right answer.

Option C is incorrect. Sensor1Value has only two values, so partitioning on it would concentrate writes on just two logical partitions.

Option D is incorrect. You should not partition the data on timestamp, as it will create hot partitions. If the data is partitioned on time, then for a specific minute all write calls might hit only one partition. Retrieving the data for a client would also result in a fan-out query, as the data might be spread across all partitions.
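A synthetic key of this kind is easy to sketch in Python (NUM_BUCKETS and the key format are illustrative assumptions, not values from the exam scenario):

```python
# Sketch of a synthetic partition key: deviceId plus a random suffix.
# Spreading each device's writes over NUM_BUCKETS "buckets" avoids a
# single hot logical partition; reads for one device then fan out
# across the buckets.
import random

NUM_BUCKETS = 10  # assumption: tune to your write rate and partition size limits

def synthetic_partition_key(device_id: str) -> str:
    suffix = random.randint(0, NUM_BUCKETS - 1)
    return f"{device_id}-{suffix}"

key = synthetic_partition_key("device42")
print(key)  # e.g. "device42-7"
```

The trade-off is on the read side: fetching all items for one device now requires querying each of the NUM_BUCKETS partition key values.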

Reference: To know more about synthetic partition keys, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/cosmos-db/sql/synthetic-partition-keys

 

Domain: Design and Implement Data Models

Subdomain: Plan and implement sizing and scaling for a database created with Azure Cosmos DB

Question 24. The web development team of the company successfully estimates the throughput needs of your app within a 5% margin of error, with no significant variance over time. While running the app in production, the team expects the workload to be exceptionally stable.

With both these inputs, which of the following throughput options would you consider?

  1. Serverless
  2. Standard
  3. Autoscale
  4. Hybrid

Correct Answer: B

Explanation: Standard throughput is well suited to workloads with steady traffic. It requires a static number of RUs to be assigned ahead of time, whereas autoscale throughput is a better fit for unpredictable traffic.

Option A is incorrect. Serverless is a better option for workloads with widely varying traffic and low average-to-peak traffic ratios.

Option B is correct. Standard throughput suits best for workloads with steady traffic as expected in the given scenario.

Option C is incorrect. Autoscale throughput suits best for unpredictable traffic.

Option D is incorrect. Hybrid is not a valid throughput option.
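The trade-off can be sketched numerically. Assuming autoscale is billed at 1.5 times the standard rate on the highest RU/s reached in each hour (the pricing model described in the Cosmos DB documentation at the time of writing), standard throughput wins whenever utilization stays near the provisioned level:

```python
# Back-of-the-envelope billing comparison, standard vs autoscale.
# Assumption: autoscale is billed at 1.5x the standard RU/s rate,
# on the highest RU/s the system scaled to in each hour.

RATE = 1.0  # relative cost per RU/s per hour for standard throughput

def standard_cost(provisioned_rus: int, hours: int) -> float:
    return provisioned_rus * RATE * hours

def autoscale_cost(hourly_peaks):
    return sum(peak * RATE * 1.5 for peak in hourly_peaks)

# A stable workload: traffic sits at the provisioned level every hour.
stable_peaks = [1000] * 24
print(standard_cost(1000, 24) < autoscale_cost(stable_peaks))  # True
```

With spiky traffic the comparison flips: if most hours peak far below the maximum, autoscale's per-hour billing can undercut a statically provisioned maximum.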

Reference: To know more about standard throughput, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-choose-offer

 

Domain: Design and Implement Data Models

Subdomain: Implement client connectivity options in the Azure Cosmos DB SDK

Question 25. Gateway mode using the dedicated gateway is one of the ways to connect to an Azure Cosmos DB account. Which statement is not true regarding this connection mode?

  1. A dedicated gateway cluster can’t be provisioned in Core (SQL) API accounts.
  2. A dedicated gateway cluster can have up to 5 nodes.
  3. You can’t change the size of the dedicated gateway nodes once it is created.
  4. Dedicated gateway nodes are independent of each other.

Correct Answer: A

Explanation: You can provision a dedicated gateway cluster in Core (SQL) API accounts. A dedicated gateway cluster can have up to five nodes, and nodes can be added or removed at any time. All dedicated gateway nodes in your account share the same connection string.

Option A is correct. The given statement is not true. A dedicated gateway cluster can be provisioned in Core (SQL) API accounts.

Option B is incorrect. The given statement is true. A dedicated gateway cluster can have up to 5 nodes.

Option C is incorrect. The given statement is true. It is not possible to change the size of the dedicated gateway nodes once they are created.

Option D is incorrect. The given statement is true. Dedicated gateway nodes are independent of each other.

Reference: To know more about Azure Cosmos DB dedicated gateway, please visit the below-given link:

https://docs.microsoft.com/en-us/azure/cosmos-db/dedicated-gateway#connect-to-azure-cosmos-db-using-direct-mode

Summary

We hope the above list of DP-420 exam questions is helpful for you. DP-420 is an advanced-level certification exam by Microsoft Azure that any working professional interested in upskilling their cloud-native knowledge with Azure Cosmos DB can attempt.

We have made sure to cover all the objectives of the DP-420 exam so that you can pass it with ease on your first attempt. Keep practicing until you are confident enough to take the real exam. You can also try Azure hands-on labs and cloud sandboxes, where you can gain real-world, hands-on experience.

About Dharmendra Digari

Dharmalingam carries years of experience as a product manager. He pursued his MBA, which honed his skills of seeing products differently than others perceive. He specialises in products from the information technology and services domain, with a proven history of expertise. His skills include AWS, Google Cloud Platform, Customer Relationship Management, IT Business Analysis and Customer Service Operations. He has specifically helped many companies in the e-commerce domain establish themselves with refined and well-developed products, carving a niche for themselves.
