Free AWS Solutions Architect Professional Exam Questions (Updated 2023)

So, how's your preparation going for the AWS Solutions Architect Professional Exam (AWS CSAP)? To help you with your preparation, we bring you FREE AWS Solutions Architect Professional Exam Questions so you can get familiar with the exam pattern and get ready to pass the exam.

Note that in our series of free exam questions, we’ve previously covered Free AWS Solutions Architect Associate Exam Questions and Free AWS Developer Associate Exam Questions

The AWS Solutions Architect Professional exam has recently been updated to the February 2019 version, so we recommend going through the AWS Solutions Architect Professional exam preparation guide to stay on the right track for your exam preparation.

The AWS Solutions Architect Professional exam is intended for those performing the role of a Solutions Architect at the professional level. The AWS CSAP exam recognizes and validates the advanced technical knowledge and expertise of candidates in designing distributed systems and applications on the AWS platform.

The AWS CSAP exam validates the candidate's knowledge and skills in:

  • Designing and deploying scalable, highly available, reliable, and robust applications on the AWS platform
  • Selecting suitable services for designing and deploying applications as per requirements
  • Migrating complex, multi-tier applications to the AWS platform
  • Implementing solutions for cost control

The AWS Certified Solutions Architect Professional certification is thus a credential that demonstrates your skills in designing and deploying AWS systems and applications. Let's move on to the AWS Solutions Architect Professional exam questions that will help you achieve your goal of becoming an AWS Certified Solutions Architect Professional.

Practice with Free AWS Solutions Architect Professional Exam Questions

While preparing for the AWS CSAP exam, it is recommended to go through various resources, including AWS whitepapers, documentation, books, and online training. But nothing beats practicing with questions that are in the same format as those in the real exam. That's why we've prepared this blog, where you will get 10 free AWS Solutions Architect Professional Exam Questions that will help you understand the pattern of the AWS CSAP exam.

These practice questions have been prepared by our team of certified professionals and subject matter experts. These free AWS Solutions Architect Professional Exam Questions include a detailed explanation for the correct as well as the incorrect options, so you will understand why a particular option is right or wrong. Just go through these AWS CSAP exam questions and get ready for the real exam.

Q1 : Two departments A and B have been added into a consolidated billing organization. Department A has 5 reserved RDS instances with DB Engine as MySQL. During a particular hour, department A used three DB Instances and department B used two RDS instances, for a total of 5 DB Instances on the consolidated bill. How should the RDS instances in department B be configured so that all five instances are charged as Reserved DB Instances?

A. Department B should launch DB instances in the same availability zone as a Reserved Instance in department A.

B. The DB engine in Department B should be MySQL.

C. The DB Instance Class should be the same in both departments such as m1.large.

D. The deployment type such as Multi-AZ should be the same in both department A and department B.

E. All of the above are needed.

Correct Answer: E

Explanation:

In order to receive the cost-benefit from Reserved DB Instances, all the attributes of DB Instances (DB Instance class, DB Engine, License Model, and Deployment type) in another account have to match with the attributes of the Reserved DB Instances.

Options A to D are incorrect on their own: refer to the reason in Option E.

Option E is CORRECT: Because all of the other options are needed. The reference is in https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidatedbilling-other.html.

Q2: As an AWS specialist, you are in charge of configuring consolidated billing in a multinational IT company. In the linked accounts, users have set up AWS resources using a tag called Department, which is used to differentiate resources. There are some other user-created tags such as Phase, CICD, Trial, etc. In the cost allocation report, you want to filter using only the Department tag, with the other tags excluded from the report. How should you implement this so that the cost report is properly set up?

A. In the Cost Allocation Tags console of the master account, select the Department tag in the User-Defined Cost Allocation Tags area and activate it. The tag starts appearing on the cost allocation report after it is applied but does not appear on earlier reports.

B. In the Cost Explorer console of the master account, deactivate all the other tags except the Department tag in the User-Defined Cost Allocation Tags area. By default, all user-defined tags are activated.

C. In the Cost Explorer console of the master account, select the Department tag in the User-Defined Cost Allocation Tags area and activate it. Make sure that the other tags are inactive at the same time.

D. In the Cost Allocation Tags console of the master account and the linked accounts, select the Department tag in the User-Defined Cost Allocation Tags area and activate it. The tag starts appearing on the cost allocation report after it is applied and also appears on earlier reports after 1 hour.

Correct Answer: A

Explanation:

User-Defined Cost Allocation Tags can be selected and activated in the Cost Allocation Tags console:


Option A is CORRECT: Because using this method, only the user-defined tag Department will appear in the cost allocation report.

Option B is incorrect: Because it should be the Cost Allocation Tags console rather than the Cost Explorer console. Moreover, by default, all user-defined tags are deactivated.

Option C is incorrect: Similar to Option B.

Option D is incorrect: Because only the master account can activate or deactivate the user-defined tags. Besides, the tag does not appear on earlier reports before it is activated.

Preparing for an AWS Solution Architect Interview? Check out these top AWS Solutions Architect Interview Questions and get yourself ready to crack the interview. If you are preparing to become an AWS Certified Solution Architect Professional, enroll in the AWS Solution Architect Professional training.

Q3: You are an AWS solutions architect in charge of maintaining an RDS on VMware database that is deployed on-premises. You have created a read replica in the ap-south-1 region to share some of the read traffic. The system ran smoothly for a while, and then the company decided to migrate all of its products to AWS, including the on-premises RDS instance. In addition, the instance needs another read replica in the ap-southeast-1 region. What actions should you take to fulfill this requirement?

A. Use Data Migration Service to migrate the on-premises database to an RDS instance in AWS. Create a read replica in the ap-southeast-1 region afterwards.
B. In the RDS console, click “migrating the instance” to create a new RDS instance. Then create a new read replica in the ap-southeast-1 region.
C. Create another read replica in the ap-southeast-1 region to share the read traffic for the RDS instance on VMware. Promote the RDS read replica in ap-south-1 to be the new RDS instance so that the original on-premises database is migrated to AWS with a replica in ap-southeast-1.
D. Promote the RDS read replica in ap-south-1 to be the new RDS instance. Create another read replica in ap-southeast-1 for this new instance.

Correct Answer: D

Explanation:

Amazon RDS on VMware database instances can be easily migrated to Amazon RDS database instances in AWS with no impact on uptime, giving you the ability to rapidly deploy databases in all AWS regions without interrupting your customer experience.


Option A is incorrect: Because Data Migration Service is not needed. You just need to promote the read-replica to be the new RDS instance.

Option B is incorrect: Same reason as Option A. Also “migrating the instance” is incorrect.

Option C is incorrect: Because the read replica in ap-southeast-1 is still syncing with the original on-premise RDS instance. A new read replica should be created from the instance in ap-south-1.

Option D is CORRECT: Because the database can be easily migrated by promoting the read replica in ap-south-1.

Q4: There are two departments in a company. Each department owns several EC2 instances. Department A has a requirement to back up EBS volumes every 12 hours, and the administrator has set up a data lifecycle policy in Amazon DLM for its instances. Department B requires a similar data lifecycle policy for its instances as well. However, Department B prefers the schedule to run every 24 hours. The administrator has noticed that 2 EBS volumes are owned by both departments at the same time. How can the administrator set up the data lifecycle policy for Department B?

A. Add a tag for the EBS volumes that Department B owns. Set up a data lifecycle policy based on the tag. For the EBS volumes owned by both departments, snapshots will be taken every 12 hours and every 24 hours.

B. Add a tag for the EBS volumes that Department B owns. Set up a data lifecycle policy based on the tag. For the EBS volumes owned by both departments, snapshots will not be taken as there is a schedule conflict between the two policies. However, other EBS volumes are not affected.

C. Add a tag for the EBS volumes that Department B owns. Set up a data lifecycle policy based on the tag. For the EBS volumes owned by both departments, snapshots will be taken every 12 hours as the 12-hour schedule takes priority.

D. Add a tag for the EBS volumes that Department B owns, except the EBS volumes owned by both departments. Set up a data lifecycle policy based on this tag. For the EBS volumes owned by both departments, snapshots are taken every 12 hours due to the policy of Department A.

Correct Answer: A

Multiple policies can be created to take snapshots of an EBS volume, as long as each policy targets a unique tag on the volume. In this case, the EBS volumes owned by both departments should have two tags, where tag A is the target for policy A to create a snapshot every 12 hours for Department A, and tag B is the target for policy B to create a snapshot every 24 hours for Department B. Amazon DLM creates snapshots according to the schedules of both policies.

For details, refer to https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html.

Explanation:

Option A is CORRECT: Because when an EBS volume has two tags, multiple policies can run at the same time.

Option B is incorrect: Because there is no schedule conflict for this scenario.

Option C is incorrect: Because the 12-hour schedule does not take priority over the 24-hour schedule, and both schedules can run in parallel.

Option D is incorrect: Because the EBS volumes owned by both departments can be given another tag and included in the policy for Department B.
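
For illustration, here is a hedged sketch of how such a policy could be created with the AWS CLI for Department B; the role ARN, tag values, and schedule name are placeholder assumptions, not values from the question:

aws dlm create-lifecycle-policy \
    --description "Department B daily EBS snapshots" \
    --state ENABLED \
    --execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
    --policy-details '{
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Department", "Value": "B"}],
        "Schedules": [{
            "Name": "Every24Hours",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 5}
        }]
    }'

A volume owned by both departments would simply carry both departments' tags, so each policy targets it independently.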

Preparing for AWS Solutions Architect Associate exam? Go through these Free AWS Certified Solutions Architect Exam Questions and get ready for the real exam.

Q5: You work at an AWS consulting company. A customer plans to migrate all of its products to AWS, and you are required to provide a detailed plan. The company has good experience with Chef and prefers to continue using it. They want their EC2 instances to use a blue/green deployment method. Moreover, it would be best if their infrastructure setup, such as the network layer, could be easily re-created using scripts. Automatic scalability is also required for EC2. Which of the options below should you choose for the migration plan? (Choose 3)

A. As Blue/Green deployment is not supported in OpsWorks, use Elastic Beanstalk Swap Url feature to deploy the application. Swap CNAMEs of the two environments to redirect traffic to the new version instantly.

B. Use Chef/Recipes in OpsWorks to add/deploy/edit the app in EC2 instances. The Blue/Green deployment in OpsWorks would require the Route 53 weighted routing feature.

C. In OpsWorks, set up a set of load-based EC2 instances, which AWS OpsWorks Stacks starts and stops to handle unpredictable traffic variations.

D. Create an autoscaling group with a suitable configuration based on CPU usage. Add the autoscaling group in OpsWorks stack so that its EC2 instances can scale up and down according to the CPU level automatically.

E. Edit CloudFormation templates and create stacks for infrastructure. Add a dedicated CloudFormation stack for OpsWorks deployment and use the nested infrastructure stacks.

F. Create CloudFormation stacks for infrastructure. For the OpsWorks configurations, use AWS CLI commands such as “AWS Opsworks create-app”.

Correct Answer: B, C, E

In this scenario, as Chef is needed, OpsWorks should be considered first unless there are requirements it cannot meet.

OpsWorks has a key feature that scales instances based on time or load.


In terms of infrastructure, CloudFormation stacks should be used. Moreover, CloudFormation supports OpsWorks, which means an OpsWorks stack can be combined with other nested CloudFormation stacks. In this way, the whole deployment is implemented as code.

Nested stacks are stacks created as part of other stacks. A nested stack is created within another stack by using the “AWS::CloudFormation::Stack” resource.
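
As a minimal sketch of this idea (the template URL, resource name, and parameter are placeholder assumptions), a parent template can nest an infrastructure stack like this:

{
  "Resources": {
    "NetworkStack": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "https://s3.amazonaws.com/my-templates-bucket/network.template",
        "Parameters": {
          "VpcCidr": "10.0.0.0/16"
        }
      }
    }
  }
}

The OpsWorks resources can be declared in the same way and reference the outputs of the nested infrastructure stacks, so the whole environment can be re-run from code.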

Explanation:

Option A is incorrect: Because OpsWorks supports Blue/Green Deployment. It needs the involvement of Route53. Refer to https://d1.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf.

Option B is CORRECT: Because OpsWorks can meet the need of Blue/Green Deployment and also use Chef which the customer prefers to use.

Option C is CORRECT: Because AWS OpsWorks supports scaling based on load, including:

  • CPU: the average CPU consumption, such as 80%
  • Memory: the average memory consumption, such as 60%
  • Load: the average computational work a system performs in one minute

Option D is incorrect: Because it is not straightforward to add an Auto Scaling group to OpsWorks, although it may work. Refer to https://aws.amazon.com/blogs/devops/auto-scaling-aws-opsworks-instances/ on how to do that. The native OpsWorks scaling feature in Option C should be chosen instead, as it already meets the customer's need.

Option E is CORRECT: Because nested stacks are suitable for infrastructure and OpsWorks to work together.

Option F is incorrect: Because using AWS CLI commands to configure OpsWorks is not an automated method. An OpsWorks stack defined in CloudFormation should be considered instead.

Q6. API Gateway and Lambda non-proxy integrations have been chosen to implement an application by a software engineer. The application is a data analysis tool that returns some statistical results when the HTTP endpoint is called. The Lambda function needs to communicate with some back-end data services such as Keen.io; however, there are chances that errors happen, such as wrong data requested, bad communications, etc. The Lambda function is written in Java, and two exceptions may be returned: BadRequestException and InternalErrorException. What should the software engineer do to map these two exceptions in API Gateway to the proper HTTP return codes? For example, BadRequestException and InternalErrorException should be mapped to HTTP return codes 400 and 500, respectively. Select 2.

A. Add the corresponding error codes (400 and 500) on the Integration Response in API gateway.

B. Add the corresponding error codes (400 and 500) on the Method Response in API gateway.

C. Put the mapping logic into Lambda itself so that when an exception happens, error codes are returned at the same time in a JSON body.

D. Add Integration Responses where regular expression patterns are set such as BadRequest or InternalError. Associate them with HTTP status codes.

E. Add Method Responses where regular expression patterns are set such as BadRequest or InternalError. Associate them with HTTP status codes 400 and 500.

Correct Answer: B, D

Explanation:

When an API Gateway method is established, there are four parts: Method Request, Integration Request, Integration Response, and Method Response.


Method Request and Method Response are the API's interface with the API's frontend (a client), whereas Integration Request and Integration Response are the API's interface with the backend. In this case, the backend is a Lambda function.

For the mapping of exceptions that come from Lambda, Integration Response is the correct place to configure the mapping. However, the corresponding error code (400) on the Method Response should be created first. Otherwise, API Gateway throws an invalid configuration error response at runtime. Below is an example of mapping BadRequestException to HTTP return code 400.

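As a hedged sketch, the same mapping can be expressed in the x-amazon-apigateway-integration section of an OpenAPI definition; the Lambda ARN and the regular expression patterns below are illustrative assumptions:

"x-amazon-apigateway-integration": {
  "type": "aws",
  "httpMethod": "POST",
  "uri": "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:stats/invocations",
  "responses": {
    "default": { "statusCode": "200" },
    ".*BadRequestException.*": { "statusCode": "400" },
    ".*InternalErrorException.*": { "statusCode": "500" }
  }
}

The 400 and 500 status codes must also be declared in the method's responses (the Method Response) for the mapping to be valid.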

Option A is incorrect: Because HTTP error codes must be defined first in the Method Response, not only in the Integration Response.

Option B is CORRECT: Because HTTP error codes are defined first in the Method Response and then mapped in the Integration Response (same reason as A).

Option C is incorrect: Because Integration Response in API gateway should be used. Refer to https://docs.aws.amazon.com/apigateway/latest/developerguide/handle-errors-in-lambda-integration.html on “how to Handle Lambda Errors in API Gateway”.

Option D is CORRECT: Because BadRequest or InternalError should be mapped to 400 and 500 in Integration Response settings.

Option E is incorrect: Because Method Response is the interface with the frontend. It does not deal with how to map the response from Lambda/backend.

Preparing for an AWS interview? Prepare yourself with these top 50 AWS interview questions and answers to ace the interview.

Q7: An IT company owns a web product in AWS that provides discount restaurant information to customers. It has used one S3 bucket (my_bucket) to store restaurant data such as pictures, menus, etc. The product is deployed in VPC subnets. The company's Cloud Architect decides to configure a VPC endpoint for this S3 bucket so that performance is enhanced. To comply with security rules, it is required that the new VPC endpoint is only used to communicate with this specific S3 bucket and, on the other hand, that the S3 bucket only allows read/write operations coming from this VPC endpoint. Which two options should the Cloud Architect choose to meet the security needs?

A. Use a VPC Endpoint policy for Amazon S3 to restrict access to the S3 Bucket “my_bucket” so that the VPC Endpoint is only allowed to perform S3 actions on “my_bucket”.

B. Modify the security group of the EC2 instance to limit the outbound actions to the VPC Endpoint if the outgoing traffic destination is the S3 bucket “my_bucket”.

C. In the S3 bucket “my_bucket”, add an S3 bucket policy in which all actions are denied if the source IP address is not equal to the EC2 public IP (use “NotIpAddress” condition).

D. For the S3 bucket “my_bucket”, use an S3 bucket policy that denies all actions if the source VPC endpoint is not equal to the endpoint ID that is created.

E. Create an S3 bucket policy in the S3 bucket “my_bucket” which denies all actions unless the source IP address is equal to the EC2 public IP (use “IpAddress” condition).

Correct Answer: A, D

In this case, two restrictions are required:

1. For the VPC endpoint, restrict access to the specific S3 bucket “my_bucket”. A VPC endpoint policy is needed:

{
  "Statement": [
    {
      "Sid": "Access-to-my-bucket-only",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::my_bucket",
                   "arn:aws:s3:::my_bucket/*"]
    }
  ]
}

2. For the S3 bucket “my_bucket”, restrict access to the new VPC endpoint. An S3 bucket policy is required:

{
  "Version": "2012-10-17",
  "Id": "Policy1415115909152",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPCE-only",
      "Principal": "*",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": ["arn:aws:s3:::my_bucket",
                   "arn:aws:s3:::my_bucket/*"],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "vpce-1a2b3c4d"
        }
      }
    }
  ]
}

In an S3 bucket policy for a VPC endpoint, the aws:SourceIp condition cannot be used: for either NotIpAddress or IpAddress, the condition fails to match any specified IP address or IP address range. Instead, the specific endpoint ID should be used in the S3 bucket policy.

Explanation:

Option A is CORRECT: Because a VPC endpoint policy restricts which resources and actions can be accessed through the VPC endpoint. It is an IAM resource policy that you attach to an endpoint when you create or modify the endpoint.

Option B is incorrect: Because a security group cannot limit the actions performed through a VPC endpoint.

Option C is incorrect: Because in an S3 bucket policy, the NotIpAddress condition is always met for traffic coming through a VPC endpoint, so it cannot help restrict that traffic.

Option D is CORRECT: Because in S3 bucket policy, a rule can be set up to deny all actions if the incoming traffic is not from the VPC Endpoint ID.

Option E is incorrect: Same reason as option C.

Q8: An IoT company has a new product, a camera device. The device has several sensors installed and can record video as required. The device includes the AWS Kinesis Video Streams SDK in its software and is able to transmit recorded video in real time to Kinesis Video Streams. End users can then use a desktop or web client to view, download, or share the video stream. The client app should be simple and use a third-party player such as Google Shaka Player to display the video stream from Kinesis. How should the client app be designed?

A. The client can use HTTP Live Streaming (HLS) for live playback. Use GetMedia API to process and play Kinesis video streams.

B. The client can use HLS for live playback. Use GetHLSStreamingSessionURL API to retrieve the HLS streaming session URL then provide the URL to the video player.

C. The client can use Adobe HTTP Dynamic Streaming (HDS) for live playback. Use GetHDSStreamingSessionURL API to retrieve the HDS streaming session URL then provide the URL to the video player.

D. The client can use Microsoft Smooth Streaming (MSS) for live playback. Use GetMSSStreaming API to retrieve the MSS streaming to the video player.

Correct Answer: B

Explanation:

The most straightforward way to view or live playback the video in Kinesis Video Streams is using HLS. HTTP Live Streaming (HLS) is an industry-standard HTTP-based media streaming communications protocol.

Option A is incorrect: Because although the GetMedia API may work, it is not as simple as HLS; you would have to build your own player that uses GetMedia. However, in this case, a third-party player is needed. Reference: https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/how-hls.html#how-hls-ex1-session.

Option B is CORRECT: Because the GetHLSStreamingSessionURL API is required for a third-party player to play the HLS streams.

Option C is incorrect: Because HTTP Live Streaming (HLS) should be used to playback the Kinesis Video Streams.

Option D is incorrect: Same reason as Option C.
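
As a rough sketch of the flow (the stream name and endpoint URL below are placeholders), the client backend first looks up the data endpoint for the stream and then requests an HLS streaming session URL that is handed to the third-party player:

aws kinesisvideo get-data-endpoint \
    --stream-name camera-stream-01 \
    --api-name GET_HLS_STREAMING_SESSION_URL

# Call the archived-media API against the returned DataEndpoint
aws kinesis-video-archived-media get-hls-streaming-session-url \
    --endpoint-url https://b-1234abcd.kinesisvideo.us-east-1.amazonaws.com \
    --stream-name camera-stream-01 \
    --playback-mode LIVE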

Selection of good books is important while preparing for the AWS Solutions Architect Associate exam. Check out the list of the Best Books for AWS Certified Solutions Architect Exam now!

Q9: You are hired as an AWS solutions architect in a startup company. You notice that there are some issues with the backup strategy for EC2 instances and that there is no snapshot lifecycle management at all. Users just create snapshots manually, without a routine policy to control them. You want to suggest using a proper EBS snapshot lifecycle policy. How would you persuade your team lead to approve this suggestion? (Select TWO)

A. A snapshot lifecycle policy helps to retain backups as required by auditors or internal compliance.

B. An EBS Snapshot Lifecycle helps to protect valuable data by enforcing a regular backup schedule.

C. A proper snapshot lifecycle policy is able to reduce storage costs as the snapshots taken by the scheduling policy are free.

D. The user can design their own schedule to backup snapshots according to different requirements, such as every 1 hour, 12 hours, 24 hours, 1 week, etc.

Correct Answer: A, B

Explanation:

An EBS snapshot lifecycle policy, as a backup strategy, brings many benefits for EC2 users. For details, please refer to https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html.

Option A is CORRECT: Because EBS volumes get a routine backup, which helps with audits and internal compliance.

Option B is CORRECT: Because this is the major benefit of a lifecycle policy: it helps preserve important data, and EBS volumes can be easily restored from the snapshots.

Option C is incorrect: A snapshot lifecycle policy can reduce storage costs by deleting outdated backups; however, the snapshots themselves still incur storage costs.

Option D is incorrect: Because the snapshots can only be created every 12 hours or 24 hours.

Q10: An IT company has a big data analytics application that is deployed on EC2 instances in multiple Availability Zones. These EC2 instances simultaneously access a shared Amazon EFS file system using a traditional file permissions model. A recent internal security audit has found that there is a potential security risk, as the EFS file system is not encrypted either at rest or in transit. What actions could be taken to address the potential security threat posed by the lack of encryption for the EFS file system?

A. The encryption of data at rest has to be enabled when the Amazon EFS file system is created. The encryption of data in transit can be enabled when the file system is mounted on the EC2 instances.

B. The encryption of data at rest and in transit can be enabled when the Amazon EFS file system is created.

C. The encryption of data at rest and in transit can only be enabled when the Amazon EFS file system is mounted on the EC2 instances.

D. The encryption of data at rest can be enabled when the Amazon EFS file system is mounted on the EC2 instances. The encryption of data in transit is enabled when the EFS file system is created using the AWS console or CLI.

Correct Answer: A

Explanation:

Both encryption of data in transit and encryption at rest are supported for EFS, so Amazon EFS offers a comprehensive encryption solution. The blog post https://aws.amazon.com/blogs/aws/new-encryption-of-data-in-transit-for-amazon-efs/ introduces this.

Option A is CORRECT: For the encryption at rest, it can be enabled as an option when the EFS file system is created:

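As a hedged CLI sketch (the creation token is a placeholder), creating the file system with encryption at rest enabled looks like this:

aws efs create-file-system \
    --creation-token analytics-efs \
    --performance-mode generalPurpose \
    --encrypted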

For the encryption in transit, it can be enabled when the EFS file system is mounted:

sudo mount -t efs -o tls fs-12345678:/ /mnt/efs

Reference is in https://docs.aws.amazon.com/efs/latest/ug/encryption.html.

Option B is incorrect: Because the encryption of data in transit is enabled when EFS file system is mounted.

Option C is incorrect: Because the encryption of data at rest is enabled when EFS file system is created.

Option D is incorrect: Same reason as Option B & C.

Below are new questions and answers for the AWS Solutions Architect Professional exam, updated in May 2022.

Domain: Cost Control

Q11: You are looking to migrate your Development and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each account's bill to a Management AWS account using Consolidated Billing. To make sure that you keep within the budget, you would like to implement a way for administrators in the Management account to have access to stop, delete, and/or terminate resources in both the Dev and Test accounts. Identify which of the options will allow you to achieve this goal.

A. Create IAM users in the Management account with full Admin permissions. Create cross-account roles in the Dev and Test accounts that grant Management account access to the resources in the account by inheriting permissions from the Management account
B. Create IAM users and a cross-account role in the Management account that grants full Admin permissions to the Dev and Test accounts
C. Create IAM users in the Management account with the “AssumeRole” permissions. Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant Management account access
D. Link the accounts using Consolidated Billing. This will give IAM users in the Management account access to the resources in Dev and Test accounts

Correct Answer: C

Explanation

The scenario here is asking you to give permissions to administrators in the Management account such that they can have access to stop, delete, and terminate the resources in two accounts: Dev and Test.

Tip: Remember that you always create roles in the account whose resources are to be accessed. In this example, that would be the Dev and Test accounts. Then you create the users in the account that will be accessing the resources and give them permission to assume that particular role. In this example, the Management account should create the users.

Option A is incorrect because the Management account IAM users need to assume roles created in the Dev and Test accounts, and those roles should have suitable permissions so that the Management account IAM users can access the resources.
Option B is incorrect because the cross-account roles should be created in the Dev and Test accounts, not in the Management account.
Option C is CORRECT because the cross-account roles are created in the Dev and Test accounts, and the users created in the Management account are given permission to assume those roles.
Option D is incorrect because consolidated billing does not give access to resources in this fashion.

For more information on cross-account access, please visit the below URL: http://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
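
As a hedged sketch of this pattern (the account IDs and role name are placeholder assumptions), the role created in the Dev or Test account carries a trust policy that allows the Management account to assume it:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
    "Action": "sts:AssumeRole"
  }]
}

The administrators' IAM users or group in the Management account then get a policy that allows them to assume that role:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::222222222222:role/ManagementAdminAccess"
  }]
}

Here 111111111111 stands for the Management account and 222222222222 for a Dev or Test account.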

Domain : Design for Organizational Complexity

Q12: An administrator in your company has created a VPC with an IPv4 CIDR block 10.0.0.0/24. Now they want to add additional address space outside of the current VPC CIDR because there is a requirement to host more resources in that VPC. Which of the options below can be used to accomplish this?

A. You cannot change a VPC’s size. Currently, to change the size of a VPC, you must terminate your existing VPC and create a new one
B. Expand your existing VPC by adding secondary IPv4 IP ranges (CIDRs) to your VPC
C. Delete all the subnets in the VPC and expand the VPC
D. Create a new VPC with a greater range and then connect the older VPC to the newer one

Correct Answer: B

Explanation

An existing CIDR for a VPC is not modifiable. However, you can add additional CIDR blocks, i.e., up to four secondary IPv4 CIDR blocks to an already existing VPC.

Option A is incorrect because you can change the CIDR of VPC by adding up to 4 secondary IPv4 IP CIDRs to your VPC.
Option B is CORRECT because you can expand your existing VPC by adding up to four secondary IPv4 IP ranges (CIDRs) to your VPC.
Option C is incorrect because deleting the subnets is unnecessary.
Option D is incorrect because this configuration would peer the VPC. It will not alter the existing VPC’s CIDR.

For more information on VPC and its FAQs, please refer to the following links: https://aws.amazon.com/about-aws/whats-new/2017/08/amazon-virtual-private-cloud-vpc-now-allows-customers-to-expand-their-existing-vpcs/, https://aws.amazon.com/vpc/faqs/
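
For illustration, here is a hedged CLI sketch (the VPC ID and CIDR block are placeholders) that associates a secondary CIDR block with an existing VPC:

aws ec2 associate-vpc-cidr-block \
    --vpc-id vpc-0abc12345def67890 \
    --cidr-block 10.1.0.0/16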

 

Domain : Migration Planning

Q13: A middle-sized company is planning to migrate its on-premises servers to AWS. At the moment, it uses various licenses, including Windows Server, SQL Server, IBM Db2, SAP ERP, etc. After migration, the existing licenses should continue to work in EC2. The IT administrators prefer to use a centralized place to control and manage the licenses to prevent potential non-compliant license usage. For example, SQL Server Standard's license only allows 50 vCPUs, which means a rule is needed to limit the number of SQL Servers in EC2. Which option is correct for the IT administrators to use?

A. Create license rules in AWS Systems Manager for all BYOL licenses. Use the rules to make sure that there are no non-compliant activities. Link the rules when the EC2 AMI is created. The Systems Manager console provides license usage status
B. Define license rules in AWS License Manager for the required licenses. Enforce the license rules in EC2 and track usage in the AWS License Manager console
C. Use a license management blueprint to create a dedicated Lambda to control license usage. Lambda outputs the usage status to CloudWatch metrics, which can be used by the administrators to track the status
D. Define and enforce license rules in AWS License Manager for the Microsoft-relevant licenses such as Windows and SQL Server, as only Microsoft licenses are supported. For the other licenses such as IBM Db2, track the license usage in AWS Systems Manager

Correct Answer: B

Explanation

AWS License Manager is a central place to manage licenses for AWS EC2 and on-premises instances. Using it involves three parts:

  • Define licensing rules.
  • Enforce licensing rules.
  • Track usage.

AWS License Manager currently integrates with Amazon EC2, allowing you to track licenses for default (shared-tenancy) EC2 instances, Dedicated Instances, Dedicated Hosts, Spot Instances, Spot Fleet, and Auto Scaling groups. Refer to https://docs.aws.amazon.com/license-manager/latest/userguide/license-manager.html.

Option A is incorrect. Because AWS License Manager, not Systems Manager, manages the BYOL licenses. Although AWS Systems Manager can work together with AWS License Manager to manage licenses for on-premises servers and non-AWS public clouds, it is not the central place that provides license management.
Option B is CORRECT: Because AWS License Manager can define licensing rules, track license usage, and enforce controls on license use to reduce the risk of license overages.
Option C is incorrect: Because the AWS License manager should be considered first for licensing management.
Option D is incorrect: Because AWS License Manager can manage non-Microsoft licenses.

According to https://docs.aws.amazon.com/license-manager/latest/userguide/license-manager.html, License Manager tracks various software products from Microsoft, IBM, SAP, Oracle, and other vendors.
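
As a hedged sketch (the configuration name and limit are assumptions based on the question's example), a license configuration that hard-limits SQL Server Standard to 50 vCPUs could be defined like this:

aws license-manager create-license-configuration \
    --name sql-server-standard \
    --license-counting-type vCPU \
    --license-count 50 \
    --license-count-hard-limit

The resulting license configuration can then be associated with an AMI or launch template so that launches exceeding the limit are blocked.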

 

Domain : Design for New Solutions

Q14: An outsourcing company is working on a government project. Security is very important to the success of the application. The application is developed mainly on EC2 with several Application Load Balancers. CloudFront and Route 53 are also configured. The major concern is that the application should be protected against DDoS attacks. The company decides to activate the AWS Shield Advanced feature. To this effect, it has hired an external consultant to 'educate' its employees on the same. Which of the options below help the company understand the AWS Shield Advanced plan?

A. AWS Shield Advanced plan is able to protect application load balancers, CloudFront and Route53 from DDoS attacks
B. AWS Shield Advanced plan does not have a monthly base charge. The company only needs to pay the data transfer fee. Other than that, AWS WAF includes no additional cost
C. Route 53 is not covered by AWS Shield Advanced plan. However, Route 53 is able to be protected under AWS WAF. A dedicated rule in WAF should be customized
D. 24*7 support by the DDoS Response team. Critical and urgent priority cases can be answered quickly by DDoS experts. Custom mitigations during attacks are also available
E. Real-time notification of attacks is available via Amazon CloudWatch. Historical attack reports are also provided
F. AWS Shield is a sub-feature within AWS WAF. AWS Shield Advanced can be activated in AWS WAF console, which also provides the near real-time metrics and packet captures for attack forensics

Correct Answers: A, D and E

Explanation

AWS Shield has two plans – AWS Shield Standard and AWS Shield Advanced.

AWS Shield Standard:

AWS Shield Standard activates automatically at no additional charge. AWS Shield Standard defends against the most common, frequently occurring network and transport layer DDoS attacks that target your applications.

AWS Shield Advanced:

AWS Shield Advanced provides higher levels of protection against attacks. It has a subscription fee of $3,000 per month.

Option A is CORRECT. Because Elastic Load Balancing (ELB), Amazon CloudFront, and Amazon Route 53 are all covered by AWS Shield Advanced.
Option B is incorrect. Because AWS Shield Advanced has a subscription commitment of 1 year with a base monthly fee of $3,000.
Option C is incorrect. Because Route 53 is covered by AWS Shield Advanced.
Option D is CORRECT. Because 24*7 support by the DDoS Response team is a key feature of the advanced plan.
Option E is CORRECT. Because AWS Shield Advanced integrates with AWS CloudWatch and provides relevant reports.
Option F is incorrect. Because AWS Shield is not within AWS WAF. Please note that both of them help protect the AWS resources. AWS WAF is a web application firewall service, while AWS Shield provides expanded DDoS attack protection for the AWS resources.

 

Domain : Design for Organizational Complexity

Q15: Your organization is planning to move one of its high-performance data analytics applications, running on Linux servers purchased from a third-party vendor, to AWS. Currently, the application runs behind an on-premises load balancer, and all the data is stored in a very large shared file system for low-latency and high-throughput purposes. The management wants minimal disruption to the existing service and also wants a stepwise migration for easy rollback. Which of the options below should be part of the migration plan? (Select THREE)

A. Save all the data on S3 and use it as shared storage. Use an application load balancer with EC2 instances to share the processing load.
B. Create a RAID 1 storage using EBS and run the application on EC2 with application-level load balancers to share the processing load.
C. Use the VPN or Direct Connect to create a link between your company premise and AWS regional data center.
D. Create an EFS with provisioned throughput and share the storage between your on-premise instances and EC2 instances.
E. Setup a Route 53 record to distribute the load between on-premises and AWS load balancer with the weighted routing policy.

Correct Answers: C, D and E

Explanation

Options C, D and E are correct because a network extension via VPN or Direct Connect will allow the on-premises instances to use AWS resources like EFS. EFS is elastic file storage that can be mounted on EC2 and other instances. It is inherently durable and scalable, and it stores data in multiple Availability Zones by default. With a Route 53 weighted routing policy, requests can be distributed between on-premises and AWS resources in a controlled manner.
Option A is INCORRECT because S3 would work as shared, durable storage, but it may not be a suitable choice for low-latency, high-throughput processing. As the application cannot be easily modified, presenting S3 as a local file system would be an additional task and would have to be done via a File Gateway.
Option B is INCORRECT because the intent is to use a shared file system solution (EFS). RAID 1 on EBS is not necessary, as the application requires data from EFS rather than local storage.
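
As a hedged sketch of the weighted routing piece (the hosted zone ID, record name, endpoints, and weights are placeholder assumptions), traffic can be split 80/20 between the on-premises load balancer and the AWS load balancer:

aws route53 change-resource-record-sets \
    --hosted-zone-id Z1EXAMPLE \
    --change-batch '{
      "Changes": [
        {
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "analytics.example.com",
            "Type": "CNAME",
            "SetIdentifier": "on-premises",
            "Weight": 80,
            "TTL": 60,
            "ResourceRecords": [{"Value": "lb.onprem.example.com"}]
          }
        },
        {
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "analytics.example.com",
            "Type": "CNAME",
            "SetIdentifier": "aws",
            "Weight": 20,
            "TTL": 60,
            "ResourceRecords": [{"Value": "my-alb-1234567890.ap-south-1.elb.amazonaws.com"}]
          }
        }
      ]
    }'

Shifting the weights step by step gives the controlled, easily reversible migration the management asked for.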

 

Domain : Continuous Improvement for Existing Solutions

Q16: A communication company has deployed several EC2 instances in the ap-southeast-1 region, which are used to monitor user activities. The AWS administrator has configured an EBS snapshot lifecycle policy to create a snapshot every day for each EBS volume to preserve data. The retention is configured as 5, which means the oldest snapshot is deleted after 5 days. The administrator plans to copy some snapshots manually to another region, ap-southeast-2, as these snapshots contain some important data. Can these snapshots be retained?

A. These new snapshots may be deleted after the retention period, as they are still affected by the retention policy
B. These new snapshots can be kept only when they are copied to another region. Otherwise, they may be deleted by the retention policy. In this case, the snapshots can be kept
C. These new snapshots can be kept as the retention schedule is not carried over to the copy
D. The new snapshots in region ap-southeast-2 will be deleted after 5 days unless the delete protection option is enabled

Correct Answer: C

Explanation

Copying a snapshot to a new Region is commonly used for geographic expansion, migration, disaster recovery, etc.

EBS snapshot lifecycle policies have some rules. One of them is that when you copy a snapshot created by a policy, the new copy is not influenced by the policy's retention schedule.

Option A is incorrect: Because the new snapshot copies will be kept.
Option B is incorrect: Because the copies can be retained regardless of whether they are in the same region or a different one.
Option C is CORRECT: Because the new snapshot copies are not affected by the original policy.

Reference is in https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html.

Option D is incorrect: Because there is no delete protection option for snapshots.
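
As a hedged sketch (the snapshot ID is a placeholder), the manual copy described in the question is run against the destination region and stays outside the DLM retention schedule:

aws ec2 copy-snapshot \
    --region ap-southeast-2 \
    --source-region ap-southeast-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --description "Manual copy retained outside the DLM policy"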

 

Domain : Design for Organizational Complexity

Q17: You are working for a large company. You have set up AWS consolidated billing with a Management account and several member accounts. However, the Management account's cost allocation report does not use the AWS-generated cost allocation tags to organize the resource costs.
For example, there is an AWS-generated tag called "createdBy" which tracks who created a resource, but in the report the operator cannot filter the costs by the "createdBy" tag. How can you fix this issue in the cost allocation report?

A. Use the Management account to log in to the AWS console and activate the user-defined tags in the Billing and Cost Management console
B. For both, the Management account and member accounts, use AWS CLI to activate AWS generated tags for Billing and Cost Management
C. Log in to the AWS console of both Management account and member accounts, activate the user-defined tags in Billing -> Cost Explorer -> Cost Allocation Tags
D. Log in to the AWS console using the Management account and activate the AWS-generated tags in the Billing and Cost Management console

Correct Answer: D

Explanation

AWS provides two types of cost allocation tags: AWS-generated tags and user-defined tags. AWS defines, creates, and applies the AWS-generated tags for you, and users define, create, and apply user-defined tags.

To use the AWS-generated tags, a management account owner must activate them in the Billing and Cost Management console. When a management account owner activates the tag, the tag is also activated for all member accounts.

Option A is incorrect: Because AWS-generated tags should be activated.
Option B is incorrect: Because AWS-generated tags can only be activated in the management account.
Option C is incorrect: Same reason as Option B. Also, it is not user-defined tags.
Option D is CORRECT: Because the AWS-generated tags can be activated by the management account in the Billing and Cost Management console.

Reference:
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/aws-tags.html

Domain : Continuous Improvement for Existing Solutions

Q18: Your company has a logging microservice that is used to generate logs when users enter certain commands in another application. This logging service is implemented via an SQS standard queue that an EC2 instance is listening to. However, you have found that on some occasions the order of the logs is not maintained. As a result, it becomes harder to use this service to trace users' activities. How can you fix this issue in the simplest way?

A. Convert the existing standard queue into a FIFO queue. Add a deduplication ID for the messages that are sent to the queue
B. Delete the existing standard queue and recreate it as a FIFO queue. As a result, the order for the messages to be received is ensured
C. Migrate the whole microservice application to SWF so that the operation sequence is guaranteed
D. The wrong order of timestamps is a limitation of SQS, which does not have a fix

Correct Answer: B

Explanation

The FIFO queue improves upon and complements the standard queue. The most important features of this queue type are FIFO (First-In-First-Out) delivery and exactly-once processing. A FIFO queue is mainly used when message processing must be guaranteed to happen without any items being out of order or duplicated.

Option A is incorrect because you can’t convert an existing standard queue into a FIFO queue. This is clarified in FIFO-queues.
Option B is CORRECT because, in this context, the FIFO queue can guarantee the sequence of users' operations, which fixes the issue in the logging system. Note that, per the question description, the DelaySeconds parameter is assumed to be a per-queue delay, a message group ID is provided per message, and the log message bodies are considered distinct by SQS (e.g. they differ by timestamp, application, command, or user). Otherwise, scenarios where additional logic applies to uniqueness or duplicates should be handled by the producer providing a message deduplication ID value.
Option C is incorrect because this is not a straightforward method by changing the whole microservice to SWF. Option B is much simpler than this option.
Option D is incorrect. Refer to the explanations in Option B.

References:
FIFO-queues-message-order, Sqs-best-practices
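
As a hedged sketch (the queue name, account ID, and message body are placeholder assumptions), a FIFO queue is created with the .fifo suffix, and each log message carries a message group ID so ordering is preserved within the group:

aws sqs create-queue \
    --queue-name user-activity-logs.fifo \
    --attributes FifoQueue=true,ContentBasedDeduplication=true

aws sqs send-message \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/user-activity-logs.fifo \
    --message-group-id user-alice \
    --message-body '{"user":"alice","command":"login","timestamp":"2022-05-01T08:00:00Z"}'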

 

Domain : Design for New Solutions

Q19: API Gateway and Lambda integration have been chosen to implement an application by a software engineer. The application is a data analysis tool that returns some statistical results when the HTTP endpoint is called. The Lambda function needs to communicate with some back-end data services such as Keen.io. However, there are chances that errors happen, such as wrong data requested, bad communications, etc. The Lambda function is written in Java. Two exceptions may be returned, which are BadRequestException and InternalErrorException. What should the software engineer do to map these two exceptions in API Gateway to the proper HTTP return codes?
For example, BadRequestException and InternalErrorException are mapped to HTTP return codes 400 and 500, respectively.

A. Add the corresponding error codes (400 and 500) on the Integration Response in the API gateway
B. Add the corresponding error codes (400 and 500) on the Method Response in the API gateway
C. Put the mapping logic into Lambda itself so that when an exception happens, error codes are returned at the same time in a JSON body
D. Add Integration Responses where regular expression patterns are set, such as BadRequest or InternalError. Associate them with HTTP status codes
E. Add Method Responses where regular expression patterns are set, such as BadRequest or InternalError. Associate them with HTTP status codes 400 and 500

Correct Answers: B and D

Explanation

When an API Gateway method is established, there are four parts: Method Request, Integration Request, Integration Response, and Method Response.

Method Request and Method Response are the API's interface with the API's frontend (a client), whereas Integration Request and Integration Response are the API's interface with the backend. In this case, the backend is a Lambda function.

For the mapping of exceptions that come from Lambda, Integration Response is the correct place to configure the mapping. However, the corresponding error code (400) on the Method Response should be created first. Otherwise, API Gateway throws an invalid configuration error response at runtime (see the sketch under Q6 above for an example of mapping BadRequestException to HTTP return code 400).

Option A is incorrect: Because HTTP error codes must be defined first in the Method Response, not only in the Integration Response.
Option B is CORRECT: Because HTTP error codes are defined first in the Method Response and then mapped in the Integration Response (same reason as A).
Option C is incorrect: Because Integration Response in API gateway should be used. Refer to https://docs.aws.amazon.com/apigateway/latest/developerguide/handle-errors-in-lambda-integration.html on “how to Handle Lambda Errors in API Gateway”.
Option D is CORRECT: Because BadRequest or InternalError should be mapped to 400 and 500 in Integration Response settings.
Option E is incorrect: Because Method Response is the interface with the frontend. It does not deal with how to map the response from Lambda/backend.

Reference:

https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html

 

Domain : Migration Planning

Q20: A company runs a major auction platform where people buy and sell a wide range of products. The platform requires that transactions from buyers and sellers get processed in exactly the order received. At the moment, the platform is implemented using RabbitMQ, which is a lightweight queuing system. The company has consulted you to migrate the on-premises platform to AWS. How should you design the migration plan? (Select TWO)

A. When the bids are received, send the bids to an SQS FIFO queue before they are processed
B. When the users have submitted the bids from the frontend, the backend service delivers the messages to an SQS standard queue
C. Add a message group ID to the messages before they are sent to the SQS queue so that the message processing is in a strict order
D. Use an EC2 or Lambda to add a deduplication ID to the messages before the messages are sent to the SQS queue to ensure that bids are processed in the right order

Correct Answers: A and C

Explanation

SQS has two queue types: the standard queue and the FIFO queue. In this case, the FIFO queue should be chosen, as the order of message processing is critical to the application. The key features of a FIFO queue are first-in-first-out delivery and exactly-once processing.

Option A is CORRECT: Because the SQS FIFO queue can help with the message processing in the right order.
Option B is incorrect: Because the SQS standard queue may have an issue that some messages are handled in the wrong sequence.
Option C is CORRECT: Because the message group ID is a feature to help with the FIFO delivery. Messages that belong to the same message group are always processed one by one, in a strict order relative to the message group.
Option D is incorrect: Because the deduplication ID helps prevent messages from being processed more than once; it is not used to guarantee the message order.

 

Domain : Design for New Solutions

Q21: Server-side encryption is about data encryption at rest. That is, Amazon S3 encrypts your data at the object level as it writes it to disk in its data centers and decrypts it for you when you access it. There are a few different options depending on how you choose to manage the encryption keys. One of the options is called 'Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)'. Which of the following best describes how this encryption method works?

A. There are separate permissions for the use of an envelope key (a key that protects your data’s encryption key) that provides added protection against unauthorized access of your objects in S3 and also provides you with an audit trail of when your key was used and by whom
B. Each object is encrypted with a unique key employing strong encryption. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates
C. You manage the encryption keys and Amazon S3 manages the encryption, as it writes to disk, and decryption when you access your objects
D. A randomly generated data encryption key is returned from Amazon S3, which is used by the client to encrypt the object data

Correct Answer: B

Explanation

Server-side encryption with Amazon S3-managed encryption keys (SSE-S3) employs strong multi-factor encryption. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data.

Option A is incorrect because there are no separate permissions to the key that protects the data key.
Option B is CORRECT because as mentioned above, each object is encrypted with a strong unique key and that key itself is encrypted by a master key.
Option C is incorrect because the keys are managed by AWS.
Option D is incorrect because there is no randomly generated key and the client does not do the encryption.

For more information on S3 encryption, please visit the links: https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html and https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
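
As a hedged sketch (the bucket name is a placeholder), SSE-S3 can be set as the default encryption for a bucket so that new objects are encrypted with S3-managed keys:

aws s3api put-bucket-encryption \
    --bucket my-example-bucket \
    --server-side-encryption-configuration '{
        "Rules": [
            { "ApplyServerSideEncryptionByDefault": { "SSEAlgorithm": "AES256" } }
        ]
    }'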

 

Domain : Cost Control

Q22: You work in a video game company, and your team is working on a feature that tells how many times certain web pages have been viewed or clicked. You have also created an AWS Lambda function to show some key statistics of the data. You tested the Lambda function, and it worked perfectly.
However, your team lead requires you to show the statistics every day at 8:00 AM GMT on a big TV screen so that when employees come into the office every morning, they have a rough idea of how the feature is doing. What is the most cost-efficient and straightforward way for you to make this happen?

A. Create an AWS CloudWatch Events rule that is scheduled using a cron expression. Configure the target as the Lambda function
B. Create an Amazon Linux EC2 T2 instance and set up a Cron job using Crontab. Use AWS CLI to call your AWS Lambda every 8:00 AM
C. Use Amazon Batch to set up a job with a job definition that runs every 8:00 AM for the Lambda function
D. In AWS CloudWatch Events console, click “Create Event” using the cron expression “ * ? * * 08 00”. Configure the target as the Lambda function

Correct Answer: A

Explanation

Potentially, more than one option may work. However, this question asks for the most cost-efficient and straightforward method, which needs to be considered.

Option A is CORRECT because the AWS CloudWatch Events rule is free and quite easy to begin with. To schedule a daily event at 8:00 AM GMT, you just need to set up a cron rule, as shown in the sketch below.
Option B is incorrect: Because launching a new EC2 instance for this task is not cost-efficient.
Option C is incorrect: Because this is not how AWS Batch works: AWS Batch runs jobs as containerized applications on Amazon EC2 instances in your compute environment.
Option D is incorrect: Because, firstly, it should be “Create rule” rather than “Create Event”. Secondly, the cron expression “ * ? * * 08 00” is invalid.

For more information, please check the AWS docs: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html
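
As a hedged sketch (the rule name, region, account ID, and function name are placeholder assumptions), the daily 8:00 AM GMT trigger can be wired up as follows:

# Rule that fires every day at 08:00 UTC
aws events put-rule \
    --name daily-stats-8am-gmt \
    --schedule-expression "cron(0 8 * * ? *)"

# Point the rule at the Lambda function
aws events put-targets \
    --rule daily-stats-8am-gmt \
    --targets '[{"Id": "show-stats", "Arn": "arn:aws:lambda:us-east-1:123456789012:function:show-stats"}]'

# Allow CloudWatch Events to invoke the function
aws lambda add-permission \
    --function-name show-stats \
    --statement-id allow-events-invoke \
    --action lambda:InvokeFunction \
    --principal events.amazonaws.com \
    --source-arn arn:aws:events:us-east-1:123456789012:rule/daily-stats-8am-gmt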

 

Domain : Design for Organizational Complexity

Q23: Your company has developed a suite of business analytics services as a SaaS application used by hundreds of customers worldwide. Recently there has been an acquisition of a product, and the management has decided to integrate the product with the main service. The product also runs on the AWS platform. The initial phase required the product software to use some private resources of the main SaaS service.
The operations team created a cross-account role with the required policies and assigned the role to the product's account to start using the resources. After a few days, the operations team found that someone had deleted an important S3 bucket from their AWS account, which caused a feature disruption across the service.
The management has asked the auditing team to inspect and identify the root cause of the resource deletion based on the CloudTrail logs. Select two valid options through which the auditing team can identify who deleted the resources.

A. The auditing team will need the CloudTrail logs detail of both the SaaS application’s AWS account and the product’s AWS account
B. The auditing team can find the detail only from the SaaS application’s AWS account, as the bucket was part of that account
C. Look for the DeleteBucket API record into the SaaS application’s AWS account CloudTrail logs. It should have a user Id and the bucket detail as part of the log detail
D. Look for the sharedEventID and the userIdentity for the DeleteBucket API event in both AWS accounts
E. Look for the sharedEventID and the userIdentity for the AssumeRole API event in both AWS accounts

Correct Answers: A and D

Explanation

Option A is CORRECT because the request is made from the product’s AWS account while the resource was part of the main (SaaS application’s) AWS account. The auditor will have to check the CloudTrail logs of both accounts and match the token being used.
Option B is INCORRECT because the CloudTrail logs of the SaaS application’s account alone will not reveal the user identity. The cross-account role issues a token, and all further interaction is logged with that token. To know which user the token belongs to, the auditor will have to look into the product’s AWS account’s log trail as well.
Option C is INCORRECT because the DeleteBucket record will not have the user identity information. The log will only have the token information, as the API was invoked with a cross-account role.
Option D is CORRECT because, at the time the role is assumed into the main AWS account, an entry with the sharedEventID and the userIdentity information is recorded in the product team’s AWS account. sharedEventID helps to identify the real user, and userIdentity provides the IAM ARN that performs the action. These two can help to find who executed the DeleteBucket API. Please check the references at https://docs.aws.amazon.com/awscloudtrail/latest/userguide/shared-event-ID.html and https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html
Option E is INCORRECT because the userIdentity information will only be available inside the product team’s AWS account in response to the AssumeRole operation. The sharedEventID, though, will be available in both accounts’ log trails.
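As a rough illustration of what the auditing team would do, the boto3 sketch below pulls DeleteBucket events from one account’s CloudTrail event history and prints the sharedEventID and userIdentity fields; the same lookup would then be repeated in the other account to match the pair of events. This assumes the deletion is still within the 90-day event history window.

```python
import json
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up DeleteBucket calls recorded in this account's event history.
response = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "DeleteBucket"}]
)

for event in response["Events"]:
    detail = json.loads(event["CloudTrailEvent"])
    # sharedEventID ties the cross-account pair of events together;
    # userIdentity shows the principal (here, the assumed-role session).
    print(detail.get("sharedEventID"), detail.get("userIdentity"))
```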

 

Domain : Design for New Solutions

Q24 : You are a software engineer developing an online food-ordering web application. The Node.js backend needs to get the client’s IP to understand users’ locations. The application is deployed on AWS EC2 with a Network Load Balancer to distribute traffic. For the Network Load Balancer, the targets are specified by instance ID, and TLS is terminated on the Network Load Balancer. You are worried that the backend cannot get the client’s IP because of the load balancer. Which of the descriptions below is correct in this situation?

A. Enable proxy protocol using AWS CLI for the network load balancer so that you can get the client IP in the backend service
B. You just need to get the client IP from the TCP X-Forwarded-For header, which is used to identify the user’s originating IP address connecting to the webserver
C. Source IP continues to be preserved to your back-end applications when TLS is terminated on the Network Load Balancer in this case
D. Change listener protocol to TCP or change the load balancer to the application or classic load balancer. Otherwise, the client IP cannot be preserved

Correct Answer: C

Explanation

Network Load Balancer supports TLS termination between the clients and the load balancer.

You can configure a target group so that you register targets by instance ID or IP address. If you specify targets using an instance ID, the clients’ source IP addresses are preserved and provided to your applications. If you specify targets by IP address, the source IP addresses are the private IP addresses of the load balancer nodes.

Therefore, in this case, the source IP is preserved since the targets are specified by instance ID.

References: https://aws.amazon.com/elasticloadbalancing/features/#compare and https://docs.amazonaws.cn/en_us/elasticloadbalancing/latest/network/elb-ng.pdf
Option A is incorrect because the proxy protocol is not required in this case since the source IP is preserved.
Option B is incorrect because X-Forwarded-For is an HTTP header instead of a TCP header. Also, it is not needed in this scenario.
Option C is CORRECT because since the source IP is preserved, nothing else needs to be done.
Option D is incorrect because changing the listener protocol to TCP would remove TLS termination at the load balancer, introducing a security concern, and it is not required anyway.
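If you want to confirm that client IPs will be preserved, one quick check, sketched below with boto3 and a placeholder target group name, is to verify that the target group’s TargetType is “instance”.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder target group name for illustration.
response = elbv2.describe_target_groups(Names=["food-order-backend"])

for tg in response["TargetGroups"]:
    # 'instance' means client source IPs are preserved to the backend;
    # 'ip' means the backend sees the load balancer nodes' private IPs.
    print(tg["TargetGroupName"], tg["TargetType"])
```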

 

Domain : Design for New Solutions

Q25 : You decide to create a bucket on AWS S3 called ‘mybucket’ and then perform the following actions in the order that they are listed here.
– You upload a file to the bucket called ‘file1’
– You enable versioning on the bucket
– You upload a file called ‘file2’
– You upload a file called ‘file3’
– You upload another file called ‘file2’
Which of the following is true for ‘mybucket’?

A. There will be 1 version ID for file1, 2 version IDs for file2, and 1 version ID for file3
B. The version ID for file1 will be null. There will be 2 version IDs for file2, and 1 version ID for file3
C. There will be 1 version ID for file1, the version ID for file2 will be null, and there will be 1 version ID for file3
D. All file version IDs will be null because versioning must be enabled before uploading objects to ‘mybucket’

Correct Answer: B

Explanation

Objects stored in your bucket before you set the versioning state have a version ID of null. When you enable versioning, existing objects in your bucket do not change. What changes is how Amazon S3 handles the objects in future requests.

Option A is incorrect because the version ID for file1 would be null.
Option B is CORRECT because file1 was put in the bucket before versioning was enabled. Hence, it will have a null version ID. file2 will have two version IDs, and file3 will have a single version ID.
Option C is incorrect because file2 cannot have a null version ID, as versioning was enabled before it was put in the bucket.
Option D is incorrect because once versioning is enabled, files uploaded after that point will not have a null version ID, but file1 was uploaded before versioning was enabled, so it will have null as its version ID.

For more information on S3 versioning, please visit the below link: http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
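You can reproduce the scenario and inspect the version IDs with a short boto3 sketch like the one below (the bucket name matches the question; everything else is standard API usage). file1 shows up with the literal version ID “null”, while file2 has two version IDs.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "mybucket"  # bucket name from the question; assumed to exist in your account

# List every version of every object in the bucket.
response = s3.list_object_versions(Bucket=BUCKET)

for version in response.get("Versions", []):
    # Objects uploaded before versioning was enabled report the string "null".
    print(version["Key"], version["VersionId"], "latest" if version["IsLatest"] else "")
```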

 

Domain : Migration Planning

Q26 : You work in the integration team of a company, and your team is integrating the company’s infrastructure with Amazon VPC. You have recently been assigned a task to create a VPN connection, and you have login access to the AWS Management Console. The first step you plan to take is to create a customer gateway in the AWS VPC console. In order to do that, which information do you need?

A. A Border Gateway Protocol (BGP) Autonomous System Number (ASN) if the routing type is Dynamic
B. A BGP Autonomous System Number (ASN) if the routing type is static
C. A dynamic public IP address for the customer gateway device. If the customer gateway is behind a NAT device, use the NAT device’s dynamic public IP address
D. A static, internet-routable IP address for the customer gateway device

Correct Answers: A and D

Explanation

The first step of creating a VPN connection is to set up a customer gateway in the AWS VPC console according to https://docs.aws.amazon.com/vpn/latest/s2svpn/SetUpVPNConnections.html.

Option A is correct: AWS VPN uses the BGP ASN to establish the connection when dynamic routing is used.
Option B is incorrect: For static routing, no BGP ASN is needed.
Option C is incorrect: The internet-routable IP address for the customer gateway device’s external interface is required, and the value must be static, not dynamic.
Option D is correct: Same reason as option C.
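A minimal boto3 sketch of creating the customer gateway, assuming dynamic routing (so a BGP ASN is supplied) and using a placeholder static, internet-routable IP address:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder values: your device's BGP ASN and its static, internet-routable IP.
response = ec2.create_customer_gateway(
    Type="ipsec.1",
    BgpAsn=65000,
    PublicIp="203.0.113.12",
)

print(response["CustomerGateway"]["CustomerGatewayId"])
```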

 

Domain : Continuous Improvement for Existing Solutions

Q27 : You are writing an AWS CloudFormation template, and you want to assign values to properties that will not be available until runtime. You know that you can use intrinsic functions to do this but are unsure in which parts of the template they can be used. Which of the following correctly describes how you can currently use intrinsic functions in an AWS CloudFormation template?

A. You can use intrinsic functions in any part of a template
B. You can use intrinsic functions only in specific parts of a template. You can use intrinsic functions in resource properties, outputs, metadata attributes, and update policy attributes
C. You can use intrinsic functions only in the resource properties part of a template
D. You can use intrinsic functions in any part of a template, except AWSTemplateFormatVersion and Description

Correct Answer: B

Explanation

As per AWS documentation:

You can use intrinsic functions only in specific parts of a template. Currently, you can use intrinsic functions in resource properties, outputs, metadata attributes, and update policy attributes. You can also use intrinsic functions to create stack resources conditionally.

Hence, B is the correct answer.

For more information on intrinsic function, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html

 

Domain : Continuous Improvement for Existing Solutions

Q28 : Your application uses an ELB in front of an Auto Scaling group of web/application servers deployed across two AZs and a Multi-AZ RDS Instance for data persistence. The database CPU is often above 80% usage, and 90% of I/O operations on the database are reads. To improve the performance, you recently added a single-node Memcached ElastiCache Cluster to cache frequent DB query results. In the next weeks, the overall workload is expected to grow by 30%. Do you need to change anything in the architecture to maintain the high availability of the application with the anticipated additional load, and why?

A. Yes. You should deploy two Memcached ElastiCache clusters in different AZ’s with a change in application logic to support both clusters because the RDS instance will not be able to handle the load if the cache node fails
B. No. If the cache node fails, the automated ElastiCache node recovery feature will prevent any availability impact
C. Yes. You should deploy the Memcached ElastiCache Cluster with two nodes in the same AZ as the RDS DB master instance to handle the load if one cache node fails
D. No. If the cache node fails, you can always get the same data from the DB without having any availability impact

Correct Answer: A

Explanation

Option A is CORRECT because having two clusters in different AZs provides high availability of the cache nodes, removing the single point of failure. It helps cache the data, reducing the load on the database, maintaining availability, and limiting the impact of a node failure.
Option B is incorrect because, even though AWS will automatically recover the failed node, there are no other nodes in the cluster once the failure happens. So, the data from the cluster would be lost once that single node fails. For higher availability, there should be multiple nodes. Also, once the cache node fails, all the cached read load will go to the database, which will not be able to handle the load with a 30% increase to current levels. This means there will be an availability impact.
Option C is incorrect because provisioning the nodes in the same AZ does not tolerate an AZ failure. For higher availability, the nodes should be spread across multiple AZs.
Option D is incorrect because the very purpose of the cache node was to reduce the impact on the database by not overloading it. If the cache node fails, the database will not be able to handle the 30% increase in the load; so, it will have an availability impact.

More information on this topic from AWS Documentation: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/BestPractices.html

Mitigating Node Failures

To mitigate the impact of a node failure, spread your cached data over more nodes. Because Memcached does not support replication, a node failure will always result in some data loss from your cluster.

When you create your Memcached cluster, you can create it with 1 to 20 nodes or more by special request. Partitioning your data across a greater number of nodes means you’ll lose less data if a node fails. For example, if you partition your data across 10 nodes, any single node stores approximately 10% of your cached data. In this case, a node failure loses approximately 10% of your cache which needs to be replaced when a replacement node is created and provisioned.

Mitigating Availability Zone Failures

To mitigate the impact of an availability zone failure, locate your nodes in as many availability zones as possible. In the unlikely event of an AZ failure, you will lose only the data cached in that AZ, not the data cached in the other AZs.
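Option A calls for two separate clusters with application-level changes; a closely related way to remove the single point of failure, sketched below with boto3 and placeholder identifiers, is a Memcached cluster whose nodes are spread across two AZs. This is only an illustration of cross-AZ cache nodes, not the exam answer verbatim.

```python
import boto3

elasticache = boto3.client("elasticache")

# Placeholder cluster ID, node type, and AZ names for illustration.
response = elasticache.create_cache_cluster(
    CacheClusterId="webapp-cache-multi-az",
    Engine="memcached",
    CacheNodeType="cache.t3.micro",
    NumCacheNodes=2,
    AZMode="cross-az",  # spread the two nodes across the listed AZs
    PreferredAvailabilityZones=["us-east-1a", "us-east-1b"],
)

print(response["CacheCluster"]["CacheClusterStatus"])
```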

 

Domain : Design for New Solutions

Q29 : You are an AWS solutions architect in an IT company. Your company has a big data product that analyzes data from various sources like transactions, web servers, surveys, and social media. You need to design a new solution in AWS that can extract data from the source in S3 and transform the data to match the target schema automatically. Moreover, the transformed data can be analyzed using standard SQL. Which combination of solutions would meet these requirements in the best way?

A. Set up an AWS ECS cluster to manage the extract, transform, and load (ETL) service to prepare and load the data in S3 for analytics
B. Configure an AWS EMR cluster to transform the data to a target format for downstream. Execute the EMR tasks on a schedule
C. Create an AWS Glue ETL to run on schedule to scan the data in S3 and populate the Glue data catalog accordingly
D. Configure AWS QuickSight to analyze the data using standard SQL commands
E. Use AWS Athena to query the data using standard SQL

Correct Answers: C and E

Explanation

AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics.

AWS Athena is out-of-the-box integrated with AWS Glue Data Catalog, allowing you to create a metadata repository across various services.

AWS Glue and AWS Athena, when used together, can meet the requirements for this case.

Option A is incorrect: Although an ECS cluster may work, the ETL service would not be fully managed by AWS. With AWS Glue, you can create and run an ETL job with a few clicks in the AWS Management Console.
Option B is incorrect: Similar reason as Option A. AWS Glue should be the first choice when an extract, transform, and load (ETL) service is required in AWS.
Option C is CORRECT: Because AWS Glue generates Scala or PySpark (the Python API for Apache Spark) scripts with AWS Glue extensions. You can use and modify these scripts to perform various ETL operations. You can extract, clean, and transform raw data, and then store the result in a different repository, where it can be queried and analyzed.
Option D is incorrect: Because AWS QuickSight is a service to deliver insights rather than perform SQL queries.
Option E is CORRECT: Because AWS Athena can directly query the data in AWS Glue Data Catalog with SQL commands. No additional step is needed.
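As a small illustration of the Glue-plus-Athena combination, the boto3 sketch below starts a Glue crawler (one common way to populate the Glue Data Catalog) and then issues a standard SQL query through Athena. The crawler, database, table, and result-bucket names are all placeholders.

```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Placeholder names for illustration.
glue.start_crawler(Name="raw-transactions-crawler")  # populates the Glue Data Catalog

query = athena.start_query_execution(
    QueryString="SELECT source, COUNT(*) FROM transactions GROUP BY source",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(query["QueryExecutionId"])
```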

 

Domain : Continuous Improvement for Existing Solutions

Q30 : You were recently hired as an AWS architect in a startup company. The company has just developed an online photo-sharing product. After the product had been deployed for some time, you found that the Auto Scaling group reaches its maximum size from time to time due to high CPU utilization and traffic.
The network team has identified three IP addresses that sent many malicious requests during that time. You plan to configure a WAF Access Control List (ACL) with a rule to filter these three IP addresses. Which components or services can you associate the new WAF ACL with?

A. The global CloudFront distribution that the product is using
B. The Auto Scaling group that the product has used
C. All EC2 instances created for the product. Add each instance ID to associate with the ACL
D. The application load balancer for the product
E. The network load balancer for the product, however, only in US East (N. Virginia) region

Correct Answers: A and D

Explanation

For AWS WAF, an ACL can be configured to control the HTTP and HTTPS requests that are forwarded to an Amazon API Gateway API, Amazon CloudFront, or an Application Load Balancer.

Option A is CORRECT: Because CloudFront distribution is a global resource that WAF ACL can be associated with.
Option B is incorrect: Because the Auto Scaling group is not a valid resource that ACL can be linked with.
Option C is incorrect: Same as Option B.
Option D is CORRECT: Because a regional Application Load Balancer can be selected as a resource for the ACL.
Option E is incorrect: Because the Network Load Balancer is not supported. For the regions where the Application Load Balancer is supported for an AWS WAF ACL, please check the AWS documentation at

https://docs.aws.amazon.com/general/latest/gr/rande.html#waf_region.
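If you manage the web ACL with the current wafv2 API, associating it with a regional resource such as the Application Load Balancer in option D is a single call; the ARNs below are placeholders. A CloudFront distribution (option A), by contrast, is associated by setting a CLOUDFRONT-scoped ACL on the distribution itself rather than through this call.

```python
import boto3

# WAFv2 client in the same region as the Application Load Balancer.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Placeholder ARNs for illustration.
WEB_ACL_ARN = "arn:aws:wafv2:us-east-1:123456789012:regional/webacl/photo-acl/abcd1234"
ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/photo-alb/abcd1234"

wafv2.associate_web_acl(WebACLArn=WEB_ACL_ARN, ResourceArn=ALB_ARN)
```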

 

Domain : Design for New Solutions

Q31 : You work in a financial company as an AWS architect. The security team has informed you that the company’s AWS web product has recently been attacked by SQL injection. Several attackers tried to insert malicious SQL code into web requests to extract data from the MySQL database. The web application is deployed on several EC2 instances behind an Application Load Balancer. Although the attack was unsuccessful, you are expected to provide a better solution to protect the product. Which action should you perform?

A. Configure a rule in AWS Firewall Manager to block all malicious SQL injection requests for the EC2 instances
B. Create a WAF Access Control List (ACL) with a rule to block the malicious SQL injection requests. Associate the application load balancer with this new ACL
C. Use AWS Shield Advanced service to block the malicious SQL injection requests that go to the application load balancer
D. Configure a WAF Access Control List (ACL) with a rule to allow all requests except the malicious SQL injection requests. Associate each EC2 instance with the new ACL

Correct Answer: B

Explanation

There are several firewall services that AWS has provided, including AWS WAF, AWS Shield, and AWS Firewall Manager. The differences among them can be found in https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html.

In this case, AWS WAF should be used as it can help to prevent SQL injection attacks.

Option A is incorrect: AWS Firewall Manager is a tool to simplify the AWS WAF and AWS Shield Advanced administration. However, an ACL is still needed in AWS WAF. Option B is more accurate.
Option B is CORRECT: Because AWS WAF can protect against SQL injection attacks for an application load balancer.
Option C is incorrect: Because AWS Shield Advanced provides expanded DDoS attack protection rather than SQL injection attack protection.
Option D is incorrect: Because after a WAF Access Control List (ACL) is created, the application load balancer should be associated instead of all EC2 instances.
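A hedged sketch of option B using the current wafv2 API, with placeholder ACL and metric names: a regional web ACL whose single rule blocks requests whose body matches WAF’s SQL injection detection. The ACL would then be associated with the Application Load Balancer as shown in the previous question.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Placeholder ACL and metric names for illustration.
wafv2.create_web_acl(
    Name="block-sqli-acl",
    Scope="REGIONAL",  # REGIONAL scope is what an Application Load Balancer requires
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "block-sqli",
            "Priority": 0,
            "Statement": {
                "SqliMatchStatement": {
                    "FieldToMatch": {"Body": {}},
                    "TextTransformations": [{"Priority": 0, "Type": "URL_DECODE"}],
                }
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "block-sqli",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "block-sqli-acl",
    },
)
```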

 

Domain : Continuous Improvement for Existing Solutions

Q32 : A user has set up Auto Scaling with an ELB in front of the EC2 instances. The user wants to configure Auto Scaling so that whenever CPU utilization is below 10%, one instance is removed. How can the user configure this?

A. The user can get an email using SNS when the CPU utilization is less than 10%. The user can use the desired capacity of Auto Scaling to remove the instance
B. Use CloudWatch to monitor the data and Auto Scaling to remove the instances using scheduled actions
C. Configure a CloudWatch alarm in the execute policy that notifies the Auto Scaling Launch configuration when the CPU utilization is less than 10%, and configure the Auto Scaling policy to remove the instance
D. Configure a CloudWatch alarm in the execute policy that notifies the Auto Scaling group when the CPU Utilization is less than 10%, and configure the Auto Scaling policy to remove the instance

Correct Answer: D

Explanation

Option A is incorrect because the user would still have to configure CloudWatch, which should notify the Auto Scaling group to terminate an instance. Manually updating the desired capacity after receiving an email will not work in this case.
Option B is incorrect because scheduled scaling is used to scale your application in response to predictable load changes, not in response to a metric notification.
Option C is incorrect because the notification should be sent to the Auto Scaling group, not the launch configuration.
Option D is CORRECT because the notification is sent to the Auto Scaling group, which removes one of the running instances.

More information on Auto Scaling, Scheduled Actions:

Auto Scaling helps you maintain application availability and allows you to scale your Amazon EC2 capacity up or down automatically according to conditions you define. You can use Auto Scaling to help ensure that you are running your desired number of Amazon EC2 instances. Auto Scaling can also automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance and decrease capacity during lulls to reduce costs.

For more information on AutoScaling, please visit the links: https://aws.amazon.com/autoscaling/, https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html
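A minimal boto3 sketch of option D, with a placeholder Auto Scaling group name: a simple scaling policy that removes one instance, plus a CloudWatch alarm on average CPU below 10% that triggers it.

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

ASG_NAME = "web-app-asg"  # placeholder Auto Scaling group name

# Simple scaling policy: remove one instance when triggered.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="scale-in-on-low-cpu",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=-1,
)

# Alarm: average CPU of the group below 10% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="asg-cpu-below-10",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=10.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```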

 

Domain : Design for New Solutions

Q33 : You are working on a proof-of-concept serverless project and presenting it to the stakeholders in a week. The project needs an API Gateway, a Lambda function, and a DynamoDB table to store user data. To save time, you plan to use the AWS Serverless Application Model (AWS SAM), as it provides templates to deploy all required resources quickly. You have found that SAM templates are very similar to CloudFormation templates. However, which resource types are specifically introduced by the SAM template?

A. AWS::DynamoDB::Table
B. AWS::Serverless::Api
C. AWS::Lambda::Api
D. AWS::Serverless::Function
E. AWS::ApiGateway::RestApi

Correct Answers: B and D

Explanation

AWS SAM is an extension of AWS CloudFormation. A SAM serverless application is defined in a CloudFormation template and deployed as a CloudFormation stack. The AWS SAM template can be regarded as a CloudFormation template. However, it has its own special resources. Check https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessapplication for details.

Note: to include objects defined by AWS SAM, the template must include a Transform section in the document with a value of AWS::Serverless-2016-10-31.

Option A is incorrect: Because AWS::DynamoDB::Table is not SAM special and belongs to the CloudFormation resource type.
Option B is CORRECT: Because AWS::Serverless::Api is designed for API gateway resource in the SAM framework.
Option C is incorrect: Similar to Option A.
Option D is CORRECT: Because AWS::Serverless::Function is the SAM resource that creates a Lambda function, IAM execution role, and event source mappings.
Option E is incorrect: Because AWS::ApiGateway::RestApi also belongs to the CloudFormation resource type. 

Refer to: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html

 

Domain : Cost Control

Q34 : The company you are working for has an on-premises blog web application built on VMware vSphere virtual machines. As an AWS solutions architect, you need to evaluate the proper methods to migrate the application to AWS. After the initial analysis, you have suggested using AWS Server Migration Service (SMS) for the migration. During this migration process, which AWS services will NOT bring extra costs?

A. The Server Migration Connector downloaded from AWS
B. Amazon EBS snapshots generated during the migration
C. Amazon S3 which is used to store the uploaded VMDK
D. The replication job that is created in Server Migration Service
E. The EC2 instances that run based on the new AMI

Correct Answers: A and D

Explanation

AWS SMS is free to use for server migration, which means the service itself carries no additional fee. However, the storage resources used during the migration process, such as Amazon EBS snapshots and Amazon S3, generate standard charges.

Refer to https://docs.aws.amazon.com/server-migration-service/latest/userguide/server-migration.html for the introductions to AWS SMS.

Option A is CORRECT: Because the Server Migration Connector is a pre-configured FreeBSD virtual machine available for deployment in the VMware environment.
Option B is incorrect: Because during the migration, there is a step to convert the VMDK to an Amazon Elastic Block Store (Amazon EBS) snapshot, which generates cost.
Option C is incorrect: Because the S3 usage is charged when VMDK files are uploaded to Amazon S3.
Option D is CORRECT: Because the replication job or task itself is free to use.
Option E is incorrect: Because EC2 instances are charged at a standard rate.

 

Domain : Continuous Improvement for Existing Solutions

Q35 : You have deployed a Windows Server instance (x86_64) in AWS EC2. After the instance had run for a week, you realized that you needed to run a PowerShell script on it. You logged in to the AWS EC2 console and started using Systems Manager to run a command, choosing the command “AWS-RunPowerShellScript”. However, your instance cannot be selected as the target. How should you troubleshoot the issue so that the command can run successfully on the current Windows instance?

A. Change the command “AWS-RunPowerShellScript” to “AWS-RunShellScript”
B. Systems Manager Run Command only works for Linux instances so that the Windows instance is unavailable
C. Check if the latest version of SSM (AWS Systems Manager) Agent is installed on the Windows instance
D. Check if the Windows instance has the latest system patches installed
E. Verify that the instance has configured with the IAM role that enables it to communicate with the Systems Manager API

Correct Answers: C and E

Explanation

AWS Systems Manager Run Command is a service to run commands on Windows or Linux instances.

However, if the instance cannot be seen as the target for the command to run, some items must be checked according to https://docs.aws.amazon.com/systems-manager/latest/userguide/troubleshooting-remote-commands.html?icmpid=docs_ec2_console.

Option A is incorrect: Because “AWS-RunPowerShellScript” is correct, while “AWS-RunShellScript” is for Linux instances.
Option B is incorrect: Because the Windows instance is supported. Check https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-prereqs.html for the supported Operating Systems.
Option C is CORRECT: Because only Amazon EC2 Windows Amazon Machine Images (AMIs) and certain Linux AMIs are pre-configured with the SSM Agent. You need to check if the SSM Agent is properly installed.
Option D is incorrect: Because system patches do not impact the SSM connection.
Option E is CORRECT: Because a proper IAM instance role is required; otherwise, EC2 cannot communicate with SSM API.
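The two checks from options C and E can be scripted roughly as below with boto3; the instance ID is a placeholder. If the instance does not appear in describe_instance_information, either the SSM Agent is not running or the instance profile lacks the Systems Manager permissions.

```python
import boto3

ssm = boto3.client("ssm")
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder instance ID

# An instance only shows up here if the SSM Agent is running and it can reach the SSM API.
info = ssm.describe_instance_information(
    Filters=[{"Key": "InstanceIds", "Values": [INSTANCE_ID]}]
)
print(info["InstanceInformationList"])

# Once the instance is registered, the PowerShell document can be sent to it.
ssm.send_command(
    InstanceIds=[INSTANCE_ID],
    DocumentName="AWS-RunPowerShellScript",
    Parameters={"commands": ["Get-Service AmazonSSMAgent"]},
)
```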

Preparing for the AWS Certified Developer Associate exam? Try these AWS Developer Associate exam questions for FREE and check your preparation level.

Updated Questions on Dec 2022

Domain: Continuous Improvement for Existing Solutions

Q36. An IT firm is using AWS Organizations for managing multiple accounts in the firm. During an annual security audit, it was found that many users have excess permissions that are not required. The security head has assigned you to implement least-privilege access for all users in the AWS Organization. While removing these permissions, permissions for resources that users have accessed in the last 180 days should not be removed.

Which report can be viewed to get resources accessed by users before fine-tuning permissions in the AWS Organizations?    

A. Use AWS Organizations management account credentials to log in to the AWS IAM console and fetch the last accessed information to get a list of resources accessed by the users.

B. Use Organization aggregator with AWS Config for getting a list of resources accessed by users in the AWS Organizations.

C. Use Organizational view for the AWS Trusted advisor to get a list of resource access from all users in the AWS Organizations.

D. Use AWS Control Tower along with AWS Organizations to get a list of resource access from all users in the AWS Organizations.

Correct Answer: A

Option A is correct

Explanation:  Last accessed information can be used to fine-tune the permissions of all users and accounts within an AWS Organization. It helps to check whether users have been granted permissions that they never use, or that are no longer required because the user has moved to another domain. Using last accessed information, unused permissions can be removed, adhering to the best practice of least privilege.

Option B is incorrect as AWS Config can be used to assess and record configuration changes done by users on AWS resources. It does not help in providing a list of resources that the user has accessed. An Organization aggregator will collect AWS Config configurations and compliance data from all the accounts within an AWS Organization.

Option C is incorrect as AWS Trusted Advisor can be used to evaluate AWS resources and recommend best practices with respect to cost optimization, performance, security, fault tolerance, and service quotas. It’s not suitable to get a list of resources accessed by users in the AWS cloud.

Option D is incorrect as AWS Control Tower can help to ensure security and cross-account permissions for all accounts in an AWS Organizations are correctly applied and there is no drift in the permissions. It is not suitable to get a list of resources accessed by users in AWS Organizations.

For more information on last accessed information with AWS Organizations, refer to the following URLs,

https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_access-advisor-example-scenarios.html

https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_access-advisor-view-data-orgs.html
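A rough boto3 sketch of pulling last accessed data for a single principal (the role ARN is a placeholder); for an organization-wide view, IAM also exposes an equivalent organizations access report from the management account.

```python
import time
import boto3

iam = boto3.client("iam")

# Placeholder principal ARN for illustration.
PRINCIPAL_ARN = "arn:aws:iam::123456789012:role/DeveloperRole"

job = iam.generate_service_last_accessed_details(Arn=PRINCIPAL_ARN)

# The report is generated asynchronously; poll until it is ready.
while True:
    report = iam.get_service_last_accessed_details(JobId=job["JobId"])
    if report["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

for service in report["ServicesLastAccessed"]:
    print(service["ServiceName"], service.get("LastAuthenticated"))
```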

 

Domain: Continuous Improvement for Existing Solutions

Q37. An IT firm has hybrid connectivity between its on-premises location and AWS using AWS Direct Connect links. On-premises users need to download project files stored in an Amazon S3 bucket. For providing access, the IT firm does not want to create IAM users but is looking to use the existing credentials created in a SAML-compatible identity provider (IdP) deployed at the on-premises location.

Which sequence of events takes place when users access the Amazon S3 bucket?

A

  1. Users make a request to IdP and get authenticated.
  2. IdP sends a SAML assertion to STS using AssumeRolewithSAML API on behalf of the users.
  3. STS returns temporary security credentials to IdP.
  4. IdP forwards this security credential to users allowing access to the Amazon S3 bucket.

B

  1. Users make a request to IdP and get authenticated.
  2. IdP sends a SAML assertion to users.
  3. Users forward this SAML assertion to STS using AssumeRolewithSAML API.
  4. STS returns a temporary security credential that allows users to access the Amazon S3  bucket.

C

  1. Users make a request to IdP and get authenticated.
  2. IdP sends a SAML assertion to users.
  3. Users forward this SAML assertion to STS using AssumeRoleWithWebIdentity API.
  4. STS returns temporary security credentials that allow users to access the Amazon S3    bucket.

D

  1. Users make a request to IdP and get authenticated.
  2. IdP sends a SAML assertion to STS using AssumeRoleWithWebIdentity API on behalf of the users.
  3. STS returns temporary security credentials to IdP.
  4. IDP forwards this security credential to users allowing access to the Amazon S3 bucket.

Correct Answer: B

Option A is incorrect as LDAP (Lightweight Directory Access Protocol)-based identity providers do not directly send requests to STS; the IdP returns the SAML assertion to the user, who then calls STS.

Option B is correct

Explanation:  For providing access to AWS resources, identity providers can be used instead of creating IAM users. A trust relationship needs to be created between the AWS account and the identity provider (IdP). IAM supports IdPs that are compatible with OpenID Connect (OIDC) or SAML 2.0 (Security Assertion Markup Language 2.0).

If a company has an existing IdP, it can be used by users to get access to AWS resources instead of creating new credentials with IAM. For accessing the Amazon S3 bucket from the on-premises location, the following sequence of steps needs to be followed:

  1. On-premises users get authenticated with the local identity provider. This IdP needs to support SAML 2.0, which is compatible with AWS.
  2. The IdP sends a SAML assertion to the users.
  3. Users forward this SAML assertion to AWS STS, which returns temporary credentials to the users for accessing the Amazon S3 bucket.
  4. Using these credentials, users access the Amazon S3 bucket and can download the project files stored in it.

Option C is incorrect as the client needs to send the SAML assertion to STS using the AssumeRoleWithSAML API, not the AssumeRoleWithWebIdentity API.

Option D is incorrect as LDAP-based identity providers do not directly send requests to STS. Also, the AssumeRoleWithSAML API should be used by the client when sending the SAML assertion to STS.

For more information on using SAML identity providers for providing access to users, refer to the following URL,

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html
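Step 3 of option B corresponds to a call like the following boto3 sketch; the ARNs, bucket, and key names are placeholders, and the base64-encoded SAML assertion comes from the IdP’s response in step 2. STS returns temporary credentials that can then be used to download the files.

```python
import boto3

sts = boto3.client("sts")

# Placeholder ARNs; saml_assertion is the base64-encoded assertion returned by the IdP.
saml_assertion = "PHNhbWw6QXNzZXJ0aW9uIC4uLg=="  # placeholder value
credentials = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::123456789012:role/ProjectFilesAccess",
    PrincipalArn="arn:aws:iam::123456789012:saml-provider/OnPremIdP",
    SAMLAssertion=saml_assertion,
)["Credentials"]

# Use the temporary credentials to access the S3 bucket.
s3 = boto3.client(
    "s3",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)
s3.download_file("project-files-bucket", "plan.pdf", "/tmp/plan.pdf")
```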

 

Domain: Design Solutions for Organizational Complexity

Q38. A company has applied a Service Control Policy (SCP) to an AWS Organization to deny the launch of any instance type other than t2.micro. The organization comprises a single OU with two accounts: Production and Development. The Development account needs to launch the c5.xlarge instance type for a month to perform testing of a new application. After the testing phase, the Development account should only launch the t2.micro instance type. The Production account should launch only the t2.micro instance type at all times.

Which combination of changes can be done in SCP to meet this requirement?

A. Create a new OU named Development. Move the Development account to this OU. Create a new SCP allowing access to launch the c5.xlarge instance type. Make no changes to the existing SCP attached to the root.

B. Create a new OU named Development. Move the Development account to this OU. Create a new SCP allowing the launch of the c5.xlarge instance type and attach it to the new OU. Detach the existing SCP from the root and attach it to the OU that has the Production account.

C. Detach existing SCP from the root of the Organization. Attach a new SCP which will allow the launch of the c5.xlarge instance type. Post testing phase, revert these SCP.

D. Attach a new SCP to the Development account which will allow the launch of the c5.xlarge instance type. Make no changes to the existing SCP attached to the root of the Organization.

Correct Answer: B

Option A is incorrect because, since the SCP at the root level denies the launch of the c5.xlarge instance type, attaching an SCP at the OU level to allow the launch of the c5.xlarge instance type won’t grant the permission.

Option B is correct

Explanation:  Service Control Policies are a type of AWS Organizations policy used to manage permissions within the organization. They are applied at the root of the organization, at the OU (Organizational Unit) level, or at the account level.

An SCP applied at the root of the organization affects permissions for all OUs and accounts within the organization, while an SCP applied at the OU level affects permissions for all accounts within that OU.

If a permission is denied at a higher level, an explicit “allow” at the OU or account level will not grant that permission. In the above case, there is an SCP at the root level which denies permission to launch instance types other than t2.micro. This permission is inherited by the OU and both accounts, Production and Development, in the OU. To allow only the Development account to launch the c5.xlarge instance type, the following can be performed:

  1. Move the Development account to a new OU. Apply a new SCP to this OU to allow the launch of the c5.xlarge instance type.
  2. Remove the SCP that denies launching instance types other than t2.micro from the root level and apply it to the OU that has the Production account.

Option C is incorrect as allowing the launch of the c5.xlarge instance type in the SCP at the root level will allow it for both the Production and Development accounts.

Option D is incorrect as the SCP at the root level denies the launch of the c5.xlarge instance type, so attaching an SCP at the account level to allow the launch of the c5.xlarge instance type won’t grant the permission.

For more information on attaching SCPs in AWS Organizations, refer to the following URLs,

https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html

https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_attach.html
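For reference, a hedged boto3 sketch of the deny-everything-but-t2.micro SCP described in the scenario, created and attached to a placeholder OU ID (after the move in option B, this would be the OU containing only the Production account). The allow-c5.xlarge SCP for the new Development OU would be built the same way.

```python
import json
import boto3

org = boto3.client("organizations")

# Deny launching any EC2 instance type other than t2.micro.
deny_non_t2_micro = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringNotEquals": {"ec2:InstanceType": "t2.micro"}},
        }
    ],
}

policy = org.create_policy(
    Name="deny-non-t2-micro",
    Description="Only t2.micro launches are allowed",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(deny_non_t2_micro),
)

# Placeholder OU ID for illustration.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",
)
```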

 

Domain: Design for New Solutions

Q39. A start-up firm has deployed thousands of sensors across the globe to capture environmental changes. These sensors send continuous data records of less than 4 KB each, which need to be analyzed in real time, and a summary of the environmental data needs to be stored for future requirements. The firm is looking for a cost-effective managed solution to implement this setup in the AWS Cloud. The solution should be highly scalable to handle any future growth.

What solution can be designed to meet this requirement? 

A. Capture the streaming data using Amazon Kinesis Data Streams. Use Kinesis Data Analytics for Apache Flink for the analysis of streaming data and store processed data in Amazon S3.

B. Capture the streaming data using Amazon Data Pipeline. Use Kinesis Data Analytics for Apache Flink for analysis of streaming data and store processed data in Amazon S3.

C. Capture the streaming data using Amazon Data Pipeline. Store the data in Amazon S3 and analyze the data using Amazon Athena. Store the analyzed data in different Amazon S3 buckets.

D. Capture the streaming data using Amazon Kinesis Data Streams and send it to the Amazon EC2 instance. Use KCL libraries on the Amazon EC2 instance to analyze the data.  Store the analyzed data in the Amazon S3 bucket.

Correct Answer: A

Option A is correct

Explanation:  Amazon Kinesis Data Analytics for Apache Flink is a fully managed service for performing analysis on streaming data. Amazon Kinesis Data Streams is a fully managed, scalable service for capturing, processing, and storing data streams. In the above case, streaming data from thousands of sensors can be captured using Amazon Kinesis Data Streams in real time. Amazon Kinesis Data Analytics can perform the analysis of this streaming data and save the results in an Amazon S3 bucket for future storage. Using Amazon Kinesis Data Streams for capturing data and Amazon Kinesis Data Analytics for Apache Flink for analyzing the data provides a cost-effective, scalable solution.

Option B is incorrect as, since there is a requirement for real-time data processing, Amazon Data Pipeline is not suitable. Amazon Data Pipeline is suitable for data transfer between AWS services or from on-premises sources at regular intervals.

Option C is incorrect for the same reason: Amazon Data Pipeline is not suitable for real-time data processing. Also, storing all the raw data from thousands of sensors in Amazon S3 will incur additional charges.

Option D is incorrect as this will require running separate software (the KCL) on an Amazon EC2 instance to perform the analysis of the streaming data, which will incur additional costs.

For more information on Amazon Kinesis Data Analytics, refer to the following URLs,

https://docs.aws.amazon.com/kinesisanalytics/latest/java/how-it-works.html

https://aws.amazon.com/kinesis/data-analytics/faqs/?nc=sn&loc=6
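On the producer side, each sensor (or a gateway in front of it) would push records into the stream with something like the boto3 sketch below; the stream name and payload fields are placeholders. The Flink application in Kinesis Data Analytics then reads from the same stream.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Placeholder stream name and sensor payload.
record = {"sensor_id": "sensor-042", "temperature_c": 21.7, "humidity_pct": 63}

kinesis.put_record(
    StreamName="environment-telemetry",
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["sensor_id"],  # spreads sensors across shards
)
```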

 

Domain: Design Solutions for Organizational Complexity

Q40. A manufacturing company has deployed AWS Managed Microsoft AD for managing a large user base in the AWS Cloud. The security team wants to capture AD domain controller event logs, such as failed login attempts or changes made to user groups. The captured logs should be analyzed in real time, and a dashboard should be created depicting trends and insights for each event.

How can a solution be designed for this requirement in the most efficient manner?

A. Stream event logs from AD controllers to Amazon Kinesis Data Analytics using Amazon CloudWatch Logs and Amazon Kinesis Data Streams. Store the events analyzed in Amazon Kinesis Data Analytics in an Amazon S3 bucket using Amazon Kinesis Firehose. Use Amazon QuickSight to create a dashboard for trends based on the data stored in Amazon S3.

B. Stream event logs from AD controllers to Amazon OpenSearch Service using Amazon CloudWatch Logs and AWS Lambda. Analyze events in Amazon OpenSearch Service and create visualization in the Amazon OpenSearch dashboard.

C. Stream event logs from AD controllers to Amazon OpenSearch Service using Amazon CloudWatch Logs and Amazon Kinesis Firehose. Analyze events in Amazon OpenSearch Service and create visualization in the Amazon OpenSearch dashboard.

D. Stream event logs from AD controllers to Amazon Kinesis Data Analytics using Amazon CloudWatch Logs and Amazon Kinesis Firehose. Analyze the events in Amazon Kinesis Data Analytics and store the results in Amazon Redshift using Amazon Kinesis Firehose. Use Amazon QuickSight to create a dashboard for trends based on the data stored in Amazon Redshift.

Correct Answer: B

Explanation:  Amazon OpenSearch Service is a managed service for search and analytics engine use cases, supporting OpenSearch and legacy Elasticsearch. It can also be used to create visualizations based on the analyzed data and build a dashboard with that data. For the above case, the event logs of AWS Managed Microsoft AD can be captured using Amazon CloudWatch Logs. AWS Lambda can be used to push these logs in real time to Amazon OpenSearch Service, where the logs will be analyzed and a dashboard will be created based on the log data.

Option A is incorrect as this will require additional services to be deployed such as Amazon Kinesis Data Streams, Amazon Kinesis Data Analytics, Amazon Kinesis Firehose, and Amazon QuickSight to set up the dashboard. This will incur additional costs and admin work.

Option C is incorrect as with Amazon Kinesis Firehose, data would be sent to Amazon OpenSearch service in near-real time and not in real-time. In real-time, data is sent to Amazon OpenSearch instantaneously while in near-real time, there will be some delay in sending data to Amazon OpenSearch.

Option D is incorrect as with Amazon Kinesis Firehose, data would be sent to Amazon Kinesis Data Analytics in near-real time and not in real-time.

For more information on the analysis of events logs using Amazon OpenSearch service, refer to the following URL,

https://aws.amazon.com/blogs/modernizing-with-aws/analyze-active-directory-event-logs-using-amazon-opensearch/
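The glue between CloudWatch Logs and the Lambda function in option B is a subscription filter. A minimal boto3 sketch is shown below; the log group name, function ARN, and statement ID are placeholders, and the Lambda function itself would decode the gzipped log events and index them into OpenSearch.

```python
import boto3

logs = boto3.client("logs")
lambda_client = boto3.client("lambda")

# Placeholder names and ARNs for illustration.
LOG_GROUP = "/aws/directoryservice/d-1234567890"
LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:adLogsToOpenSearch"

# Allow CloudWatch Logs to invoke the function for this log group.
lambda_client.add_permission(
    FunctionName=LAMBDA_ARN,
    StatementId="allow-cwlogs-ad-events",
    Action="lambda:InvokeFunction",
    Principal="logs.amazonaws.com",
    SourceArn=f"arn:aws:logs:us-east-1:123456789012:log-group:{LOG_GROUP}:*",
)

# Stream every log event (empty filter pattern) to the Lambda function in real time.
logs.put_subscription_filter(
    logGroupName=LOG_GROUP,
    filterName="ad-events-to-opensearch",
    filterPattern="",
    destinationArn=LAMBDA_ARN,
)
```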

Summary

So, you’ve gone through these free AWS Solutions Architect exam questions for the Professional-level exam. This set of AWS CSAP exam questions will definitely prove to be an important resource and help you feel confident about taking the AWS Solutions Architect Professional exam.

The Whizlabs team is working dedicatedly to help you cover the AWS Certified Solutions Architect exam syllabus, and thus we offer 15 more free AWS Certified Solutions Architect Professional exam questions and an AWS Certified Solutions Architect Professional practice exam with 400 unique practice questions that will get you fully prepared for the real exam. You can also learn 100% practically by accessing our 94+ videos and 74+ AWS hands-on labs.

Practice now and gain enough confidence to pass the AWS CSAP exam! Seeking any help with your AWS CSAP exam? Submit your query/concern in the Whizlabs Forum to get it answered by industry experts.

About Pavan Gumaste

Pavan Rao is a programmer/developer by profession and a cloud computing professional by choice, with in-depth knowledge of AWS, Azure, and Google Cloud Platform. He helps organisations figure out what to build, ensures successful delivery, and incorporates user learning to improve the strategy and product further.

