
Free AWS Solutions Architect Professional Exam Questions (Released February 2019)

So, how’s your preparation going for the AWS Certified Solutions Architect Professional Exam (AWS CSAP)? To help you with your preparation, we bring you FREE AWS Solutions Architect Professional exam questions so you can get familiar with the exam pattern and get ready to pass the exam.

Note that in our series of free exam questions, we’ve previously covered Free AWS Solutions Architect Associate Exam Questions and Free AWS Developer Associate Exam Questions.

The AWS Solutions Architect Professional exam was recently updated to the February 2019 version, so we’d recommend you go through the AWS Solutions Architect Professional exam preparation guide to stay on the right track for your exam preparation.

The AWS Solutions Architect Professional exam is intended for those performing the role of a solutions architect at the professional level. The AWS CSAP exam recognizes and validates a candidate’s advanced technical knowledge and expertise in designing distributed systems and applications on the AWS platform.

The AWS CSAP exam validates the candidate’s knowledge and skills in:

  • Designing and deploying scalable, highly available, reliable, and robust applications on the AWS platform
  • Selecting suitable services for designing and deploying applications as per requirements
  • Migrating complex, multi-tier applications to the AWS platform
  • Implementing solutions for cost control

So, the AWS Certified Solutions Architect Professional certification is a credential that demonstrates your skills in designing and deploying AWS systems and applications.

Practice with Free AWS Solutions Architect Professional Exam Questions

While preparing for the AWS CSAP exam, it is recommended to go through various resources including AWS whitepapers, documentation, books, and online training. But nothing beats practicing with questions in the same format as those of the real exam. For this, we’ve prepared this blog, where you will get 10 free AWS Solutions Architect Professional exam questions. This will help you understand the pattern of the AWS CSAP exam.

These practice questions have been prepared by our team of certified professionals and subject matter experts. These free AWS Solutions Architect Professional exam questions come with detailed explanations for the correct as well as incorrect options, so they will clear your doubts about why a particular option is correct or incorrect. So go through these AWS CSAP exam questions and get ready for the real exam.

1. Two departments, A and B, have been added to a consolidated billing organization. Department A has 5 reserved RDS DB instances with MySQL as the DB engine. During a particular hour, department A used three DB instances and department B used two DB instances, for a total of five DB instances on the consolidated bill. How should the RDS instances in department B be configured so that all five instances are charged as Reserved DB Instances?

A. Department B should launch DB instances in the same availability zone as a Reserved Instance in department A.

B. The DB engine in Department B should be MySQL.

C. The DB Instance Class should be the same in both departments such as m1.large.

D. The deployment type such as Multi-AZ should be the same in both department A and department B.

E. All of the above are needed.

Correct Answer: E

Explanation:

In order to receive the cost benefit from Reserved DB Instances, all the attributes of the DB instances in the other account (DB instance class, DB engine, license model, and deployment type) have to match the attributes of the Reserved DB Instances.

Options A–D are incorrect: Refer to the reason in Option E.

Option E is CORRECT: Because all of the other options are needed. The reference is at https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidatedbilling-other.html.
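
As a side note, you can verify that the attributes of the running DB instances match those of the reservations. A hedged AWS CLI sketch (assuming credentials for the organization’s accounts are configured):

# Hedged sketch: list the reservation attributes that running DB instances must match
aws rds describe-reserved-db-instances \
    --query 'ReservedDBInstances[*].[DBInstanceClass,ProductDescription,MultiAZ]' \
    --output table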

2. As an AWS specialist, you are in charge of configuring consolidated billing in a multinational IT company. In the linked accounts, users have set up AWS resources with a tag called Department, which is used to differentiate resources. There are some other user-created tags such as Phase, CICD, Trial, etc. In the cost allocation report, you only want to filter using the Department tag; the other tags should be excluded from the report. How should you implement this so that the cost report is properly set up?

A. In the Cost Allocation Tags console of the master account, select the Department tag in the User-Defined Cost Allocation Tags area and activate it. The tag starts appearing on the cost allocation report after it is applied but does not appear on earlier reports.

B. In the Cost Explorer console of the master account, deactivate all the other tags except the Department tag in the User-Defined Cost Allocation Tags area. By default, all user-defined tags are activated.

C. In the Cost Explorer console of the master account, select the Department tag in the User-Defined Cost Allocation Tags area and activate it. Make sure that the other tags are inactive at the same time.

D. In the Cost Allocation Tags console of the master account and the linked accounts, select the Department tag in the User-Defined Cost Allocation Tags area and activate it. The tag starts appearing on the cost allocation report after it is applied and also appears on earlier reports after 1 hour.

Correct Answer: A

Explanation:

User-Defined Cost Allocation Tags can be selected and activated in the Cost Allocation Tags console.


Option A is CORRECT: Because using this method, only the user-defined tag Department will appear in the cost allocation report.

Option B is incorrect: Because it should be the Cost Allocation Tags console rather than the Cost Explorer console. Moreover, by default, all user-defined tags are deactivated.

Option C is incorrect: Similar to Option B.

Option D is incorrect: Because only the master account can activate or deactivate user-defined tags. Besides, the tag does not appear on reports generated before it was activated.
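
As an aside, once the Department tag has been activated, it can be used to group costs. A hedged AWS CLI sketch (the dates are illustrative):

# Hedged sketch: group monthly unblended costs by the activated Department tag
aws ce get-cost-and-usage \
    --time-period Start=2019-02-01,End=2019-03-01 \
    --granularity MONTHLY \
    --metrics UnblendedCost \
    --group-by Type=TAG,Key=Department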

Preparing for an AWS Architect Interview? Check out these top AWS Solutions Architect Interview Questions and get yourself ready to crack the interview.

3. You are an AWS solutions architect in charge of maintaining an RDS on VMware database that is deployed on-premises. You have created a read replica in the ap-south-1 region to take some of the read traffic. The system has run smoothly for a while; then the company decides to migrate all of its products to AWS, including the on-premises RDS instance. In addition, the instance needs another replica in the ap-southeast-1 region. What actions should you take to fulfill this requirement?

A. Use Data Migration Service to migrate the on-premises database to an RDS instance in AWS. Create a read replica in the ap-southeast-1 region afterwards.

B. In the RDS console, click “migrating the instance” to create a new RDS instance. Then create a new read replica in the ap-southeast-1 region.

C. Create another read replica in the ap-southeast-1 region to share the read traffic for the RDS instance on VMware. Promote the RDS read replica in ap-south-1 to be the new RDS instance so that the original on-premises database is migrated to AWS with a replica in ap-southeast-1.

D. Promote the RDS read replica in ap-south-1 to be the new RDS instance. Create another read replica in ap-southeast-1 for this new instance.

Correct Answer: D

Explanation:

Amazon RDS on VMware database instances can easily be migrated to Amazon RDS database instances in AWS with no impact on uptime, giving you the ability to rapidly deploy databases in all AWS regions without interrupting your customer experience.


Option A is incorrect: Because Data Migration Service is not needed. You just need to promote the read replica to be the new RDS instance.

Option B is incorrect: Same reason as Option A. Also, there is no “migrating the instance” option in the RDS console.

Option C is incorrect: Because a read replica created in ap-southeast-1 at this point would still be syncing with the original on-premises RDS instance. A new read replica should instead be created from the promoted instance in ap-south-1.

Option D is CORRECT: Because the database can be easily migrated by promoting the read replica in ap-south-1.
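
For illustration, the promotion and the new cross-region replica could look like the following hedged AWS CLI sketch (the instance identifiers and account ID are hypothetical):

# Hedged sketch: promote the existing read replica in ap-south-1
aws rds promote-read-replica \
    --db-instance-identifier mydb-replica \
    --region ap-south-1

# Once the promotion completes, create a cross-region read replica in ap-southeast-1
aws rds create-db-instance-read-replica \
    --db-instance-identifier mydb-replica-sg \
    --source-db-instance-identifier arn:aws:rds:ap-south-1:123456789012:db:mydb-replica \
    --region ap-southeast-1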

4. There are two departments in a company, and both departments own several EC2 instances. Department A has a requirement to back up EBS volumes every 12 hours, and the administrator has set up a data lifecycle policy in DLM for their instances. Department B requires a similar data lifecycle policy for their instances as well; however, they prefer the schedule to run every 24 hours. The administrator has noticed that 2 EBS volumes are owned by both departments at the same time. How can the administrator set up the data lifecycle policy for Department B?

A. Add a tag to the EBS volumes that Department B owns. Set up a data lifecycle policy based on the tag. For the EBS volumes owned by both departments, snapshots will be taken every 12 hours and every 24 hours.

B. Add a tag to the EBS volumes that Department B owns. Set up a data lifecycle policy based on the tag. For the EBS volumes owned by both departments, snapshots will not be taken, as there is a schedule conflict between the two policies. However, other EBS volumes are not affected.

C. Add a tag to the EBS volumes that Department B owns. Set up a data lifecycle policy based on the tag. For the EBS volumes owned by both departments, snapshots will be taken every 12 hours, as the 12-hour schedule takes priority.

D. Add a tag to the EBS volumes that Department B owns, except the EBS volumes owned by both departments. Set up a data lifecycle policy based on this tag. For the EBS volumes owned by both departments, snapshots are taken every 12 hours due to Department A’s policy.

Correct Answer: A

Explanation:

Multiple policies can be created to take snapshots of an EBS volume, as long as each policy targets a unique tag on the volume. In this case, the EBS volumes owned by both departments should have two tags, where tag A is the target for policy A to create a snapshot every 12 hours for Department A, and tag B is the target for policy B to create a snapshot every 24 hours for Department B. Amazon DLM creates snapshots according to the schedules of both policies.

For details, refer to https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html.

Option A is CORRECT: Because when an EBS volume has two tags, multiple policies can run at the same time.

Option B is incorrect: Because there is no schedule conflict for this scenario.

Option C is incorrect: Because the 12-hour schedule does not take priority over the 24-hour one, and both schedules can run in parallel.

Option D is incorrect: Because the EBS volumes owned by both departments can be given another tag and be included in the policy for Department B.
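
For illustration, Department B’s policy could be created as in the following hedged AWS CLI sketch (the role ARN and target tag are assumptions; Department A’s policy would target its own tag with the interval set to 12):

# Hedged sketch: a DLM policy that snapshots volumes tagged Department=B every 24 hours
aws dlm create-lifecycle-policy \
    --description "Department B - EBS snapshots every 24 hours" \
    --state ENABLED \
    --execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
    --policy-details '{
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Department", "Value": "B"}],
        "Schedules": [{
            "Name": "Every24Hours",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetentionRule": {"Count": 7}
        }]
    }'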

Preparing for AWS Solutions Architect Associate exam? Go through these Free AWS Certified Solutions Architect Exam Questions and get ready for the real exam.

5. You work at an AWS consulting company. A customer plans to migrate all its products to AWS, and you are required to provide a detailed plan. The company has good experience with Chef and prefers to continue using it. They wish their EC2 instances to use a blue/green deployment method. Moreover, it would be best if their infrastructure setup, such as the network layer, could easily be re-run using scripts. Automatic scalability is also required for EC2. Which of the options below should you choose for the migration plan? (Choose 3.)

A. As blue/green deployment is not supported in OpsWorks, use the Elastic Beanstalk Swap URL feature to deploy the application. Swap the CNAMEs of the two environments to redirect traffic to the new version instantly.

B. Use Chef recipes in OpsWorks to add/deploy/edit the app on EC2 instances. The blue/green deployment in OpsWorks would require the Route 53 weighted routing feature.

C. In OpsWorks, set up a set of load-based EC2 instances, which AWS OpsWorks Stacks starts and stops to handle unpredictable traffic variations.

D. Create an auto scaling group with a suitable configuration based on CPU usage. Add the auto scaling group to the OpsWorks stack so that its EC2 instances can scale up and down automatically according to the CPU level.

E. Edit CloudFormation templates and create stacks for the infrastructure. Add a dedicated CloudFormation stack for the OpsWorks deployment and use the nested infrastructure stacks.

F. Create CloudFormation stacks for the infrastructure. For the OpsWorks configuration, use AWS CLI commands such as “aws opsworks create-app”.

Correct Answer: B, C, E

In this scenario, as Chef is needed, OpsWorks should be considered as the first choice unless there are requirements it cannot meet.

OpsWorks has a key feature to scale instances based on time or load.


In terms of infrastructure, CloudFormation stacks should be used. Besides, CloudFormation supports OpsWorks, which means an OpsWorks stack can be combined with other nested CloudFormation stacks. In this way, the whole deployment is implemented as code.

Nested stacks are stacks created as part of other stacks. A nested stack is created within another stack by using the “AWS::CloudFormation::Stack” resource.
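
As an illustration, here is a minimal hedged sketch of a parent template that nests an infrastructure stack (the template URL and stack name are hypothetical):

# Hedged sketch: a parent CloudFormation template referencing a nested stack
cat > parent-template.json <<'EOF'
{
  "Resources": {
    "NetworkStack": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "https://s3.amazonaws.com/my-templates/network.json"
      }
    }
  }
}
EOF
aws cloudformation create-stack \
    --stack-name parent-stack \
    --template-body file://parent-template.json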

Explanation:

Option A is incorrect: Because OpsWorks does support blue/green deployment; it requires the involvement of Route 53. Refer to https://d1.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf.

Option B is CORRECT: Because OpsWorks can meet the need for blue/green deployment and also uses Chef, which the customer prefers.

Option C is CORRECT: Because AWS OpsWorks supports scaling based on load, including:

  • CPU: The average CPU consumption, such as 80%
  • Memory: The average memory consumption, such as 60%
  • Load: The average computational work a system performs in one minute

Option D is incorrect: Because although this may work, it is not straightforward to add an auto scaling group to OpsWorks. Refer to https://aws.amazon.com/blogs/devops/auto-scaling-aws-opsworks-instances/ for how to do that. The native OpsWorks scaling feature in Option C should be chosen instead, as it already meets the customer’s needs.

Option E is CORRECT: Because nested stacks are suitable for infrastructure and OpsWorks to work together.

Option F is incorrect: Because running ad-hoc AWS CLI commands to configure OpsWorks is not an automated, repeatable method. An OpsWorks stack defined in CloudFormation should be used instead.
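
As noted in Option C, OpsWorks supports load-based scaling natively. A hedged AWS CLI sketch (the layer ID and thresholds are hypothetical):

# Hedged sketch: scale a layer up when average CPU exceeds 80% and down below 30%
aws opsworks set-load-based-auto-scaling \
    --layer-id 12345678-aaaa-bbbb-cccc-1234567890ab \
    --enable \
    --up-scaling InstanceCount=1,ThresholdsWaitTime=5,IgnoreMetricsTime=5,CpuThreshold=80.0 \
    --down-scaling InstanceCount=1,ThresholdsWaitTime=5,IgnoreMetricsTime=5,CpuThreshold=30.0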

6. API Gateway and Lambda non-proxy integrations have been chosen by a software engineer to implement an application. The application is a data analysis tool that returns statistical results when its HTTP endpoint is called. The Lambda function needs to communicate with back-end data services such as Keen.io; however, errors may happen, such as wrong data being requested or communication failures. The Lambda function is written in Java and may return two exceptions: BadRequestException and InternalErrorException. What should the software engineer do to map these two exceptions in API Gateway to proper HTTP return codes? For example, BadRequestException and InternalErrorException are mapped to HTTP return codes 400 and 500 respectively. (Select 2.)

A. Add the corresponding error codes (400 and 500) on the Integration Response in API Gateway.

B. Add the corresponding error codes (400 and 500) on the Method Response in API Gateway.

C. Put the mapping logic into the Lambda function itself so that when an exception happens, error codes are returned at the same time in a JSON body.

D. Add Integration Responses with regular expression patterns such as BadRequest or InternalError, and associate them with HTTP status codes.

E. Add Method Responses with regular expression patterns such as BadRequest or InternalError, and associate them with HTTP status codes 400 and 500.

Correct Answer: B, D

Explanation:

When an API Gateway method is set up, there are four parts: Method Request, Integration Request, Integration Response, and Method Response.

Method Request and Method Response are the API’s interface with the frontend (a client), whereas Integration Request and Integration Response are the API’s interface with the backend. In this case, the backend is a Lambda function.

For mapping the exceptions that come from Lambda, the Integration Response is the correct place to configure. However, the corresponding error code (400) on the Method Response must be created first; otherwise, API Gateway throws an invalid configuration error response at runtime.

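As an illustration, here is a hedged AWS CLI sketch (the REST API ID, resource ID, and method are hypothetical) that maps a Lambda error containing BadRequestException to HTTP return code 400:

# Hedged sketch: define the 400 response on the Method Response first...
aws apigateway put-method-response \
    --rest-api-id abc123 \
    --resource-id xyz456 \
    --http-method GET \
    --status-code 400

# ...then map matching Lambda errors to it on the Integration Response
aws apigateway put-integration-response \
    --rest-api-id abc123 \
    --resource-id xyz456 \
    --http-method GET \
    --status-code 400 \
    --selection-pattern ".*BadRequestException.*"

The same pair of calls with status code 500 and the pattern “.*InternalErrorException.*” would cover the second exception.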

Option A is incorrect: Because the HTTP error codes are defined first in the Method Response, not in the Integration Response.

Option B is CORRECT: Because the HTTP error codes (400 and 500) are defined first in the Method Response (same reason as A).

Option C is incorrect: Because the Integration Response in API Gateway should be used instead. Refer to https://docs.aws.amazon.com/apigateway/latest/developerguide/handle-errors-in-lambda-integration.html on how to handle Lambda errors in API Gateway.

Option D is CORRECT: Because BadRequest or InternalError should be mapped to 400 and 500 in Integration Response settings.

Option E is incorrect: Because the Method Response is the interface with the frontend; it does not deal with mapping the response from the Lambda backend.

Preparing for an AWS interview? Prepare yourself with these top 50 AWS interview questions and answers to ace the interview.

7. An IT company owns a web product in AWS that provides discount restaurant information to customers. It uses one S3 bucket (my_bucket) to store restaurant data such as pictures, menus, etc. The product is deployed in VPC subnets. The company’s cloud architect decides to configure a VPC endpoint for this S3 bucket to enhance performance. To comply with security rules, it is required that the new VPC endpoint is used only to communicate with this specific S3 bucket and, conversely, that the S3 bucket allows read/write operations only from this VPC endpoint. Which two options should the cloud architect choose to meet the security needs?

A. Use a VPC endpoint policy for Amazon S3 to restrict access to the S3 bucket “my_bucket” so that the VPC endpoint is only allowed to perform S3 actions on “my_bucket”.

B. Modify the security group of the EC2 instance to limit the outbound actions to the VPC endpoint if the outgoing traffic destination is the S3 bucket “my_bucket”.

C. In the S3 bucket “my_bucket”, add an S3 bucket policy in which all actions are denied if the source IP address is not equal to the EC2 public IP (use the “NotIpAddress” condition).

D. For the S3 bucket “my_bucket”, use an S3 bucket policy that denies all actions if the source VPC endpoint is not equal to the endpoint ID that was created.

E. Create an S3 bucket policy on the S3 bucket “my_bucket” that denies all actions unless the source IP address is equal to the EC2 public IP (use the “IpAddress” condition).

Correct Answer: A, D

In this case, two restrictions are required:

1. For the VPC endpoint, restricting access to the specific S3 bucket “my_bucket”. A VPC endpoint policy is needed:

{
  "Statement": [
    {
      "Sid": "Access-to-my-bucket-only",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::my_bucket",
                   "arn:aws:s3:::my_bucket/*"]
    }
  ]
}

2. For the S3 bucket “my_bucket”, restricting access to the new VPC endpoint. An S3 bucket policy is required:

{
  "Version": "2012-10-17",
  "Id": "Policy1415115909152",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPCE-only",
      "Principal": "*",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": ["arn:aws:s3:::my_bucket",
                   "arn:aws:s3:::my_bucket/*"],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "vpce-1a2b3c4d"
        }
      }
    }
  ]
}

In the S3 bucket policy for the VPC endpoint, the aws:SourceIp condition cannot be used: for traffic arriving through a VPC endpoint, neither the NotIpAddress nor the IpAddress condition matches any specified IP address or IP address range. Instead, the specific endpoint ID should be used in the S3 bucket policy.

Explanation:

Option A is CORRECT: Because VPC Endpoint policy helps on restricting which entity is able to use the VPC Endpoint. It is an IAM resource policy that you attach to an endpoint when you create or modify the endpoint.

Option B is incorrect: Because a security group cannot limit the actions performed through a VPC endpoint.

Option C is incorrect: Because in an S3 bucket policy, the NotIpAddress condition is always met for traffic from a VPC endpoint, so it cannot help restrict that traffic.

Option D is CORRECT: Because in S3 bucket policy, a rule can be set up to deny all actions if the incoming traffic is not from the VPC Endpoint ID.

Option E is incorrect: Same reason as option C.

8. An IoT company has a new product, a camera device. The device has several sensors installed and can record video as required. The device includes the AWS Kinesis Video Streams SDK in its software and is able to transmit recorded video in real time to AWS Kinesis. End users can then use a desktop or web client to view, download, or share the video stream. The client app should be simple and use a third-party player such as Google Shaka Player to display the video stream from Kinesis. How should the client app be designed?

A. The client can use HTTP Live Streaming (HLS) for live playback. Use GetMedia API to process and play Kinesis video streams.

B. The client can use HLS for live playback. Use GetHLSStreamingSessionURL API to retrieve the HLS streaming session URL then provide the URL to the video player.

C. The client can use Adobe HTTP Dynamic Streaming (HDS) for live playback. Use GetHDSStreamingSessionURL API to retrieve the HDS streaming session URL then provide the URL to the video player.

D. The client can use Microsoft Smooth Streaming (MSS) for live playback. Use GetMSSStreaming API to retrieve the MSS streaming to the video player.

Correct Answer: B

Explanation:

The most straightforward way to view or play back live video from Kinesis Video Streams is HLS. HTTP Live Streaming (HLS) is an industry-standard HTTP-based media streaming communications protocol.

Option A is incorrect: Because although the GetMedia API may work, it is not as simple as HLS: you would have to build your own player on top of GetMedia, whereas in this case a third-party player is required. The reference is at https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/how-hls.html#how-hls-ex1-session.

Option B is CORRECT: Because the GetHLSStreamingSessionURL API is required for a third-party player to play the HLS streams.

Option C is incorrect: Because HTTP Live Streaming (HLS) should be used to playback the Kinesis Video Streams.

Option D is incorrect: Same reason as Option C.
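
For illustration, retrieving the HLS streaming session URL could look like the following hedged AWS CLI sketch (the stream name is hypothetical):

# Hedged sketch: get the HLS data endpoint for the stream...
ENDPOINT=$(aws kinesisvideo get-data-endpoint \
    --stream-name my-camera-stream \
    --api-name GET_HLS_STREAMING_SESSION_URL \
    --query DataEndpoint --output text)

# ...then request a streaming session URL to hand to the third-party player
aws kinesis-video-archived-media get-hls-streaming-session-url \
    --endpoint-url "$ENDPOINT" \
    --stream-name my-camera-stream \
    --playback-mode LIVE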

Selection of good books is important while preparing for the AWS Solutions Architect Associate exam. Check out the list of the Best Books for AWS Certified Solutions Architect Exam now!

9. You have been hired as an AWS solutions architect at a startup company. You notice that there are some issues with the backup strategy for EC2 instances and that there is no snapshot lifecycle management at all: users just create snapshots manually, without a routine policy to control them. You want to suggest using a proper EBS snapshot lifecycle policy. How would you persuade your team lead to approve this suggestion? (Select TWO)

A. A snapshot lifecycle policy helps to retain backups as required by auditors or internal compliance.

B. An EBS snapshot lifecycle policy helps to protect valuable data by enforcing a regular backup schedule.

C. A proper snapshot lifecycle policy is able to reduce storage costs as the snapshots taken by the scheduling policy are free.

D. The user can design their own schedule to backup snapshots according to different requirements, such as every 1 hour, 12 hours, 24 hours, 1 week, etc.

Correct Answer: A, B

Explanation:

An EBS snapshot lifecycle policy, as a backup strategy, can bring many benefits to EC2 users. For details, please refer to https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html.

Option A is CORRECT: Because EC2 EBS volumes get a routine backup, which helps with quality audits.

Option B is CORRECT: Because this is the major benefit of a lifecycle policy: it helps preserve important data, and EBS volumes can easily be restored from the snapshots.

Option C is incorrect: A snapshot lifecycle policy can reduce storage costs by deleting outdated backups. However, the snapshots themselves still incur costs.

Option D is incorrect: Because the snapshots can only be created every 12 hours or 24 hours.

10. An IT company has a big data analytics application that is deployed on EC2 instances in multiple availability zones. These EC2 instances simultaneously access a shared Amazon EFS file system using a traditional file permissions model. A recent internal security audit has found a potential security risk: the EFS file system is encrypted neither at rest nor in transit. What actions could be taken to address the potential security threat posed by the non-encryption of the EFS file system?

A. The encryption of data at rest has to be enabled when the Amazon EFS file system is created. The encryption of data in transit can be enabled when the file system is mounted on an EC2 instance.

B. The encryption of data at rest and in transit can both be enabled when the Amazon EFS file system is created.

C. The encryption of data at rest and in transit can only be enabled when the Amazon EFS file system is mounted on an EC2 instance.

D. The encryption of data at rest can be enabled when the Amazon EFS file system is mounted on an EC2 instance. The encryption of data in transit is enabled when the EFS file system is created using the AWS console or CLI.

Correct Answer: A

Explanation:

Both encryption of data in transit and encryption at rest are supported for EFS, so Amazon EFS offers a comprehensive encryption solution. The blog post https://aws.amazon.com/blogs/aws/new-encryption-of-data-in-transit-for-amazon-efs/ gives an introduction.

Option A is CORRECT: The encryption of data at rest can be enabled as an option when the EFS file system is created.


For the encryption in transit, it can be enabled when the EFS file system is mounted:

sudo mount -t efs -o tls fs-12345678:/ /mnt/efs

The reference is at https://docs.aws.amazon.com/efs/latest/ug/encryption.html.
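
For completeness, a hedged AWS CLI sketch of creating an encrypted file system (the creation token is hypothetical; omitting a KMS key falls back to the AWS-managed key):

# Hedged sketch: enable encryption at rest at creation time
aws efs create-file-system \
    --creation-token my-encrypted-fs \
    --encrypted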

Option B is incorrect: Because the encryption of data in transit is enabled when the EFS file system is mounted.

Option C is incorrect: Because the encryption of data at rest is enabled when the EFS file system is created.

Option D is incorrect: Same reasons as Options B and C.

Preparing for the AWS Certified Developer Associate exam? Try these AWS Developer Associate exam questions for FREE and check your preparation level.

Final Words

So, you’ve gone through these 10 free AWS Solutions Architect exam questions for the Professional-level exam. This set of AWS CSAP exam questions will prove an important resource and help you build the confidence to take the AWS Solutions Architect Professional exam.

The Whizlabs team works dedicatedly to help you cover the AWS Certified Solutions Architect exam syllabus, and thus we offer 15 more AWS Certified Solutions Architect Professional free exam questions as well as an AWS Certified Solutions Architect Professional practice exam with 400 unique practice questions that will get you fully prepared for the real exam.

Practice now and gain the confidence to pass the AWS CSAP exam!

Seeking any help with your AWS CSAP exam preparation? Submit your query or concern in the Whizlabs Forum to get it answered by industry experts.

About Neeru Jain

Technology Scientist by Mind and Passionate Writer by Heart!! With an enthusiasm for technological research and learning, Neeru turned out to be a technology expert. Her Belief: “Words are powerful enough to change Mind, Life, and the World; only the writer should have a real passion for Writing!!”