
30+ Free Questions on AWS Certified SAP on AWS – Specialty Exam (PAS-C01)

Preparing for the AWS Certified SAP on AWS – Specialty (PAS-C01) exam? Here we provide a list of free AWS Certified SAP on AWS – Specialty exam questions and answers to help you prepare well for the exam.

These sample PAS-C01 practice exam questions are similar in format to the real exam. In this blog, we list our newly updated 25+ FREE questions on the AWS Certified SAP on AWS exam.


What do AWS SAP Professionals do?

AWS SAP professionals are engaged in designing, deploying, operating, and migrating SAP workloads on the AWS platform. They can help improve workloads, scale processes, and reduce time by building flexible infrastructure.

Based on the best practices defined by the AWS Well-Architected Framework, AWS SAP professionals design SAP solutions and thus help achieve an optimized SAP environment on AWS.

What to expect in the AWS Certified SAP on AWS Specialty (PAS-C01) exam questions?

The exam's main focus is designing, implementing, and managing SAP workloads when moving them to the AWS platform.


The AWS SAP on AWS Specialty exam questions help to assess your ability to:

  • Design SAP solutions that run in the AWS Cloud in accordance with the AWS Well-Architected Framework
  • Develop SAP solutions that run in the AWS Cloud while maintaining SAP certification and support standards
  • Deploy new SAP workloads on AWS
  • Migrate existing SAP workloads to AWS
  • Operate SAP workloads on AWS infrastructure

How difficult is the AWS Certified SAP on AWS Specialty (PAS-C01) Exam?

If you plan to take the AWS Certified SAP on AWS – Specialty exam (PAS-C01), you will naturally wonder how difficult it is. The AWS Certified SAP on AWS – Specialty exam (PAS-C01) is considered quite difficult, and passing it requires a lot of preparation.

Here are some of the tips to get well-prepared for the exam:

  1. Understand the exam objectives and domains. Before you start studying, make sure you know what to expect and what to focus on.
  2. Use study resources. Various detailed study resources and sample questions exist; make use of them.
  3. Take plenty of AWS Certified SAP on AWS Specialty (PAS-C01) practice tests and do many hands-on exercises. The more you practice, the more familiar you will be with the exam format.
  4. Take care of your health and rest. Ensure you get sufficient rest before appearing for the exam; it helps you think clearly and deliver your best results.
  5. Be positive and believe in yourself. Make time to encourage yourself when you feel down. With proper preparation, you can definitely pass the AWS Certified SAP exam.

FREE Questions on AWS Certified SAP on AWS Specialty Exam (PAS-C01)

These free questions on the AWS Certified SAP on AWS Specialty exam can help you assess whether you are ready to take the real exam. Spend some time going through these free AWS Certified SAP exam questions before appearing for the exam.

Domain : Design of SAP workloads on AWS

Question 1: An EMEA region customer is planning to run their SAP workloads on AWS. The customer's SAProuter is currently running on-premises in their network's demilitarized zone (DMZ). They are looking for a similar solution to set up SAProuter in the AWS cloud.
Which of the following combinations of steps can help meet the customer's requirement? (Select TWO)

A. Launch the instance that the SAProuter software will be installed on into a public subnet of the VPC and assign it an Elastic IP address
B. Launch the instance that the SAProuter software will be installed on into a private subnet of the VPC and assign it an Elastic IP address
C. Create and configure a security group for the SAProuter instance, which allows inbound and outbound access from the SAP-provided IP address on TCP port 3299
D. Create and configure a security group for the SAProuter instance, which allows inbound and outbound access from the SAP-provided IP address on TCP port 3600
E. Create and configure a security group for the SAProuter instance, which allows inbound and outbound access from the internet on TCP port 3600

Correct Answers: A and C

Explanation: 

SAProuter is support software that provides a remote connection between the customer's network and SAP. This means that SAProuter always needs to be able to access SAP's support network and, at the same time, provide a secure connection to the SAP systems.

Therefore, SAProuter needs to be installed in a public subnet. Also, only inbound and outbound access from SAP-provided IP addresses on TCP port 3299 should be allowed.
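
As a rough sketch of what options A and C look like in practice, the security group could be created with boto3 along the following lines. The VPC ID, group name, and SAP support CIDR below are placeholders, not values from the question.

```python
import boto3

ec2 = boto3.client("ec2")

# Security group for the SAProuter instance (IDs/names are placeholders)
sg = ec2.create_security_group(
    GroupName="saprouter-sg",
    Description="SAProuter: TCP 3299 from the SAP support network only",
    VpcId="vpc-0123456789abcdef0",
)

# Allow inbound TCP 3299 only from the SAP-provided IP address
# (placeholder CIDR; use the address SAP assigns for your connection).
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3299,
        "ToPort": 3299,
        "IpRanges": [{"CidrIp": "203.0.113.10/32",
                      "Description": "SAP support network (placeholder)"}],
    }],
)
```

The default allow-all egress rule would likewise be replaced with an outbound rule restricted to the SAP-provided address on TCP port 3299.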

Option A is CORRECT because SAProuter software needs to be installed in a public subnet. 

Option B is incorrect because SAProuter software should not be installed in a private subnet. 

Option C is CORRECT because TCP port 3299 along with only SAP-provided IP is the correct choice. 

Option D is incorrect because TCP port 3600 is an incorrect choice. TCP port 3600 is used to connect to the SAP message server. 

Option E is incorrect because TCP port 3600 is an incorrect choice and access to the internet should not be allowed (only to SAP-provided IP address).

References: https://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-support.html, https://docs.aws.amazon.com/sap/latest/general/overview-router-solman.html

Domain: Operation and maintenance of SAP workloads on AWS

Question 2: A US-based customer is running their SAP S/4 HANA workloads on SUSE Linux on AWS cloud. The landscape includes Sandbox (SBX), Development (DEV), Quality (QAS) and Production (PRD) systems. To optimize the running cost, the customer’s SAP Basis team wants to stop all the non-production systems on Friday at 10:00 pm and then start them again on Monday at 06:00 AM. The SAP Basis team has already created a shell script to stop and start the SAP applications. They have scheduled it to run on Friday at 09:00 pm and Monday at 07:00 AM in crontab. 

The customer’s Basis team is now looking for a method to automatically stop and start the EC2 instances using an AWS managed service. Which of the following solutions can help the Basis team with their requirements? 

A. Use the AWS Dataprovider agent for SAP to automatically stop and start the EC2 instances.
B. Use a Systems Manager (SSM) maintenance window with the AWS-StartEC2Instance or AWS-StopEC2Instance tasks.
C. Use a Systems Manager (SSM) maintenance window with the AWS-StartEC2Downtime or AWS-StopEC2Downtime tasks.
D. Use the AWS Backint agent for SAP to automatically stop and start the EC2 instances.

Answer: B
Explanation: To schedule Amazon EC2 managed instances to start or stop using Systems Manager maintenance windows, register AWS-StartEC2Instance or AWS-StopEC2Instance Automation tasks to a maintenance window. The maintenance window targets the configured EC2 instances, and then stops or starts the instances using the provided Automation document steps on the chosen schedule.
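
A minimal sketch of the stop side with boto3 follows. The instance ID, timezone, and schedule below are assumptions for illustration; a second window registering AWS-StartEC2Instance would be created the same way for Monday morning.

```python
import boto3

ssm = boto3.client("ssm")

# Maintenance window that opens Friday at 22:00 (placeholder timezone).
window = ssm.create_maintenance_window(
    Name="stop-nonprod-sap",
    Schedule="cron(0 22 ? * FRI *)",
    ScheduleTimezone="America/New_York",
    Duration=2,
    Cutoff=1,
    AllowUnassociatedTargets=False,
)

# Register the non-production instance as a target (placeholder ID).
target = ssm.register_target_with_maintenance_window(
    WindowId=window["WindowId"],
    ResourceType="INSTANCE",
    Targets=[{"Key": "InstanceIds", "Values": ["i-0123456789abcdef0"]}],
)

# Run the AWS-StopEC2Instance Automation document against that target.
ssm.register_task_with_maintenance_window(
    WindowId=window["WindowId"],
    Targets=[{"Key": "WindowTargetIds", "Values": [target["WindowTargetId"]]}],
    TaskType="AUTOMATION",
    TaskArn="AWS-StopEC2Instance",
    TaskInvocationParameters={"Automation": {"Parameters": {
        "InstanceId": ["i-0123456789abcdef0"],
    }}},
    MaxConcurrency="1",
    MaxErrors="1",
)
```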

There is no such task as AWS-StartEC2Downtime or AWS-StopEC2Downtime. The AWS Backint agent is used for SAP HANA backups, whereas the AWS Dataprovider agent is a monitoring agent installed on the EC2 instances.

Option A is incorrect because the AWS Dataprovider agent for SAP cannot automate the start and stop tasks of the EC2 instances.

Option B is CORRECT because the AWS-StartEC2Instance or AWS-StopEC2Instance tasks can help the customer schedule desired start and stop time for EC2 instances.

Option C is incorrect because there is no such automation task as AWS-StartEC2Downtime or AWS-StopEC2Downtime tasks.

Option D is incorrect because AWS Backint agent for SAP cannot automate the start and stop task of the EC2 instance.
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/ssm-ec2-stop-start-maintenance-window/

https://aws.amazon.com/sap/ 

Domain:  Operation and maintenance of SAP workloads on AWS

Question 3:  A European pharma company is running their SAP HANA workloads on AWS cloud. They are using AWS Backint Agent for SAP HANA backups. The backups are stored in an Amazon S3 Standard bucket. The company’s SAP Basis team has set up a retention policy for the backups which will delete the backups older than 14 days. 

The SAP Basis team notices that backup files deleted from the SAP HANA backup console still appear in the Amazon S3 bucket.

What should the Basis team do to fix the issue with backups? 

A. Ensure that the IAM profile attached to the EC2 instance has the s3:DeleteObject permission.
B. Reinstall and reconfigure the AWS Backint Agent on the EC2 instance.
C. Ensure that the IAM profile attached to the EC2 instance has the s3:RemoveObject permission.
D. Restart the SAP HANA database and reconfigure the AWS Backint Agent on the EC2 instance.

Answer: A

Explanation: AWS Backint agent requires s3:DeleteObject permission to delete the backup files from the target Amazon S3 bucket when we delete the backup from the SAP HANA backup console. Ensure that the IAM profile attached to the EC2 instance has s3:DeleteObject permission. For backups that are already deleted from SAP HANA, we can manually delete the associated files from the Amazon S3 bucket. It is recommended to take additional precautions before manually deleting any backup files. Manually deleting the wrong backup file could impact the ability to recover the SAP HANA system in the future.

Reinstalling and reconfiguring the AWS Backint Agent on the EC2 instance won't fix the permission issue, and the same goes for restarting the HANA database. Also, there is no such AWS permission as s3:RemoveObject.
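
As a minimal sketch of the fix, the missing permission could be attached to the role behind the instance profile like this. The role name, policy name, and bucket name are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Inline policy granting only the missing permission (placeholder bucket).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:DeleteObject"],
        "Resource": "arn:aws:s3:::example-hana-backup-bucket/*",
    }],
}

iam.put_role_policy(
    RoleName="hana-backint-ec2-role",   # placeholder role name
    PolicyName="allow-backint-delete",
    PolicyDocument=json.dumps(policy),
)
```
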
Option A is CORRECT because providing s3:DeleteObject permission to the IAM profile attached to the EC2 instance will fix the issue.

Option B is incorrect because reinstalling and reconfiguring the AWS Backint Agent on the EC2 instance will not fix the issue.

Option C is incorrect because there is no such permission called s3:RemoveObject. It is s3:DeleteObject.

Option D is incorrect because restarting the HANA database will not solve the permission issues.
Reference:

https://docs.aws.amazon.com/sap/latest/sap-hana/aws-backint-agent-troubleshooting.html
https://docs.aws.amazon.com/sap/latest/sap-hana/aws-backint-agent-what-is.html
Domain:  Operation and maintenance of SAP workloads on AWS

Question 4: A customer has been running their SAP S/4 HANA workloads on the AWS cloud for 6 months now. They have a 4-system landscape, which includes Development, Quality, Pre-production, and Production systems. The customer is using Amazon Elastic Block Store (EBS) Provisioned IOPS SSD io2 as the storage option for /hana/data and /hana/log across the entire landscape.

The customer is worried about the increased cost of operations from running SAP workloads on AWS. AWS Compute Optimizer suggests that the /hana/data volume on the production system is underutilized.

Which of the following steps can the customer take to reduce the storage operational cost for the SAP S/4 HANA system? 

A. Change the storage type of /hana/data and /hana/log in the non-production systems to Amazon EBS General Purpose SSD gp3. Reduce the size of the /hana/data volume in production to the appropriate size.
B. Change the storage type of /hana/data and /hana/log in the non-production systems to Amazon EBS Throughput Optimized HDD st1. Increase the size of the /hana/data volume in production to the appropriate size.
C. Change the storage type of /hana/data and /hana/log in the non-production systems to Amazon EBS Throughput Optimized HDD st1. Reduce the size of the /hana/data volume in production to the appropriate size.
D. Change the storage type of /hana/data and /hana/log in the non-production systems to Amazon EBS General Purpose SSD gp3. Increase the size of the /hana/data volume in production to the appropriate size.

Answer: A

Explanation:  Both Amazon EBS General Purpose SSD (gp2 and gp3) and Provisioned IOPS SSD (io1, io2, and io2 Block Express) storage solutions for SAP HANA workloads are certified by SAP. For non-production use cases, the customer can use gp3, which balances performance and cost. Also, the customer can reduce the /hana/data to an appropriate size to save storage cost.

Amazon EBS Throughput Optimized HDD st1 is not certified for use with HANA.
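
For illustration, converting a non-production volume to gp3 is a single online API call with boto3. The volume ID is a placeholder, and the IOPS and throughput values are the gp3 baseline, not a sizing recommendation.

```python
import boto3

ec2 = boto3.client("ec2")

# Change the volume type in place; no detach or downtime is required.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",  # placeholder
    VolumeType="gp3",
    Iops=3000,          # gp3 baseline
    Throughput=125,     # MiB/s, gp3 baseline
)
```

Note that an EBS volume cannot be shrunk in place, so reducing the size of /hana/data in production means provisioning a smaller volume and migrating the data onto it.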

Option A is CORRECT because gp3 is sufficient for non-production workloads and reducing the size of /hana/data can help with cost reduction.

Option B is incorrect because Amazon EBS Throughput Optimized HDD st1 is not a certified storage type for HANA.

Option C is incorrect because Amazon EBS Throughput Optimized HDD st1 is not a certified storage type for HANA.

Option D is incorrect because the /hana/data size of the production system should decrease, not increase.

Reference:
https://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-storage-config.html 

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/hdd-vols.html

Domain:  Operation and maintenance of SAP workloads on AWS

Question 5: A US-based customer has recently migrated their SAP workloads to AWS using the lift-and-shift migration methodology. After running the systems in the AWS cloud for a few months, they realize that the cost of running the workloads on AWS is higher than they expected. While doing the root cause analysis, they found that some of the AWS services are either oversized or underutilized.

The customer is looking for guidance on resizing the AWS services, which can help them with cost optimization. Which of the following guides can help the customer right-size the services to what they actually need?

A. The customer can refer to the AWS Well-Architected Framework guide.
B. The customer can refer to the AWS Right Sizing guide.
C. The customer can refer to the SAP Lens for AWS guide.
D. The customer can refer to the “SAP on AWS sizing and capacity planning guide”.

Answer: B
Explanation:  Right sizing is the process of matching instance types and sizes to the customer’s workload performance and capacity requirements at the lowest possible cost. It’s also the process of looking at deployed instances and identifying opportunities to eliminate or downsize without compromising capacity or other requirements, which results in lower costs.
Right sizing is done after the SAP workloads are deployed on the AWS cloud.
The AWS Well-Architected Framework and the SAP Lens for AWS guide both provide best practices for architecting SAP solutions. There is no such guide as the “SAP on AWS sizing and capacity planning guide”.
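
For illustration, underutilization findings of the kind used during right sizing can also be pulled programmatically from AWS Compute Optimizer; the instance ARN below is a placeholder.

```python
import boto3

co = boto3.client("compute-optimizer")

# Fetch right-sizing recommendations for one instance (placeholder ARN).
resp = co.get_ec2_instance_recommendations(
    instanceArns=[
        "arn:aws:ec2:us-east-1:111122223333:instance/i-0123456789abcdef0",
    ],
)

for rec in resp["instanceRecommendations"]:
    print(rec["currentInstanceType"], rec["finding"])
    for option in rec["recommendationOptions"]:
        print("  candidate:", option["instanceType"])
```
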
Option A is incorrect because the Well-Architected Framework does not provide resizing guidance.

Option B is CORRECT because the Right Sizing guide provides best practices around sizing EC2 instances after deployment.

Option C is incorrect because SAP Lens for AWS does not provide resizing guidance.

Option D is incorrect because the “SAP on AWS sizing and capacity planning guide” does not exist.


Reference:
https://docs.aws.amazon.com/sap/latest/sap-netweaver/net-win-sizing.html
https://docs.aws.amazon.com/whitepapers/latest/cost-optimization-right-sizing/right-sizing-ongoing-process.html

Domain: Operation and maintenance of SAP workloads on AWS

Question 6: A customer is running their SAP S/4 HANA workloads on the AWS cloud. They want to set up an SAP HANA system for a proof-of-concept (PoC) project on the AWS cloud. This PoC project will last 3 weeks. The customer wants to build an S/4 HANA 2021 FPS02 system that is pre-activated with SAP Best Practices for SAP S/4HANA core functions.

Which of the following solutions can meet the customer’s requirement in the shortest time?

A. The customer can use a pre-activated SAP S/4 HANA 2021 FPS02 AMI image from AWS Marketplace to build the system.
B. The customer can use the SAP S/4 HANA 2021 FPS02 Appliance Templates from the SAP Cloud Appliance Library to build the system.
C. The customer can deploy the SAP S/4 HANA 2021 FPS02 system using AWS Launch Wizard.
D. The customer can deploy the SAP S/4 HANA 2021 FPS02 system using AWS CloudFormation scripts.

Answer: B

Explanation: The Cloud Appliance Library comes pre-activated with SAP Best Practices for SAP S/4HANA core functions. It is also the recommended method of deployment for a shorter-duration PoC. The EC2 instance types are pre-selected here and usually sized for test and demo systems. This option also provides the shortest time.

AWS Marketplace does not have a pre-activated SAP S/4 HANA 2021 FPS02 AMI image. AWS Launch Wizard asks for instance types and other details, and its guided procedure can take some time. CloudFormation scripts are the lengthier option because of the time required to create them; this is not the shortest-time option.

Option A is incorrect because a pre-activated SAP S/4 HANA 2021 FPS02 AMI image is not available in AWS Marketplace.

Option B is CORRECT because the Cloud Appliance Library provides content pre-activated with SAP Best Practices for SAP S/4HANA core functions.

Option C is incorrect because AWS Launch Wizard does not provide pre-activated content.

Option D is incorrect because a CloudFormation script takes time to develop and does not provide pre-activated content.

Reference:

https://docs.aws.amazon.com/sap/latest/general/overview-sap-on-aws.html 

Domain : Design of SAP workloads on AWS

Question 7: A Singapore-based financial customer that has been running their SAP workloads on-premises in their datacenter for the last 30 years is considering migrating to the AWS cloud. Their SAP team is worried about the security of SAP applications in the AWS cloud.
Which of the following is true about the security in the AWS cloud? (Select TWO)

A. AWS is responsible for “Security of the cloud”. It involves protecting the infrastructure that runs the services offered in the AWS Cloud
B. Customer is responsible for “Security in the cloud”. It involves protecting the infrastructure that runs the services offered in the AWS Cloud
C. AWS is responsible for “Security of the cloud”. It involves protecting the data owned by customers along with Identity and access management in the AWS cloud
D. Customer is responsible for “Security in the cloud”. It involves protecting the data owned by customers along with Identity and access management in the AWS cloud

Correct Answers: A and D

Explanation: 

Here the understanding of the AWS Shared Responsibility Model is important. AWS is responsible for “Security of the cloud” and the customer is responsible for “Security in the cloud”. “Security of the cloud” means protecting the infrastructure that runs the services offered in the AWS cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services. “Security in the cloud” covers the data, platform, identity and access management, and so on. It depends on the services that the customer selects to run in the AWS cloud.

Option A is CORRECT because AWS is responsible for “Security of the cloud”.

Option B is incorrect because the definition of “Security in the cloud” here is wrong.

Option C is incorrect because the definition of “Security of the cloud” here is wrong.

Option D is CORRECT because the Customer is responsible for “Security in the cloud”.

References: Shared Responsibility Model – Amazon Web Services (AWS), Best Practice 5.1 – Define security roles and responsibilities – SAP Lens

Domain : Design of SAP workloads on AWS

Question 8: A customer is running their SAP workloads in their on-premises environment. The customer wants to connect their on-premises datacenter to the AWS cloud. They are looking for a networking solution that provides encryption during data transfer and are not worried about the cost of setup.
Which of the following AWS services meets the customer’s requirement?

A. AWS Site-to-Site VPN
B. AWS Direct Connect
C. AWS Client VPN
D. AWS Customer Gateway

Correct Answer: A

Explanation: 

AWS Site-to-Site VPN and AWS Direct Connect are two options that provide connectivity from the customer datacenter to the AWS cloud. Of these, AWS Site-to-Site VPN is both cost-effective and provides encryption by default. AWS Direct Connect is fast, but it is costly and does not provide encryption for data in transit.

AWS Client VPN is a managed client-based VPN service that is used by users to connect to either the AWS network or the customer’s network. AWS customer gateway is a physical or software appliance that a customer manages in their on-premises network for Site-to-Site VPN.
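
A rough sketch of the Site-to-Site VPN building blocks with boto3; the on-premises gateway IP, ASN, and VPC ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# The customer gateway represents the on-premises VPN device.
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.10",  # placeholder on-premises gateway IP
    BgpAsn=65000,             # placeholder ASN
)

# The virtual private gateway is the AWS-side VPN endpoint.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    VpcId="vpc-0123456789abcdef0",  # placeholder
)

# The VPN connection's two IPsec tunnels are encrypted by default.
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
)
```
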
Option A is CORRECT because Site-to-Site VPN provides encryption by default.

Option B is incorrect because AWS Direct Connect does not provide encryption by default.

Option C is incorrect because AWS Client VPN does not connect to the customer’s on-premise datacenter.

Option D is incorrect because AWS customer gateway is a component used in Site-to-Site VPN. 

References: https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html, https://docs.aws.amazon.com/wellarchitected/latest/sap-lens/security.html


Question 9: A company is planning to deploy their SAP BW/4HANA workloads on AWS. They are planning for a scale-out implementation of the HANA database with 4 nodes. The company is also planning to set up HANA Auto host failover, where three nodes will act as worker nodes and the fourth node will be a standby node.
The company is looking for a low network latency solution that is required for internode communication in a scale-out deployment. 
Which of the following solutions will meet the company’s requirements?

A. Deploy the HANA nodes in a cluster placement group across multiple availability zones
B. Deploy the HANA nodes in a spread placement group across a single availability zone
C. Deploy the HANA nodes in a cluster placement group across a single availability zone
D. Deploy the HANA nodes in a spread placement group across multiple availability zones

Correct Answer: C

Explanation: 

To meet the SAP certification for internode communication in an SAP HANA scale-out deployment, it is necessary to use a cluster placement group. A cluster placement group can only be deployed in a single availability zone.
A cluster placement group is recommended for applications that benefit from low network latency, high network throughput, or both, whereas a spread placement group is a group of instances that are each placed on distinct hardware.
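
A minimal sketch of option C with boto3: the four scale-out nodes are launched into a cluster placement group, which keeps them in a single Availability Zone. The AMI, subnet, and instance type are placeholders (the type must be HANA-certified).

```python
import boto3

ec2 = boto3.client("ec2")

# The cluster strategy packs instances close together for low latency.
ec2.create_placement_group(GroupName="hana-scaleout", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder
    InstanceType="r5.24xlarge",           # example HANA-certified type
    MinCount=4,                           # 3 worker nodes + 1 standby
    MaxCount=4,
    SubnetId="subnet-0123456789abcdef0",  # one subnet = one AZ
    Placement={"GroupName": "hana-scaleout"},
)
```
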
Option A is incorrect because a cluster placement group cannot span availability zones.

Option B is incorrect because a spread placement group is not the correct solution. 

Option C is CORRECT because a cluster placement group is required to meet the HANA internode low network latency requirements.

Option D is incorrect because a spread placement group is not the correct solution.

Reference: Best Practice 13.4 – Choose Regions and Availability Zones to minimize latency – SAP Lens

Domain : Design of SAP workloads on AWS

Question 10: A US-based financial company is running their SAP S/4 HANA system in the us-east-1 region on AWS. They are running the SAP system in high availability mode within two different Availability Zones. They plan to have a multi-region disaster recovery (DR) solution for their S/4 HANA system. The company is also looking for a database-native solution for data replication.
Which of the following is the most optimal solution for the company’s DR requirements?

A. Set up the disaster recovery system in us-west-1. Use HANA System Replication in an asynchronous (async) mode for data replication
B. Set up the disaster recovery system in us-west-1. Use HANA System Replication in synchronous (sync) mode for data replication
C. Set up the disaster recovery system in us-west-1. Use S3 Cross Region Replication (CRR) for backup replication across regions
D. Set up the disaster recovery system in us-west-1. Use HANA System Replication in synchronous in-memory (syncmem) mode for data replication

Correct Answer: A 

Explanation: 

Here the understanding of Multi-Region architecture patterns for SAP HANA is important. The architect also needs to know different HANA system replication modes. For disaster recovery (DR) solutions, the asynchronous mode of SAP HANA System Replication is recommended because of the increased latency between the AWS regions. 

A synchronous (sync) or synchronous in-memory (syncmem) mode waits for the operation to be completed on the secondary side and then commits the transaction in the database on the primary side. 

S3 Cross Region Replication (CRR) is not a database native solution. 

Option A is CORRECT because the asynchronous (async) mode of HANA system replication is the correct choice as the latency between the AWS regions is comparatively higher than availability zones. 

Option B is incorrect because the synchronous (sync) mode of HANA system replication is not the correct choice due to latency issues.

Option C is incorrect because S3 Cross Region Replication (CRR) is not a database native solution.

Option D is incorrect because the synchronous in-memory (syncmem) mode of HANA system replication is not the correct choice due to latency issues.

References: https://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-patterns-multi.html, SAP on AWS: Build for availability and reliability

Domain : Implementation of SAP workloads on AWS

Question 11: A US-based banking customer is running their SAP workloads on AWS. They have set up a high availability and disaster recovery solution for their production SAP system in AWS. The highly available systems are running in the us-east-1 region in two Availability Zones, AZ-1 and AZ-2. The disaster recovery systems are placed in the us-west-1 region.
The customer is looking for a solution against logical data loss that could happen due to malicious activity or human error. 
Which of the following solutions is recommended by AWS to meet the customer’s requirement?

A. To protect against logical data loss, it is recommended that regular copies of the data are backed up to an Amazon S3 bucket. This bucket should be replicated to another Amazon S3 bucket owned by a separate AWS account in either us-east-1 or us-west-1 using Same-Region Replication (SRR) or Cross-Region Replication (CRR) respectively
B. To protect against logical data loss, it is recommended that regular copies of the data are backed up to an Amazon S3 bucket. This bucket is replicated using Cross-Region replication (CRR) to another Amazon S3 bucket in us-west-1
C. To protect against logical data loss, it is recommended that regular copies of the data are backed up to an Amazon S3 bucket. This bucket is replicated using Same-Region replication (SRR) to another Amazon S3 bucket in us-east-1
D. To protect against logical data loss, it is recommended that regular copies of the data are backed up to an Amazon S3 bucket. This bucket should have Lifecycle rules enabled so that data can be periodically moved to AWS Glacier

Correct Answer: A

Explanation:  

Here, understanding what ‘logical data loss’ means is important. Data becoming corrupted or lost due to human error should also be considered in a good architecture. To protect against logical data loss, AWS recommends replicating the S3 bucket to a bucket in another account; it does not matter whether that bucket is in the same region or another. This ensures that data lost due to malicious activity within the AWS account, or due to human error, can be recovered.

Using S3 Same-Region Replication (SRR) or CRR is not a valid solution if the S3 buckets are replicated under the same AWS account.

AWS Glacier is used for archiving old backups or old data. It cannot protect against logical data loss. 
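
As a rough sketch, the replication rule from option A could be configured with boto3 as below. The bucket names, account ID, and role ARN are placeholders, and versioning must already be enabled on both buckets.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="sap-backups-primary",  # placeholder source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [{
            "ID": "replicate-to-second-account",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # replicate the whole bucket
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::sap-backups-recovery",  # placeholder
                "Account": "444455556666",  # the separate AWS account
                # Hand ownership of replicas to the destination account.
                "AccessControlTranslation": {"Owner": "Destination"},
            },
        }],
    },
)
```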

Option A is CORRECT because replicating to S3 buckets in another account is the correct choice.  

Option B is incorrect because unless the data is replicated to an S3 bucket in another account, it is not protected against logical data loss.

Option C is incorrect because unless the data is replicated to an S3 bucket in another account, it is not protected against logical data loss.

Option D is incorrect because AWS Glacier is not the correct choice here. 

Reference: Failure scenarios – General SAP Guides 

Domain : Implementation of SAP workloads on AWS

Question 12: A US-based OTT platform company is running their SAP workloads on-premises.
To reduce the infrastructure administration overhead, the company has decided to move the non-production SAP workloads to AWS. This includes the sandbox, development, and quality environments. The company's on-premises network is connected to AWS via a Site-to-Site VPN connection.
The company has decided to keep running the SAProuter and SAP Solution Manager systems in their on-premises environment.
Which of the following statements is true regarding this architecture? (Select TWO)

A. Set up support connectivity for the SAP systems on AWS with a change in the saprouttab files
B. Set up support connectivity for the SAP systems on AWS without a change in the saprouttab files
C. A new connection to SAP Support Backbone is required from the Solution Manager system
D. A new connection to SAP Support Backbone is not required from the Solution Manager system

Correct Answers: A and D

Explanation:

The only change required is to add the IPs of the new SAP systems to the saprouttab file, so a change in the saprouttab files is required. Secondly, unless we are adding a managed system to the Solution Manager (which is not mentioned in this question), no change in the Solution Manager is required.

A new connection to the SAP Support Backbone is also not required, as the existing connection of the SAP Solution Manager will work just fine.

Option A is CORRECT because a change in saprouttab files is required to add the changed IPs.

Option B is incorrect because a change in the saprouttab files is required.

Option C is incorrect because a new SAP Support Backbone connection is not required; the existing connection of the SAP Solution Manager will work just fine.

Option D is CORRECT because it is true that a new SAP Support Backbone connection is not required.

Reference: SAProuter and SAP Solution Manager – General SAP Guides 

Domain : Implementation of SAP workloads on AWS

Question 13: A customer who is running their SAP workloads on-premises wants to deploy the SAP HANA database on the AWS cloud. The customer wants to first try a proof-of-concept (PoC) architecture and use the HANA Quick Start reference deployment in a Single-AZ, single-node deployment scenario. The customer wants to monitor the Quick Start deployment progress.
Where can the customer check relevant deployment logs? 

A. The customer can find the deployment logs in the /root/install/ folder of the SAP HANA instance. The name of the log file is install.log 
B. The customer can find the deployment logs in the /root/install/ folder of the SAP HANA instance. The name of the log file is deploy.log
C. The customer can find the deployment logs in the /root/deployment/ folder of the SAP HANA instance. The name of the log file is install.log
D. The customer can find the deployment logs in the /root/deployment/ folder of the SAP HANA instance. The name of the log file is deploy.log

Correct Answer: A

Explanation: 

The deployment logs for Quick Start reference deployment for HANA are located in /root/install/ folder. The name of the log file is install.log.

There is no such folder called /root/deployment or file called deploy.log

Option A is CORRECT because the deployment log file, install.log is in the /root/install/ folder.

Option B is incorrect because there is no such file called deploy.log.

Option C is incorrect because there is no such folder called /root/deployment/

Option D is incorrect because there is no such file called deploy.log or folder called  /root/deployment/

References: Troubleshooting – SAP HANA on AWS, Architecture – SAP HANA on AWS

Domain : Migration of SAP workloads to AWS

Question 14: A customer is running their SAP workloads in a hybrid cloud model. The non-production systems are hosted on the AWS cloud and the production systems run in the customer's on-premises datacenter. There is Direct Connect connectivity between the on-premises datacenter and the AWS cloud.
The customer is planning to create a new HANA sandbox system (SBX) on the AWS cloud from the data of the production system (PRD) based on HANA 2.0 SPS 4, using the backup-restore method. 
Which of the following combination of steps should the customer perform to achieve the requirement? (Select TWO)

A. Launch the EC2 instance hosting the SBX system in the public subnet of VPC. Install a HANA database with a version same or higher than the version of the PRD database on this EC2 instance
B. Launch the EC2 instance hosting the SBX system in the private subnet of VPC. Install a HANA database with a version same or higher than the version of the PRD database on this EC2 instance
C. Launch the EC2 instance hosting the SBX system in the private subnet of VPC. Install a HANA database of any version on this EC2 instance
D. Create a backup of the tenant database of the PRD system and transfer this backup to an S3 bucket that is accessible by the SBX EC2 instance. Restore the backup to the newly created SBX database
E. Create a backup of the SYSTEMDB and tenant database of the PRD system and transfer this backup to an S3 bucket that is accessible by the SBX EC2 instance. Restore the backup to the newly created SBX database

Correct Answers: B and D

Explanation: 

Here the understanding of an SAP homogeneous system copy, along with SAP Note 1844468, is important. The version of the target database should always be the same as or higher than that of the source database, and a database EC2 instance should always be launched in a private subnet. The database EC2 instance should have access to the S3 bucket to restore the database.
A backup of only the tenant database is required for restoring the HANA database on the target system. A SYSTEMDB backup is not required.

Option A is incorrect because a database EC2 instance is not recommended to be launched in a public subnet. 

Option B is CORRECT because a database EC2 instance is recommended to be launched in a private subnet. The version of the target database should be the same or higher than the version of the source database. 

Option C is incorrect because the version of the target database cannot be lower than the source version.

Option D is CORRECT because only tenant database backup is required and the EC2 instance should have access to the S3 bucket where backups will be stored. 

Option E is incorrect because only a tenant database backup is required.

Reference: https://docs.aws.amazon.com/sap/latest/sap-hana/migrating-hana-hana-to-aws.html 

Domain : Migration of SAP workloads to AWS

Question 15: A company that is migrating their SAP workloads to AWS is looking for an option that can be used to resolve IP addresses and hostnames in the VPC. In their on-premises environment, they are using a highly available domain name system (DNS) server. The company is looking for a similarly reliable AWS-managed option in the AWS cloud.
Which of the following is the most optimal option that can help the customer meet their requirement?

A. Use Amazon Route53 as a DNS service. It provides inherent high availability as part of its design
B. Set up a DNS server in the EC2 instance. Ensure high availability of this EC2 instance
C. Maintain /etc/hosts files in each EC2 instance and ensure high availability for these instances
D. Use Amazon CloudFront as a DNS service. It provides inherent high availability as part of its design

Correct Answer: A

Explanation:  

Amazon Route 53 is the correct choice as it is an AWS-managed DNS service. Also, the reliability pillar of the AWS Well-Architected Framework suggests using AWS services that have inherent availability where applicable. Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. Route 53 can connect user requests to internet applications running on AWS or on premises.

Setting up a DNS server in an EC2 instance will put the administration overhead on the customer. Maintaining /etc/hosts is not an optimal approach and the administration still lies with the customer.
Amazon CloudFront is not a DNS service but a content delivery network (CDN) service. 
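
A minimal sketch of option A: a Route 53 private hosted zone attached to the VPC, plus one A record for an SAP host. The domain, VPC ID, and IP address are placeholders.

```python
import time
import boto3

r53 = boto3.client("route53")

# Private hosted zone, resolvable only inside the associated VPC.
zone = r53.create_hosted_zone(
    Name="sap.example.internal",
    CallerReference=str(time.time()),  # must be unique per request
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"},
    HostedZoneConfig={"Comment": "SAP private zone", "PrivateZone": True},
)

# One A record mapping a HANA hostname to its private IP (placeholder).
r53.change_resource_record_sets(
    HostedZoneId=zone["HostedZone"]["Id"],
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "hanadb.sap.example.internal",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "10.0.1.25"}],
        },
    }]},
)
```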

Option A is CORRECT because Amazon Route53 is an AWS-managed, highly available DNS service.

Option B is incorrect because a DNS server on an EC2 instance is not an AWS-managed option.

Option C is incorrect because maintaining /etc/hosts files on EC2 instances is not an AWS-managed option.

Option D is incorrect because Amazon CloudFront is not a DNS service.

References: https://docs.aws.amazon.com/wellarchitected/latest/sap-lens/best-practice-11-2.html, https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html

Domain : Design of SAP workloads on AWS

Question 16: A US-based customer is planning to deploy a 10 TB SAP HANA database with a high availability option in the AWS cloud. The primary and secondary SAP HANA databases are running in separate private subnets in different Availability Zones within an AWS Region. The VPC CIDR range of this setup is 10.0.0.0/16.
The customer is using SUSE Linux Enterprise High Availability Extension as the clustering solution. The Overlay IP assigned for the SAP HANA database cluster is 192.168.0.54.
Which of the following solutions can the customer use for routing the Overlay IP address? (Select TWO)

A. The customer can use an AWS Transit Gateway that serves as a central hub to facilitate network connection to the Overlay IP address
B. The customer can use an AWS Virtual Private Gateway that serves as a central hub to facilitate network connection to the Overlay IP address
C. The customer can use an AWS Network Load Balancer that enables network access to the Overlay IP address
D. The customer can use an AWS Application Load Balancer that enables network access to the Overlay IP address
E. The customer can use the clustering capabilities of SUSE Linux Enterprise High Availability Extension to facilitate network connection to the Overlay IP address

Correct Answers: A and C

Explanation: 

An Overlay IP address is a private IP address that is outside of the VPC CIDR range. To route the traffic to both primary and secondary databases either an AWS Transit Gateway can be used or an AWS Network Load Balancer can be used. 

An AWS Transit Gateway acts as a hub that controls how traffic is routed among all the connected networks which act like spokes.
Similarly, when an AWS Network Load Balancer receives a connection request, it selects a target from the Network Load Balancer target group to route network connection requests to a destination address which can be an overlay IP address.
An AWS Virtual Private Gateway is the VPN endpoint on the Amazon side of your Site-to-Site VPN connection; it is not capable of routing this network traffic. Similarly, an AWS Application Load Balancer works at the application layer (HTTP/HTTPS) and is not capable of providing load balancing at the TCP/IP layer. SUSE Linux Enterprise High Availability Extension is just a clustering solution and does not, by itself, enable network access to an Overlay IP address.
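
For context, the route table entry behind an overlay IP looks like the sketch below. In a SUSE HAE cluster this update is normally performed by the cluster's failover logic rather than by hand; the route table and ENI IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# 192.168.0.54 is outside the 10.0.0.0/16 VPC CIDR, so traffic to it
# follows this explicit /32 route to the active HANA node's ENI.
ec2.replace_route(
    RouteTableId="rtb-0123456789abcdef0",        # placeholder
    DestinationCidrBlock="192.168.0.54/32",
    NetworkInterfaceId="eni-0123456789abcdef0",  # active node's ENI
)
```

On failover, pointing this route at the standby node's ENI is what moves the overlay IP between Availability Zones.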

Option A is CORRECT because an AWS Transit Gateway acts as a hub and the connected networks act as spokes. The sources and destinations are maintained in the Transit Gateway route tables.

Option B is incorrect because a Virtual Private Gateway (VPG) is used to connect a VPN connection or AWS Direct Connect to the VPC. 

Option C is CORRECT because Network Load Balancer can be used for routing the Overlay IP address.

Option D is incorrect because an AWS Application Load Balancer does not work at the fourth layer of the Open Systems Interconnection (OSI) model.

Option E is incorrect because a cluster solution such as SUSE Linux Enterprise Server High Availability Extension does not enable network access to an Overlay IP address.

Reference: https://docs.aws.amazon.com/sap/latest/sap-hana/sap-ha-overlay-ip.html

Domain : Design of SAP workloads on AWS

Question 17: An SAP solution architect at a pharma company is designing a high availability solution for their SAP S/4 HANA production system on the AWS cloud. They are planning to use a clustering solution for automatic switchover and failover scenarios. This highly available system will be deployed in two different Availability Zones in different subnets in a single AWS Region.
Which of the following services of SAP S/4 HANA systems are considered Single Points of Failure (SPOFs) where the architect needs to configure high availability?

A. The Application Servers and the database are considered Single Points of Failure (SPOFs) in a standard SAP architecture where high availability is required
B. The ABAP SAP Central Services (ASCS) and the database are considered Single Points of Failure (SPOFs) in a standard SAP architecture where high availability is required
C. The ABAP SAP Central Services (ASCS) and the Application Servers are considered Single Points of Failure (SPOFs) in a standard SAP architecture where high availability is required
D. The database only is considered a Single Point of Failure (SPOF) in a standard SAP architecture where high availability is required

Correct Answer: B

Explanation:  

It is both the ABAP SAP Central Services (ASCS) and the database that are considered Single Points of Failure (SPOFs) in a standard SAP architecture. The ABAP SAP Central Services (ASCS) consist of the enqueue and message services. These, along with the database, cannot be made redundant by configuring multiple instances of them on different host machines.

Additional SPOFs in an SAP installation are the Network File System (NFS) (for UNIX-based application hosts) and file shares (for Microsoft Windows-based application hosts). If a Domain Name Service (DNS) is used, then DNS is also considered a single point of failure.

Application Servers can be made redundant by configuring them as multiple instances on different hosts. Therefore, they are not Single Points of Failure (SPOFs).

Option A is incorrect because Application Servers are not Single Points of Failure (SPOFs).

Option B is CORRECT because both the ABAP SAP Central Services (ASCS) and the database are Single Points of Failure (SPOFs).

Option C is incorrect because Application Servers are not Single Points of Failure (SPOFs).

Option D is incorrect because the ABAP SAP Central Services (ASCS), common file shares, and DNS (if used) are also Single Points of Failure (SPOFs).

References: https://aws.amazon.com/blogs/awsforsap/deploying-highly-available-sap-systems-using-sios-protection-suite-on-aws/, System Failure (SAP NetWeaver AS) (SAP Library – SAP High Availability), https://aws.amazon.com/sap/docs/

Domain : Design of SAP workloads on AWS

Question 18: A Singapore-based public sector company has deployed their SAP workloads on AWS in the ap-southeast-1 region, which is the only available AWS region in the country. As per the company's policy, the data must reside within the country.
They are looking for a solution that will ensure High Availability(HA) and Disaster Recovery (DR). Which of the following options meets the company’s requirements? 

A. Set up High Availability for SAP workloads in AZ-1 & AZ-2 of the ap-southeast-1 region. Set up DR for SAP workloads in AZ1 of the ap-south-1 region
B. Set up High Availability for SAP workloads in AZ-1 & AZ-2 of the ap-southeast-1 region. Set up DR for SAP workloads in AZ-3 of the ap-southeast-1 region
C. Set up High Availability for SAP workloads in AZ-1 of the ap-southeast-1 region. Set up DR for SAP workloads in AZ-2 of the ap-southeast-1 region
D. Set up High Availability for SAP workloads in AZ-1 & AZ-2 of the ap-southeast-1 region. Set up DR for SAP workloads in AZ-1 & AZ-2 of the ap-south-1 region

Correct Answer: B

Explanation: 

It is important to understand the single-region and multi-region deployment patterns of SAP workloads on AWS. Here the requirement is very clear: the data should not leave the country, and ap-southeast-1 is the only AWS region in it. Therefore, ap-south-1 is not an option for the HA or DR setup.

Most of the AWS regions have 3 Availability Zones, so we need to deploy the HA in AZ-1 and AZ-2 and use AZ-3 for disaster recovery (DR) setup. 

Option A is incorrect because ap-south-1 is not a correct region for DR setup as the data leaves the country in this case.

Option B is CORRECT because using AZ-3 of the ap-southeast-1 region is a valid choice. 

Option C is incorrect because a high availability (HA) setup in a single AZ is not a valid option as it does not protect against AZ failures.

Option D is incorrect because ap-south-1 is not a correct region for DR setup as the data leaves the country in this case. 

References: https://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-patterns.html, https://docs.aws.amazon.com/sap/latest/sap-hana/hana-ops-patterns-single.html 

Domain : Implementation of SAP workloads on AWS 

Question 19: A customer is running their SAP workloads on-premises. The landscape consists of multiple SAP systems running on the SAP Adaptive Server Enterprise (ASE) database on Linux operating systems. The customer is looking for a method to back up the database directly to an S3 bucket on the AWS cloud using the NFS protocol.
Which of the following is a valid solution that meets the customer’s requirement? 

A. Create an Amazon S3 File Gateway using AWS Storage Gateway. Create an NFS file share and connect it to Amazon S3. Create a mount point on a database host and mount the NFS file share
B. Create an Amazon S3 Volume Gateway using AWS Storage Gateway. Create an NFS file share and connect it to Amazon S3. Create a mount point on a database host and mount the NFS file share
C. Create an Amazon S3 Tape Gateway using AWS Storage Gateway. Create an NFS file share and connect it to Amazon S3. Create a mount point on a database host and mount the NFS file share
D. Create an Amazon Transit Gateway. Create an NFS file share and connect it to Amazon S3. Create a mount point on a database host and mount the NFS file share

Correct Answer: A

Explanation: 

Amazon S3 File Gateway is the correct choice here, as it supports the NFS protocol and is most often used to transfer direct backups to S3 using the NFS or SMB protocols. Amazon S3 Volume Gateway and Amazon S3 Tape Gateway support the iSCSI and iSCSI VTL protocols, respectively; they do not support the NFS protocol.
AWS Transit Gateway is not a storage gateway service. It provides a hub-and-spoke model for connecting VPCs and on-premises networks as a fully managed service.
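
A hedged sketch of option A with boto3, assuming the File Gateway has already been activated; all ARNs and the client CIDR below are placeholders.

```python
import uuid
import boto3

sgw = boto3.client("storagegateway")

# NFS file share on the S3 File Gateway, backed by the backup bucket.
share = sgw.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),
    GatewayARN=("arn:aws:storagegateway:us-east-1:111122223333:"
                "gateway/sgw-12345678"),
    Role="arn:aws:iam::111122223333:role/sgw-s3-access",  # placeholder
    LocationARN="arn:aws:s3:::sap-ase-backups",           # target bucket
    ClientList=["10.0.0.0/16"],  # hosts allowed to mount the share
)
print(share["FileShareARN"])
```

The database host can then mount the share with a standard NFS mount and write the SAP ASE dump files straight through to the S3 bucket.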

Option A is CORRECT because Amazon S3 File Gateway supports NFS protocol.

Option B is incorrect because Amazon S3 Volume Gateway supports only the iSCSI protocol and does not support the NFS protocol.

Option C is incorrect because Amazon S3 Tape Gateway supports only the iSCSI VTL protocol and does not support the NFS protocol.

Option D is incorrect because Amazon Transit Gateway is not a storage gateway service.

References: Integrate an SAP ASE database to Amazon S3 using AWS Storage Gateway, AWS Storage Gateway | Amazon Web Services  

Domain : Implementation of SAP workloads on AWS 

Question 20: A US-based pharma company is planning to deploy the SAP HANA database on the AWS cloud. The database size is around 10 TB. They will be deploying the database in a private subnet of a VPC. They are looking for an operating system to host the HANA database that provides inherent high availability capabilities.
Which of the following operating systems, available in AWS Marketplace, meets the company's requirements?

A. SUSE Linux Enterprise Server (SLES) 15.1 for SAP
B. SUSE Linux Enterprise Server (SLES) 15.1
C. Red Hat Enterprise Linux (RHEL) 8.1
D. Microsoft Windows Server 2016 

Correct Answer: A

Explanation: 

SUSE and Red Hat provide additional benefits in the SAP editions of their operating systems. SUSE Linux Enterprise Server (SLES) for SAP and Red Hat Enterprise Linux (RHEL) for SAP come with benefits like extended support, high availability and tuning packages for SAP applications, etc. Hence, SUSE Linux Enterprise Server (SLES) for SAP is the correct choice here.
The regular SUSE Linux Enterprise Server (SLES) and Red Hat Enterprise Linux (RHEL) operating systems do not include high availability packages by default; these have to be installed separately. Microsoft Windows Server 2016 is not a supported operating system for the SAP HANA database.

Option A is CORRECT because SUSE Linux Enterprise Server (SLES) 15.1 for SAP provides additional benefits like extended support, high availability and SAP tuning packages. 

Option B is incorrect because SUSE Linux Enterprise Server (SLES) does not have high availability packages by default.

Option C is incorrect because Red Hat Enterprise Linux (RHEL) does not have high availability packages by default.

Option D is incorrect because Microsoft Windows Server 2016 is not a supported operating system for the SAP HANA database. 

Reference: https://docs.aws.amazon.com/sap/latest/sap-hana/planning-the-deployment.html 

Domain : Implementation of SAP workloads on AWS 

Question 21: A customer is planning a greenfield implementation of the SAP S/4 HANA system on the AWS cloud. They have performed the sizing of the HANA database using the SAP Quick Sizer report and have the required value for SAPS (SAP Application Performance Standard). The next step is to select an EC2 instance for the HANA database.
Which of the following sources can the customer refer to in order to choose an appropriate EC2 instance for the HANA database on the AWS cloud? (Select TWO)

A. The customer can refer to the SAP Community Network (SCN) blog for selecting the EC2 instance
B. The customer can refer to the AWS blog for selecting the EC2 instance
C. The customer can refer to the ‘SAP Certified and Supported SAP HANA Hardware Directory’ page for selecting the EC2 instance
D. The customer can refer to ‘SAP Note 1656099 – SAP Applications on AWS: Supported DB/OS and Amazon EC2 products’ for selecting the EC2 instance
E. The customer can refer to the SAP Product Availability Matrix (PAM) for selecting the EC2 instance

Correct Answers: C and D 

Explanation: 

SAP and AWS work together to test and certify Amazon EC2 instance types for SAP on AWS solutions. SAP Note ‘1656099 – SAP Applications on AWS: Supported DB/OS’ and the SAP page ‘SAP Certified and Supported SAP HANA Hardware Directory’ are the authoritative sources for selecting an EC2 instance for the HANA database. An SAP or AWS blog may mention EC2 instances, but blogs are not to be treated as official sources of information.

The SAP Product Availability Matrix (PAM) provides information about SAP software releases: release types, maintenance durations, planned availability, etc. It does not provide the details required for selecting EC2 instances.

Option A is incorrect because SAP Community Network (SCN) blogs are not an official source of information for certified EC2 instances.

Option B is incorrect because AWS blogs are not the official source of information for certified EC2 instances.

Option C is CORRECT because the ‘SAP Certified and Supported SAP HANA Hardware Directory’ page is a valid source of information.

Option D is CORRECT because  SAP Note ‘1656099 – SAP Applications on AWS: Supported DB/OS’ is a valid source of information.

Option E is incorrect because the SAP Product Availability Matrix (PAM) does not provide the required details for selecting EC2 instances. 

References: https://docs.aws.amazon.com/sap/latest/general/ec2-instance-types-sap.html, Certified and Supported SAP HANA® Hardware Directory

Domain : Implementation of SAP workloads on AWS

Question 22: A US-based financial company is planning to deploy its SAP S/4 HANA workloads on the AWS cloud. The HANA database will be launched on an EC2 instance with SUSE Linux as the operating system. For the /hana/data and /hana/log directories, they will be using EBS volumes. The company's SAP solution architect wants to understand encryption for EBS volumes.
Which of the following statements are TRUE for encrypted EBS volume in the AWS cloud? (Select THREE)

A. Data at rest inside the volume is encrypted
B. All data moving between the volume and S3 storage is encrypted
C. All data moving between the volume and the instance is encrypted
D. All snapshots created from the encrypted volume are encrypted
E. All data moving between the volume and EFS storage is encrypted

Correct Answers: A, C and D

Explanation:

In an encrypted EBS volume, the data at rest inside the volume is encrypted using an encryption key. The only data in transit that is encrypted is the data moving between the volume and the instance it is attached to. The snapshots of an encrypted volume are also encrypted.

Data moving from EBS volume to S3 or any EFS storage is not encrypted unless we use data-in-transit encryption such as TLS or SSL.
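
For illustration, encryption is simply requested when the volume is created; the Availability Zone, size, IOPS, and KMS key alias below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Encrypted io2 volume, e.g. for /hana/data (all values are placeholders).
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=1024,                      # GiB
    VolumeType="io2",
    Iops=10000,
    Encrypted=True,
    KmsKeyId="alias/hana-ebs-key",  # placeholder CMK alias
)
```

Any snapshot taken from this volume, and any volume restored from such a snapshot, is encrypted as well.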

Option A is CORRECT because data inside the volume is encrypted.

Option B is incorrect because data moving between the volume and S3 is not encrypted by default.

Option C is CORRECT because data moving between the volume and the instance is encrypted.

Option D is CORRECT because an encrypted volume’s snapshots are also encrypted.

Option E is incorrect because data moving between the volume and EFS storage is not encrypted by default.

References: Security and Compliance – SAP NetWeaver on AWS, Amazon EBS volumes – Amazon Elastic Compute Cloud

Domain:  Design of SAP workloads on AWS

Main Topic : Design highly resilient solutions for SAP workloads on AWS

Question 23: You are an SAP architect with a US-based pharma company that is running their SAP workloads on AWS in the us-east-1 (N. Virginia) region. The company is planning to set up a disaster recovery (DR) solution in the us-west-2 (Oregon) region. The company's management has agreed that they can tolerate an RPO and RTO measured in hours but is looking for a low operational cost DR solution.
Which of the following disaster recovery architectures do you propose for minimum operational cost?

A. Set up a Passive DR by backing up the data to S3. Use S3 cross-region replication (CRR) to replicate the backups to the us-west-2 (Oregon) region. Create and replicate the AMIs of the application servers and databases to the us-west-2 (Oregon) region. Build the DR environment using the AMIs and backups in case of a switchover.
B. Set up a Passive DR by backing up the data to S3. Use S3 cross-region replication (CRR) to replicate the backups to the us-west-2 (Oregon) region. Create and replicate the AMIs of the application servers and databases to the us-west-2 (Oregon) region. Build the SAP systems on DR using the AMIs and backups. Switch over to the DR region in case of a disaster.
C. Set up a Pilot light DR by ensuring that all the systems running in us-east-1 (N. Virginia) are also built in the us-west-2 (Oregon) region. Shut down the application servers and keep the database in standby mode in the DR region. Replicate the data using a native database high availability/disaster recovery solution. Start the database and application servers in case of a switchover.
D. Set up a Pilot light DR by ensuring that all the systems running in us-east-1 (N. Virginia) are also built in the us-west-2 (Oregon) region. Ensure that the application servers and database are running in the DR region. Replicate the data using a native database high availability/disaster recovery solution. Switch over to the DR region in case of a disaster.

Answer: A

Explanation: In Passive DR, no instances are started on the DR side unless there is a takeover requirement. In Pilot light DR, a ‘smaller secondary’ (EC2 instance) is created and running on the DR side, and during the takeover, it is resized to match the primary instances. From the database perspective in Pilot light DR, the database is in standby mode accepting and applying logs from the Primary side, whereas for Passive DR the backup is restored and recovered from the S3 bucket only during the takeover.
From a cost perspective, Passive DR is the correct choice, as the operational cost to maintain it is lower than that of Pilot light DR. In terms of Recovery Time Objective (RTO), Pilot light DR is a better option than Passive DR.
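
As a small illustration of the passive DR preparation in option A, an AMI can be copied to the DR region with a single call; the AMI ID and name are placeholders.

```python
import boto3

# copy_image is called in the *destination* (DR) region.
ec2_dr = boto3.client("ec2", region_name="us-west-2")

ec2_dr.copy_image(
    Name="sap-appserver-dr-copy",           # placeholder
    SourceImageId="ami-0123456789abcdef0",  # AMI in the primary region
    SourceRegion="us-east-1",
)
```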

Option A is CORRECT because Passive DR is the correct choice. All the instances need to be started only when there is a takeover requirement.

Option B is incorrect because this option is not Passive DR if we run the instances on the DR side without any takeover requirement.

Option C is incorrect because a Pilot light DR is less cost efficient than a Passive DR.

Option D is incorrect because a Pilot light DR is less cost efficient than a Passive DR.

Reference:
Passive Disaster Recovery for SAP applications using AWS Backup and AWS Backint Agent

High Availability and Disaster Recovery – SAP HANA on AWS

SAP HANA sizing considerations for secondary instance with reduced memory footprint

Domain: Design of SAP workloads on AWS

Main Topic : Design highly resilient solutions for SAP workloads on AWS

Question 23: A company is running its SAP S/4 HANA landscape on AWS. The company is building a disaster recovery (DR) solution in another region and is looking for a way to replicate the saptrans and sapglobal files to the DR region. Currently, these files are hosted on Amazon Elastic File System (EFS) on the primary side.

Which of the following AWS services can help the company meet this requirement? (Select TWO)

A. AWS Backup

B. AWS Backint Agent

C. AWS DataSync

D. AWS Cross Region Replication

E. AWS Elastic File System (EFS) snapshots

Answer: A & C

Explanation: AWS Backup and AWS DataSync are the correct choices as they both support replicating Amazon EFS file systems from one region to another.
AWS Backint Agent is used for HANA database backups on AWS. Cross Region Replication (CRR) is a feature of Amazon S3. And there is no such thing as Elastic File System (EFS) snapshots; snapshots are taken of Elastic Block Store (EBS) volumes. 
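
As an illustration of the DataSync route, here is a minimal boto3 sketch that pairs two EFS locations and starts a replication task between them. All ARNs are placeholders, and the cross-region specifics (which region the locations and task must live in) should be checked against the DataSync documentation:

```python
import boto3

# The task itself lives in one region; locations reference each
# file system by its full ARN.
datasync = boto3.client("datasync", region_name="us-east-1")

src = datasync.create_location_efs(
    EfsFilesystemArn="arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-src",
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-aaa",
        "SecurityGroupArns": ["arn:aws:ec2:us-east-1:111122223333:security-group/sg-aaa"],
    },
)

dst = datasync.create_location_efs(
    EfsFilesystemArn="arn:aws:elasticfilesystem:us-west-2:111122223333:file-system/fs-dst",
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:us-west-2:111122223333:subnet/subnet-bbb",
        "SecurityGroupArns": ["arn:aws:ec2:us-west-2:111122223333:security-group/sg-bbb"],
    },
)

task = datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="replicate-saptrans-to-dr",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```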

Option A is CORRECT because AWS Backup supports the backup of EFS and replication of the backups to the DR region.

Option B is incorrect because AWS Backint Agent is used for HANA database backups.

Option C is CORRECT because AWS DataSync supports replication of EFS across AWS regions.

Option D is incorrect because Cross Region Replication is a feature of S3. 

Option E is incorrect because it is not possible to take snapshots of an Elastic File System (EFS).

Reference:
SAP on AWS: Build for availability and reliability

Domain:  Implementation of SAP workloads on AWS

Main Topic : Deploy databases for SAP workloads on AWS

Question 24: A US-based financial company has been running their SAP workloads on AWS and have become familiar with the AWS environment. They want to implement a new SAP BOBJ BI (BusinessObjects Business Intelligence) system on the AWS cloud and are looking for an AWS native database service. 

Which of the following database solutions can be used by the company considering SAP-certified OS/DB combinations on AWS? 

A. Amazon Aurora
B. Amazon DynamoDB
C. Amazon RDS for SQL Server
D. Amazon Neptune

Answer: C

Explanation: Here the understanding of SAP Note 1656099 – ‘SAP Applications on AWS: Supported DB/OS and AWS EC2 products’ is important. SAP Notes and SAP Product Availability Matrix (PAM) are the single source of truth when we need to find out the supported OS & DB combinations for SAP workloads on AWS. For BOBJ BI (BusinessObjects Business Intelligence) Amazon RDS for SQL Server is the only valid option. No other Amazon databases are supported.
Amazon Aurora is only supported for SAP Hybris installations. Amazon DynamoDB and Amazon Neptune are not supported databases for any SAP products. 
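
For illustration, here is a minimal boto3 sketch of provisioning an RDS for SQL Server instance. The identifier, instance class, storage size, and credentials are placeholder assumptions; a real BOBJ deployment should follow SAP sizing guidance and keep credentials in AWS Secrets Manager:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# License-included SQL Server Enterprise instance to serve as the
# BOBJ BI repository database; all values below are placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="bobj-repo-db",
    Engine="sqlserver-ee",
    LicenseModel="license-included",
    DBInstanceClass="db.m5.xlarge",
    AllocatedStorage=200,              # GiB
    MasterUsername="admin",
    MasterUserPassword="use-secrets-manager-instead",
    MultiAZ=True,                      # high availability across two AZs
)
```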

Option A is incorrect because Amazon Aurora is not supported for BOBJ BI.

Option B is incorrect because Amazon DynamoDB is not supported for any SAP product.

Option C is CORRECT because Amazon RDS for SQL Server is a supported database for BOBJ BI.

Option D is incorrect because Amazon Neptune is not supported for any SAP product.

Reference:

Architecture Options – SAP BusinessObjects on AWS

https://launchpad.support.sap.com/#/notes/1656099

Domain:  Implementation of SAP workloads on AWS

Main Topic : Configure high availability for SAP workloads

Question 25: A US-based customer has deployed their SAP S/4 HANA production system in a High Availability (HA) setup in the AWS us-east-1 region. The VPC CIDR block is 10.0.0.0/16. The SAP ABAP central services instance (ASCS) is deployed in availability zone us-east-1a in a private subnet with CIDR block 10.0.1.0/24, and the SAP Enqueue Replication Server (ERS) instance is deployed in availability zone us-east-1b in a private subnet with CIDR block 10.0.2.0/24.
The customer is using AWS Network Load Balancer (NLB) attached to an Overlay IP 10.0.3.3 to ensure the high availability of SAP Central Services. During the testing of this setup, the customer found that the switchover is not happening.

What should be corrected to ensure that the high availability setup is working?

A. Place the ASCS and ERS instances in the same availability zone us-east-1a, in the same private subnet. This will ensure the communication between ASCS and ERS instances is working.

B. Place the IP addresses of ASCS and ERS instances in the target group that is attached to the Network Load balancer. 

C. Change the overlay IP address 10.0.3.3 to an IP address that does not overlap with the CIDR range of VPC.

D. Change the overlay IP address 10.0.3.3 to a public IP address as private IP addresses are not supported for load balancing. 

Answer: C
Explanation: Here the understanding of the concept of Overlay IP (OIP) is important. An overlay IP cannot be within the CIDR block of the VPC where high availability is set up. The overlay IP address 10.0.3.3 is part of the VPC CIDR 10.0.0.0/16, and that is why the switchover is not working. Also, an overlay IP should be a private IP address as defined in RFC 1918.

Overlay IPs are set as the target in the target group attached to the Network Load Balancer (NLB). 
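
As a minimal boto3 sketch of that setup (the target group name, VPC ID, port, and the overlay IP 192.168.1.10 are placeholder assumptions), the overlay IP is registered as an ip-type target with AvailabilityZone set to "all", which is what permits an address outside the VPC CIDR to be targeted:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# An ip-type target group is required so the overlay IP (which lies
# outside the VPC CIDR) can be registered as a target.
tg = elbv2.create_target_group(
    Name="sap-ascs-oip",
    Protocol="TCP",
    Port=3600,                          # example ASCS message server port
    VpcId="vpc-0123456789abcdef0",      # placeholder VPC
    TargetType="ip",
    HealthCheckProtocol="TCP",
)

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroups"][0]["TargetGroupArn"],
    # AvailabilityZone "all" is required for IP targets outside the VPC CIDR.
    Targets=[{"Id": "192.168.1.10", "Port": 3600, "AvailabilityZone": "all"}],
)
```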

Option A is incorrect because placing the ASCS and ERS instances in the same availability zone and subnet creates a single point of failure from an availability perspective.

Option B is incorrect because the overlay IP address is added to the target group attached to the Network Load Balancer, not the ASCS and ERS IP addresses individually.

Option C is CORRECT because the overlay IP address must not be part of the VPC CIDR block.

Option D is incorrect because an overlay IP address cannot be a public IP address. 

Reference:

SAP on AWS High Availability with Overlay IP Address Routing – SAP HANA on AWS

SAP NetWeaver HA on AWS Configuration Guide for SLES and RHEL

RFC 1918: Address Allocation for Private Internets

Domain : Migration of SAP workloads to AWS

Question 26: A company is running their SAP workloads on-premise. They are planning to migrate their SAP BW NetWeaver 7.5 landscape, which runs on an Oracle database on AIX, to the AWS cloud. The customer does not plan to change the underlying database for the SAP BW environment.
What are the steps and best practices the company needs to perform in order to migrate the SAP BW landscape to AWS? (Select THREE)

A. Migrate the SAP BW non-production systems first to ensure that migration is successful and there are no issues with running BW workloads on AWS
B. Migrate all SAP BW systems together to ensure that downtime required is less and the migration project timeline can be shortened 
C. Perform a DB migration using Software Provisioning Manager (SWPM) to change the database to HANA as Oracle is not supported on AWS for SAP
D. Perform an OS migration using Software Provisioning Manager (SWPM) to change the operating system to Oracle Linux as AIX is not supported on AWS for SAP
E. Perform an OS migration using Software Provisioning Manager (SWPM) to change the operating system to SUSE Linux Enterprise Server (SLES) as AIX is not supported on AWS for SAP
F. Ensure to generate a migration key from the SAP Support Portal for the migration using Software Provisioning Manager (SWPM)
G. Ensure to generate a Hardware key from the SAP Support Portal for the migration using Software Provisioning Manager (SWPM)

Correct Answers: A, D and F

Explanation

It is important to have an understanding of the AWS Well-Architected Framework for SAP as well as the supported OS/DB combinations for SAP on AWS. AWS recommends always migrating a non-production environment first to ensure that there are no issues in running the workloads on the AWS cloud. It also helps to streamline the project timeline and migration tasks, and to recognize any additional issues to expect during the production migration.
For SAP on Oracle workloads on AWS, only Oracle Linux is the supported operating system. 

The customer also needs a migration key when performing a heterogeneous migration to AWS. 

A hardware key is not generated from the SAP Support Portal; rather, an SAP license is generated using the hardware key, which can be found on the host where the message server is running.

Option A is CORRECT because AWS Well-Architected Framework recommends moving non-production workloads first.

Option B is incorrect because there is no requirement to reduce the project timeline, and migrating all systems together contradicts the best practice of moving non-production systems first. 

Option C is incorrect because Oracle is a supported database on AWS, so no database migration is required.

Option D is CORRECT because Oracle Linux is the only supported Operating system for running Oracle on AWS for SAP.

Option E is incorrect because SUSE Linux Enterprise Server (SLES) is not a supported operating system for running Oracle on AWS for SAP.

Option F is CORRECT because a migration key is required for performing OS/DB migrations.

Option G is incorrect because a hardware key is not generated from the SAP Support Portal. 

References:

https://docs.aws.amazon.com/wellarchitected/latest/sap-lens/best-practice-2-4.html

https://launchpad.support.sap.com/#/notes/1656099

Domain : Design of SAP workloads on AWS

Question 27 : A company is running its SAP S/4 HANA production system on the AWS cloud. The SAP database, ABAP SAP Central Services (ASCS), and Primary Application Server (PAS) EC2 instances are all in the same private subnet. After an OS hardening activity over a weekend, the SAP engineers get the error message "Database is not available via R3trans – Database must be started first" when they try to start the SAP application.
When they log in to the database EC2 instance, they notice that the database is already running. Considering that the HANA database instance number is 00, what should the SAP engineers do to troubleshoot this issue? 

A. Restart the database and the database EC2 instance again. Try starting the SAP application once the database restart has finished
B. Ensure that the security group of the database EC2 instances allows communication from the application server. Check if the port range 30015 – 39915 is allowed from the private IP address of the application server
C. Ensure that the security group of the database EC2 instances allows communication from the application server. Check if the port range 3600 – 3699 is allowed from the private IP address of the application server
D. Ensure that the Network Access Control lists (NACLs) of the private subnet allow communication from the application server

Correct Answer: B

Explanation: 

SAP application servers use the HANA database client to connect to the HANA database server, so it is important to understand the ports the HANA server requires for client connections. The port range 30015 – 39915 covers the SQL ports for instance numbers 00 through 99; for instance number 00 the relevant port is 30015. As the question mentions that an OS hardening activity was carried out, we should ensure that the security groups still allow this communication. 

Port range 3600-3699 is used by SAPGUI to connect to SAP applications. Checking the Network Access Control List (NACL) is not required since all the EC2 instances are on the same subnet. Restarting the database and the database EC2 instance will not solve the issue.
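
As a minimal boto3 sketch of the fix (the security group ID and the application server's private IP 10.0.1.10 are placeholder assumptions), this adds an ingress rule on the database instance's security group for the HANA SQL port of instance 00:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow the application server to reach the HANA SQL port.
# For instance number 00 the indexserver SQL port is 30015.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",    # placeholder DB security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 30015,
            "ToPort": 30015,
            "IpRanges": [
                {
                    "CidrIp": "10.0.1.10/32",  # placeholder app server IP
                    "Description": "SAP app server to HANA SQL port",
                }
            ],
        }
    ],
)
```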

Option A is incorrect because restarting the database and the database EC2 instance will not solve the issue.

Option B is CORRECT because port range 30015-39915 needs to be maintained in the security group to allow communication from the SAP application server. 

Option C is incorrect because port range 3600-3699 is an incorrect choice.

Option D is incorrect because checking Network Access Control lists (NACLs) is not required since all the EC2 instances are in the same subnet.

References:

https://docs.aws.amazon.com/quickstart/latest/sap-hana/app-c.html

Security groups in AWS Launch Wizard for SAP

Domain : Design of SAP workloads on AWS

Question 28 : A US-based banking and insurance company is running their SAP workloads on-premise. They have been maintaining their SAP backup data for the last 15 years in their on-premise datacenter for regulatory compliance requirements. The company needs to access this backup data only once or twice a year, during the March or December month-end. The company is looking for a low-cost, durable solution to store these backups on the AWS cloud.
Which of the following solutions can help meet the company’s requirements with minimum cost? 

A. Use Amazon S3 Standard-Infrequent Access to store the SAP backups
B. Use Amazon S3 Glacier Instant Retrieval to store the SAP backups
C. Use Amazon S3 Glacier Deep Archive to store the SAP backups
D. Use Amazon S3 Standard to store the SAP backups

Correct Answer: C

Explanation: 

Here the understanding of various Amazon S3 Storage Classes is important. 

Out of all the available options, Amazon S3 Glacier Deep Archive is the lowest-cost option. 

Also, it meets the requirement of data retrieval only once or twice a year. S3 Glacier Deep Archive is most suitable for customers who know in advance when they will need the data and can place the retrieval request beforehand, since retrievals take hours rather than milliseconds.
Amazon S3 Standard-Infrequent Access is for data that is accessed less frequently, but requires rapid access when needed. Amazon S3 Glacier Instant Retrieval is an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds. Amazon S3 Standard is the costliest option of the Amazon S3 storage family. 
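
For reference, here is a minimal boto3 sketch of a lifecycle rule that moves backup objects into Glacier Deep Archive. The bucket name and prefix are placeholder assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects under the backups/ prefix straight to Glacier
# Deep Archive (0 days = at the next lifecycle run).
s3.put_bucket_lifecycle_configuration(
    Bucket="sap-compliance-backups",   # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-sap-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [{"Days": 0, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)
```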

Option A is incorrect because S3 Standard-Infrequent Access costs more than Deep Archive, and its rapid-access capability is not needed here.

Option B is incorrect because S3 Glacier Instant Retrieval costs more than Deep Archive, and millisecond retrieval is not needed here.

Option C is CORRECT because S3 Glacier Deep Archive is the lowest-cost storage class and matches the access pattern.

Option D is incorrect because S3 Standard is the costliest option of the family.

References: https://aws.amazon.com/s3/storage-classes/, https://docs.aws.amazon.com/wellarchitected/latest/sap-lens/best-practice-19-6.html 

Domain:  Implementation of SAP workloads on AWS

Main Topic : Configure high availability for SAP workloads

Question 29: You are an SAP solution architect with a global multinational company running their SAP workloads on AWS. The company has deployed an SAP S/4 HANA 2020 system with embedded Fiori in availability zone AZ1. The SAP system has multiple application servers and uses a Web Dispatcher for HTTP/HTTPS load balancing. The company wants to build a highly available Active/Active solution with an availability of 99.9%. 

Which of the following solutions will meet the company’s requirements? (Select TWO)

A. Replicate the setup in another availability zone AZ2. Enable enqueue replication and HANA system replication to the system in AZ2. Use a cluster software for ABAP central services and HANA database. 

B. Add the Web dispatcher instances in a target group. Use Application load balancer (ALB) host-based routing for high availability between the two Web dispatchers.

C. Replicate the setup in another availability zone AZ2. Copy and restore the Amazon Elastic Block Store (EBS) snapshots from AZ1 to AZ2 periodically. 

D. Add the Web dispatcher instances in a target group. Use Application load balancer (ALB) IP-based routing for high availability between the two Web dispatchers.

E. Add the Web dispatcher instances in a target group. Use a Network load balancer (NLB) for high availability between the two Web dispatchers.

Answer: A and B

Explanation: An active/active solution means that both the primary and secondary instances are running. Compared with an active/passive high availability solution, an active/active solution is more costly. 

Here we need to understand that high availability is required for four components – the ABAP central services (ASCS), the database, the application servers and the Web Dispatcher.
The ABAP central services (ASCS) are covered by the Enqueue Replication Server (ERS), and the database by HANA System Replication (HSR). 

The application servers should be deployed across availability zones to provide 99.9% availability. 

Web Dispatcher availability is handled by Application Load Balancer (ALB) host-based routing, which directs HTTP/HTTPS traffic to healthy Web Dispatchers across the AZs.

Using cluster software ensures automatic switching between the high availability components.
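
As a minimal boto3 sketch of host-based routing (the listener and target group ARNs, rule priority, and hostname are placeholder assumptions), an ALB listener rule forwards requests for the Fiori hostname to the Web Dispatcher target group:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Forward requests for the Fiori hostname to the Web Dispatcher
# target group; the ALB sends traffic only to healthy targets.
elbv2.create_rule(
    ListenerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "listener/app/sap-alb/0123456789abcdef/0123456789abcdef"  # placeholder
    ),
    Priority=10,
    Conditions=[
        {
            "Field": "host-header",
            "HostHeaderConfig": {"Values": ["fiori.example.com"]},  # placeholder
        }
    ],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": (
                "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                "targetgroup/sap-wd/0123456789abcdef"  # placeholder
            ),
        }
    ],
)
```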
Option A is CORRECT because this setup provides high availability for the ABAP central services (ASCS), the database, and the application servers.

Option B is CORRECT because Application Load Balancer (ALB) host-based routing provides high availability for the Web Dispatchers.

Option C is incorrect because copying and restoring EBS snapshots does not provide true high availability, as there will always be some gap between primary and secondary data, affecting the recovery point objective (RPO).

Option D is incorrect because ALB host-based routing is needed to route based on the health of the Web Dispatcher EC2 instances; IP-based routing only addresses their reachability. 

Option E is incorrect because the Network Load Balancer works at OSI layer 4 and cannot be used for HTTP/HTTPS routing. 

Reference:

SAP NetWeaver Guides

SAP on AWS High Availability Setup – SAP HANA on AWS

Outdated Domain Questions

Domain:  Implementation of SAP workloads on AWS

Main Topic : Configure the disaster recovery setup for SAP workloads

Question 30: A US-based customer is running their SAP workloads on AWS. They are looking to build a disaster recovery (DR) solution for their SAP environment based on block-level replication technology, which can provide sub-second Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs) of minutes. 

Which of the following AWS services can help meet these requirements?

A. S3 Cross-Region Replication (CRR) 

B. CloudEndure Migration

C. Server Migration Service

D. HANA system replication

Answer: B

Explanation: CloudEndure Migration is the correct choice. CloudEndure installs the CloudEndure Agent on the source systems and performs block-level replication to the target environment. The Recovery Point Objectives provided by CloudEndure are sub-second and the Recovery Time Objectives (RTOs) are in minutes.

S3 Cross-Region Replication (CRR) is not a valid option because it only copies data from S3 buckets in one region to another; the DR environment would still need to be built, increasing the RTO.

Server Migration Service is not a valid option either, because the requirement is to replicate the data, not to move the virtual machines. 

HANA system replication is not an AWS service; it is a HANA database native solution and does not provide block-level replication. 

Option A is incorrect because S3 Cross-Region Replication (CRR) cannot provide an RTO in minutes.

Option B is CORRECT because CloudEndure Migration is the only option that supports block-level replication and can achieve the required RPO and RTO.

Option C is incorrect because AWS Server Migration Service (SMS) does not perform block-level replication.

Option D is incorrect because HANA system replication is not an AWS service and does not perform block-level replication.

Reference:

SAP Disaster Recovery Solution Using CloudEndure: Part 1 Failover | AWS for SAP

Automating SAP migrations using CloudEndure Migration | AWS for SAP

Domain:  Implementation of SAP workloads on AWS

Main Topic : Deploy databases for SAP workloads on AWS.

Question 31: A customer is planning to migrate their 10 TB production HANA enterprise edition database from on-premise to the AWS cloud. They are looking for an option to license the HANA database on AWS.

Which of the following statements is TRUE?

A. The customer can use an existing license by using AWS ‘Bring your own license (BYOL)’ model.

B. The customer can purchase the license for the HANA database from AWS Marketplace.

C. The customer does not need a license for the HANA database since it is included in the on-demand EC2 instances pricing for SAP. 

D. The customer can use the Trial and developer license from CAL (Cloud Appliance Library).

Answer: A

Explanation: Here ‘Bring your own license’ or BYOL is the correct choice because the customer is already running the HANA database on-premise with a valid license and can use the same license to run the database on AWS.
AWS Marketplace doesn’t provide a license for the enterprise edition HANA database, only for the express edition; an enterprise edition license has to be procured from SAP.
On-demand is not a license option but an EC2 pricing model; the licenses are not included in the on-demand pricing. On-demand can be combined with BYOL to run the systems productively on AWS. Trial and developer licenses from SAP CAL (Cloud Appliance Library) are not suited for production workloads and are only meant for software trials.

Option A is CORRECT because there is no need to purchase a new license. The existing license can be used through the BYOL option. 

Option B is incorrect because AWS marketplace doesn’t offer a license for the SAP HANA enterprise edition.

Option C is incorrect because On-demand is an EC2 pricing choice and the licenses are not included in the on-demand pricing.

Option D is incorrect because CAL cannot be used for production environments.

Reference:

AWS | SAP HANA

SAP on AWS Overview – General SAP Guides

Domain : Design of SAP workloads on AWS

Question  32: A US-based retail company is running its SAP workloads on AWS. They are using multiple VPCs that are communicating with each other using the VPC peering method. After a recent merger and acquisition, the company expects its accounts and VPCs to grow as more SAP systems will be on-boarded on the AWS cloud. The company is looking for an AWS-managed solution that works on a hub and spoke model to ensure communication between all the VPCs across the company’s accounts.
Which of the following solutions can help meet the company’s requirements? 

A. Use an AWS Virtual Private Gateway to connect the company’s VPCs. The Virtual Private Gateway can be shared through AWS Resource Access Manager (RAM) across the company’s AWS accounts
B. Use an AWS Transit Gateway to connect the company’s VPCs. The Transit Gateway can be shared through AWS Resource Access Manager (RAM) across the company’s AWS accounts
C. Use an AWS Virtual Private Gateway to connect the company’s VPCs. The Virtual Private Gateway can be shared through AWS Control Tower across the company’s AWS accounts
D. Use an AWS Transit Gateway to connect the company’s VPCs. The Transit Gateway can be shared through AWS Control Tower across the company’s AWS accounts

Correct Answer: B

Explanation: 

The correct choice here is to use a Transit Gateway. AWS Transit Gateway connects the Amazon Virtual Private Clouds (VPCs) or on-premises networks through a central hub. AWS Resource Access Manager (RAM) can be used to share the resources across accounts. In this case, an AWS Resource Access Manager (RAM) can be used to share the transit gateway with other AWS accounts, thus facilitating VPC communication between accounts as well. 

AWS Virtual Private Gateway (VPG) is the VPN endpoint on the Amazon side of a Site-to-Site VPN connection. It cannot be used for connecting multiple VPCs. 

AWS Control Tower is used to automate the process of setting up a new baseline multi-account AWS environment that is secure, well-architected, and ready to use.
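
As a minimal boto3 sketch of the sharing step (the resource share name, transit gateway ARN, and spoke account ID are placeholder assumptions), RAM shares the transit gateway with another account:

```python
import boto3

ram = boto3.client("ram", region_name="us-east-1")

# Share the transit gateway with a spoke account so its VPCs can
# attach to the hub; all identifiers below are placeholders.
ram.create_resource_share(
    name="shared-sap-tgw",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111122223333:transit-gateway/tgw-0123456789abcdef0"
    ],
    principals=["444455556666"],      # spoke account ID
    allowExternalPrincipals=False,    # keep sharing inside the AWS Organization
)
```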

Option A is incorrect because Virtual Private Gateway cannot connect multiple VPCs.

Option B is CORRECT because AWS Transit Gateway can connect multiple VPCs. AWS RAM can share the transit gateway across accounts.

Option C is incorrect because Virtual Private Gateway cannot connect multiple VPCs.

Option D is incorrect because AWS Control Tower cannot share an account’s resources with another.

References:

https://aws.amazon.com/transit-gateway/faqs/

https://aws.amazon.com/ram/faqs/

https://docs.aws.amazon.com/sap/latest/sap-hana/sap-oip-configuration-steps-for-aws-transit-gateway.html

Domain : Design of SAP workloads on AWS

Question 33 : A customer is running their SAP workloads on AWS. Their SAP landscape includes SAP S/4 HANA, SAP Adobe Document Services (ADS), and an SAP Solution Manager system. The ADS and SAP Solution Manager systems are running on Oracle databases.
They are looking for a solution that can fulfill their disaster recovery (DR) needs without the administrative overhead of using multiple solutions for data replication.
Which of the following solutions meets the customer’s requirements?  

A. Use CloudEndure disaster recovery for data replication 
B. Use HANA System Replication (HSR) for data replication
C. Use Oracle DataGuard for data replication
D. Use AWS DataSync for data replication

Correct Answer:  A

Explanation: 

CloudEndure Disaster Recovery is the correct choice here because it provides replication at the block level. CloudEndure Disaster Recovery can be used for protecting critical databases, including Oracle, HANA, MySQL, and Microsoft SQL Server, as well as enterprise applications such as SAP.

HANA System Replication (HSR) only provides data replication for HANA databases.
Similarly, Oracle DataGuard provides data replication only for Oracle databases.
AWS DataSync is a data transfer service that moves and replicates data between on-premises storage systems and AWS storage services over the internet or AWS Direct Connect. It is not used for database replication. 

Option A is CORRECT because CloudEndure Disaster Recovery provides block-level replication. It is independent of database type and is used at the storage level.

Option B is incorrect because HANA System Replication (HSR) for data replication will only support HANA databases.

Option C is incorrect because Oracle DataGuard for data replication will only support Oracle databases.

Option D is incorrect because AWS DataSync is a file transfer service, not a disaster recovery or database replication solution.

References: https://aws.amazon.com/blogs/awsforsap/sap-disaster-recovery-solution-using-cloudendure-part-1-failover/, https://docs.cloudendure.com/#Home.htm%3FTocPath%3DNavigation%7C_____1 

Domain: Design of SAP workloads on AWS

Main Topic: Define optimized and cost-effective infrastructure solutions for SAP workloads on AWS

Question 34: A global retail company is looking to migrate their SAP S/4 HANA landscape to AWS. The landscape includes Sandbox (SBX), Development (DEV), Quality Assurance (QAS) and Production (PRD) systems. 

The SBX, DEV and QAS systems need to be available for 8 hrs per day, only on weekdays. The PRD system is expected to run 24*7. It is also expected that the size of the PRD system may increase after 8 months of usage as the company’s business grows.

Which of the following would be the most cost-efficient purchase option for the EC2 instances running the SAP systems?

A. Use On-demand billing for SBX, DEV, QAS systems and dedicated hosts for the PRD system. 

B. Use On-demand billing for SBX, DEV, QAS and Standard Reserved Instance for the PRD system. 

C. Use On-demand billing for SBX, DEV, QAS and Convertible Reserved Instance for the PRD system. 

D. Use On-demand billing for SBX, DEV, QAS and Compute Savings Plans for the PRD system. 

E. Use Spot instances for SBX, DEV, QAS and Standard Reserved Instance for the PRD system.

Answer: C

Explanation: Here the understanding of the AWS EC2 Pricing model is important. 

On-demand is based on the ‘pay for what you use’ model, while reserved instances and savings plans require a 1- or 3-year commitment and provide a significant discount compared to the on-demand price. Dedicated hosts are used when the customer doesn’t want to share the underlying physical server. Compute Savings Plans also provide a significant discount on on-demand prices, with the usage commitment measured in $/hour.
The SBX, DEV & QAS systems have fixed availability hours (8 hours on weekdays only), so on-demand instances should be used for them. For the PRD system, since the availability is 24*7, the most cost-effective option is a reserved instance. Also, since the size of the PRD system is expected to change after 8 months, a Standard Reserved Instance won’t meet the requirements; the customer must therefore go for a Convertible Reserved Instance, as the rough comparison below illustrates.
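
As a back-of-the-envelope illustration (the hourly rates below are purely hypothetical placeholders, not AWS prices), the arithmetic behind this choice looks like this:

```python
# Hypothetical hourly rates -- substitute real prices from the
# AWS pricing pages for your instance type and region.
on_demand_rate = 1.00          # $/hour, hypothetical
ri_effective_rate = 0.60       # $/hour, hypothetical convertible RI rate

weekday_hours = 8 * 5 * 52     # SBX/DEV/QAS: 8 h/day, weekdays only
full_year_hours = 24 * 365     # PRD: runs 24*7

# Non-production: pay only for the hours the systems actually run.
nonprod_on_demand = weekday_hours * on_demand_rate

# Production: an RI is billed for every hour, so it only pays off 24*7.
prd_on_demand = full_year_hours * on_demand_rate
prd_reserved = full_year_hours * ri_effective_rate

print(f"Non-prod on-demand/year:  ${nonprod_on_demand:,.0f}")
print(f"PRD on-demand/year:       ${prd_on_demand:,.0f}")
print(f"PRD convertible RI/year:  ${prd_reserved:,.0f}")
```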

Option A is incorrect because dedicated hosts are physical servers with EC2 instance capacity fully dedicated to the customer’s use, which is not a requirement in the question.

Option B is incorrect because although a Standard Reserved Instance provides the best saving, it does not allow the EC2 instance to be resized later.

Option C is CORRECT because a Convertible Reserved Instance provides savings and also the option to resize the EC2 instance later.

Option D is incorrect because Compute Savings Plans, like Standard Reserved Instances, provide savings but do not provide the option to resize later.

Option E is incorrect because neither Spot instances nor Standard Reserved Instances meet the respective requirements. 

Reference: 

SAP on AWS Pricing Fundamentals – General SAP Guides

Best Practice 18.1 – Understand the payment and commitment options available for Amazon EC2 – SAP Lens

Domain: Design of SAP workloads on AWS

Main Topic: Define optimized and cost-effective infrastructure solutions for SAP workloads on AWS

Question 35: A European automobile company has deployed their 5 TB production HANA database on a single u-6tb1.metal server. On the upcoming Sunday, they plan to perform patching of their HANA database. The SAP Basis team has approval for a downtime of 6 hours. 

The SAP Basis team is looking for a disk-based backup solution so that in case there are issues with patching, they can restore the database quickly. 

Which of the following storage solutions can help the Basis team meet their requirements with minimum cost? 

A. Elastic Block Store (EBS) General Purpose SSD (gp2)
B. AWS Simple Storage Service (S3)
C. Elastic File System (EFS)
D. Elastic Block Store (EBS) Throughput Optimized HDD (st1)

Answer:  D

Explanation: Since the requirement is for a disk-based backup solution, there are only two valid options – EBS (gp2) and EBS (st1) – of which EBS (st1) is the correct answer because it is a low-cost HDD option well suited to disk-based backup requirements. 

EBS (gp2) is a performance-oriented SSD volume that is better suited to database persistence, where its comparatively higher performance matters; it also costs more than st1. 

Elastic File System (EFS) is a file share and S3 is object-based storage, neither of which meets the disk-based requirement in the question. 
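
For reference, here is a minimal boto3 sketch of provisioning an st1 volume as the backup target (the region, Availability Zone, and size are placeholder assumptions; the size would be dictated by the 5 TB backup set):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Low-cost throughput-optimized HDD volume for the disk-based backup.
backup_volume = ec2.create_volume(
    AvailabilityZone="eu-central-1a",  # placeholder AZ
    Size=6144,                         # GiB, placeholder size
    VolumeType="st1",
)
print(backup_volume["VolumeId"])
```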

Option A is incorrect because Elastic Block Store (EBS) General Purpose SSD (gp2) is the costlier option.

Option B is incorrect because S3 is object-based storage and provides higher latency compared to disk-based solutions.

Option C is incorrect because EFS is a file share intended for use as shared storage, not as a backup disk.

Option D is CORRECT because, of all the available options, EBS (st1) fits the requirement as a low-cost, disk-based solution. 

Reference: 

Storage Configuration for SAP HANA

Amazon EBS volume types – Amazon Elastic Compute Cloud

Domain : Design of SAP workloads on AWS

Question 36 : A European OTT platform company is planning to deploy a 24 TB SAP HANA database as a highly available system on the AWS cloud. The primary and secondary SAP HANA databases run in separate private subnets in different Availability Zones within an AWS Region. 
The company is looking for a database native solution for high availability. Which of the following options will provide the lowest possible recovery time objective (RTO)? 

A. Use SAP HANA system replication in synchronous mode with the preload option for data replication between primary and secondary. Use a smaller EC2 instance for the secondary database than the primary
B. Use SAP HANA system replication in a synchronous mode without the preload option for data replication between primary and secondary. Use a smaller EC2 instance for the secondary database than the primary
C. Use SAP HANA system replication in synchronous mode with the preload option for data replication between primary and secondary. Use the same sized EC2 instance for the secondary database as the primary
D. Use SAP HANA system replication in a synchronous mode without the preload option for data replication between primary and secondary. Use a same-sized EC2 instance for the secondary database as primary

Correct Answer: C

Explanation: 

Here we have two requirements. First, we need a database native solution, which is SAP HANA system replication; this is provided in all options. Second, we have to choose the option with the lowest RTO. HANA system replication with the preload option enabled provides the lowest RTO, provided that the primary and secondary EC2 instances are sized equally.

With SAP HANA system replication without the preload option enabled, the tables are loaded into memory only during the takeover, so the database instance needs more time to become operational. This increases the RTO.

Also, a smaller secondary instance during failover needs to be resized to the same size as the primary instance. This also increases the overall RTO.

Option A is incorrect because a smaller EC2 instance for the secondary database will increase the RTO.

Option B is incorrect because SAP HANA system replication in a synchronous mode without the preload option does not provide the lowest possible RTO. Also, a smaller secondary instance increases the RTO further. 

Option C is CORRECT because SAP HANA system replication in synchronous mode with the preload option and same-sized primary and secondary instances provides the lowest possible RTO.

Option D is incorrect because SAP HANA system replication in a synchronous mode without the preload option does not provide the lowest possible RTO.

Reference: https://d1.awsstatic.com/enterprise-marketing/SAP/sap-hana-on-aws-high-availability-disaster-recovery-guide.pdf

Domain :  Implementation of SAP workloads on AWS

Question 37 : A customer is deploying an SAP S/4 HANA landscape on the AWS cloud. The landscape consists of development (DEV), quality (QAS) and production (PRD) systems. The SAP applications are running on the Windows Server 2016 operating system. The DEV and QAS systems are located in a single AWS account and the PRD system is in a different AWS account. 
The customer is looking for a storage solution for the \usr\sap\trans directory that is scalable and highly available. 
Which of the following AWS storage services meets the customer’s requirement? 

A. Amazon Elastic File System (Amazon EFS)
B. Amazon Elastic Block Store (Amazon EBS) 
C. Amazon S3
D. Amazon FSx

Correct Answer: D

Explanation: 

Amazon EFS and Amazon FSx can both be used as a shared file system for the \usr\sap\trans directory. Both are highly available, scalable, and AWS managed. However, EFS is only supported for Linux-based operating systems; for Windows, Amazon FSx is the recommended option.

Elastic Block Store (EBS) is block-level storage and is not recommended for file shares. Amazon S3 is object-based storage and is also not recommended for use as a file share in the SAP context.
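
As a minimal boto3 sketch (subnet IDs, security group, capacities, and the Active Directory ID are placeholder assumptions), a Multi-AZ FSx for Windows file system for the shared directory could be provisioned like this:

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Multi-AZ FSx for Windows file system to host the \usr\sap\trans share;
# all identifiers and capacities below are placeholders.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=64,                    # GiB
    StorageType="SSD",
    SubnetIds=["subnet-aaa", "subnet-bbb"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",    # highly available across two AZs
        "PreferredSubnetId": "subnet-aaa",
        "ThroughputCapacity": 32,          # MB/s
        "ActiveDirectoryId": "d-1234567890",  # placeholder managed AD
    },
)
```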

Option A is incorrect because Amazon Elastic File System (Amazon EFS) is only supported for Linux-based operating systems.

Option B is incorrect because Amazon Elastic Block Store (Amazon EBS) is block-level storage and cannot be used as a file share.

Option C is incorrect because Amazon S3 is object-based storage and cannot be used as a file share.

Option D is CORRECT because Amazon FSx is the correct choice and recommended storage type for shared file systems on Windows. 

References: Windows on AWS | AWS for SAP, How to setup SAP Netweaver on Windows MSCS for SAP ASCS/ERS on AWS using Amazon FSx  

Domain : Migration of SAP workloads to AWS

Question 38 : A European manufacturing company wants to migrate one of their production SAP HANA databases from their on-premise environment to the AWS cloud. The company’s on-premise datacenter is connected to the AWS cloud via an AWS Direct Connect connection. Due to tight project deadlines, the company’s SAP solution architect would like to set up the target HANA database on a single EC2 instance in the shortest amount of time possible.
Which of the following solutions can meet the requirement? (Select TWO)

A. Use AWS CloudFormation script to provision the HANA database on the AWS cloud
B. Use AWS Quick Start to provision the HANA database on the AWS cloud
C. Use SAP Cloud Appliance Library (CAL) to provision the HANA database on AWS cloud
D. Use AWS Launch Wizard to provision the HANA database on the AWS cloud
E. Use Amazon S3 Transfer Acceleration to provision the HANA database on the AWS cloud

Correct Answers: B and D

Explanation: 

AWS Quick Start and AWS Launch Wizard are two ways in which we can automate the deployment of the SAP HANA database on the AWS cloud. Thus these two options will take the shortest amount of time possible. 

SAP Cloud Appliance Library (CAL) is used for test and demo systems; the instance types available with CAL are not sufficient to run a production workload. An AWS CloudFormation script will take time to build and, moreover, will only provision the infrastructure without installing the HANA database. Amazon S3 Transfer Acceleration does not provision infrastructure at all. 

Option A is incorrect because AWS CloudFormation scripts take time to develop and therefore are not the fastest option.

Option B is CORRECT because AWS Quick Start is the correct choice as SAP HANA is a supported deployment option.

Option C is incorrect because SAP Cloud Appliance Library (CAL) cannot provision production size instances.

Option D is CORRECT because AWS Launch Wizard is the correct choice as SAP HANA is a supported deployment option.

Option E is incorrect because Amazon S3 Transfer Acceleration cannot be used for provisioning EC2 instances. 

References:

Supported deployments and features of AWS Launch Wizard

Migration Tools and Methodologies – SAP HANA on AWS

Summary

Hopefully this blog has given you a clear idea of which topics to focus on for the AWS Certified SAP on AWS Specialty exam. This set of SAP on AWS Specialty Exam (PAS-C01) practice questions should prove helpful in preparing to pass the real exam.

The AWS Certified SAP on AWS exam is suggested mainly for professionals who have worked with SAP services, which makes it quite difficult to find reliable and authentic learning resources. We at Whizlabs provide PAS-C01 exam training resources such as video courses, practice tests, and hands-on labs & AWS sandboxes for real-time experimentation to help you pass the AWS Certified SAP on AWS – Specialty Exam (PAS-C01). 

If you have any further thoughts on these PAS-C01 exam questions, feel free to leave a comment!

About Pavan Gumaste

Pavan Rao is a programmer / developer by profession and a cloud computing professional by choice, with in-depth knowledge of AWS, Azure, and Google Cloud Platform. He helps organisations figure out what to build, ensures successful delivery, and incorporates user learning to improve the strategy and product further.
