AWS Security Specialty exam questions

Free Questions on AWS Certified Security Specialty Certification Exam

This blog post provides AWS Certified Security Specialty exam questions and guides you through AWS security concepts such as designing and implementing security solutions on AWS. A growing number of organizations are running AWS-based applications, and the associated security risks are growing in parallel. This is where AWS security specialists come into the picture.

To help you learn more about this role and the AWS Certified Security Specialty certification exam, the next section provides free questions along with detailed answers. They will help you understand the certification and accelerate your preparation for the SCS-C02 exam.

Also, note that the exam syllabus covers questions from the following domains:

  • Domain 1: Incident Response 
  • Domain 2: Logging and Monitoring 
  • Domain 3: Infrastructure Security 
  • Domain 4: Identity and Access Management 
  • Domain 5: Data Protection 

Let’s get started!

Domain : Data Protection

Q1 : The cloud monitoring team is using AWS Config to perform security checks. One Config rule is to check if S3 buckets are encrypted using KMS. After the rule was executed, several S3 buckets were found to be non-compliant because they were not encrypted. To fix the non-compliance of these buckets, you have enabled the Default Encryption to be KMS using AWS Managed Key aws/s3. Your manager asked you how to manage the key rotation for this key. How should you answer this question?

A. You can enable or disable the automatic key rotation in the AWS console or CLI. The key rotation frequency is 1 year
B. AWS manages the key rotation, and the user cannot disable it. The key is rotated every 1 year
C. You can enable or disable the automatic key rotation in the AWS console or CLI. The key rotation frequency can also be configured as 1 month, 1 year or 3 years
D. The key rotation is managed by AWS. The key is automatically rotated every three years

Correct Answer: D

Explanation

AWS rotates the key material for the managed KMS keys:

  • for AWS managed KMS keys, the rotation is every three years;
  • for Customer managed KMS keys, the rotation is every year.

The KMS key in this question is “AWS Managed Key aws/s3”

Option A is incorrect because users cannot disable the key rotation for AWS-managed keys. Instead, users can configure the key rotation for customer-managed keys.
Option B is incorrect because AWS-managed keys are rotated every 3 years, not every year.
Option C is incorrect because users cannot configure the key rotation for AWS-managed keys.
Option D is CORRECT because users cannot manage the rotation of AWS-managed keys, and the key is automatically rotated every 3 years (1095 days).

For more information on AWS KMS key rotation, kindly refer to the URL provided below: https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
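As a quick illustration of what you can and cannot control (the key ID below is a placeholder), rotation can only be queried and managed for customer managed keys:

# Check whether automatic rotation is enabled for a key (customer managed keys only)
aws kms get-key-rotation-status --key-id 1234abcd-12ab-34cd-56ef-1234567890ab

# Enabling or disabling rotation is only possible for customer managed keys;
# KMS rejects this call when it is run against an AWS managed key such as aws/s3
aws kms enable-key-rotation --key-id 1234abcd-12ab-34cd-56ef-1234567890ab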

Domain : Data Protection

Q2 : You have a Cron job that will run on the EC2 instance. The job calls a bash script that will encrypt a file whose size is about 5kb. You prefer that the encryption is performed through a Customer Master Key (CMK) in KMS. So, you have created a CMK for this task. The script uses AWS CLI to do the encryption. How do you encrypt the file using the CMK in the bash script?

A. Use “aws kms encrypt” to encrypt the file. No envelope encryption is required in this case
B. Use “aws kms generate-data-key” to generate a data key, then use the plain text data key to encrypt the file
C. Use “aws kms generate-data-key” to generate a data key, then use the encrypted data key to encrypt the file
D. Envelope encryption is required in this case. Use “aws kms encrypt” to generate a data key, then use the plain text data key to encrypt the file

Correct Answer: B

Explanation

Option A is incorrect because “aws kms encrypt” cannot meet the requirement: it encrypts at most 4 KB of data directly, and the file is about 5 KB. So envelope encryption must be used.
Option B is CORRECT because the file is larger than 4 KB, so envelope encryption must be used, and in envelope encryption the plaintext data key is used to encrypt the data.
Option C is incorrect because, for envelope encryption, the plaintext data key (not the encrypted data key) is used to encrypt the data.
Option D is incorrect because the “aws kms encrypt” command is not used to generate the data key.

For more information on KMS, kindly refer to the URL provided below: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/kms/encrypt.html
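As a minimal sketch of the envelope-encryption flow inside the bash script (the key ID and file names are placeholders, and jq is assumed to be installed):

# One call returns both the plaintext data key and its encrypted copy
aws kms generate-data-key \
    --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
    --key-spec AES_256 \
    --output json > data-key.json

# Keep the encrypted copy so the file can later be decrypted via "aws kms decrypt"
jq -r '.CiphertextBlob' data-key.json | base64 --decode > data-key.encrypted

# Use the plaintext data key to encrypt the ~5 KB file locally; for simplicity the
# base64 key material is passed to OpenSSL as a passphrase here
jq -r '.Plaintext' data-key.json > data-key.b64
openssl enc -aes-256-cbc -salt -in source-file.txt -out source-file.txt.enc -pass file:./data-key.b64

# Remove the plaintext key material once encryption is done
rm -f data-key.b64 data-key.json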

Domain : Data Protection

Q3 : As a DevOps engineer, you need to maintain Jenkins pipelines. Recently, you have created a new pipeline for a migration project. In one stage, you encrypted a file with the below command.
aws kms encrypt \
    --key-id 1234abcd-fa85-46b5-56ef-1234567890ab \
    --plaintext fileb://ExamplePlaintextFile \
    --output text \
    --query CiphertextBlob | base64 \
    --decode > ExampleEncryptedFile
The symmetric CMK key was used in the encryption operation. Then in another stage, the encrypted file needs to be decrypted with “aws kms decrypt”. In terms of the decryption command, which statement is correct?

A. The CMK key ID is needed for “aws kms decrypt”
B. The CMK key ARN is needed for “aws kms decrypt”
C. The encrypted data key is needed for “aws kms decrypt”
D. There is no need to add the CMK to decrypt in the command

Correct Answer: D

Explanation

The parameter –key-id is required only when the ciphertext was encrypted under an asymmetric KMS key. If you use the symmetric KMS key, KMS can get the KMS key from metadata that it adds to the symmetric ciphertext blob.

The decryption can be done by the following CLI:

aws kms decrypt \
    --ciphertext-blob fileb://ExampleEncryptedFile \
    --output text \
    --query Plaintext | base64 --decode > ExamplePlaintextFile

Options A and B are incorrect: no encryption key information is required when decrypting with symmetric CMKs.
Option C is incorrect because no encrypted data key is involved here; the file was encrypted directly under the CMK, so only the ciphertext blob is needed for decryption.
Option D is CORRECT: The –key-id parameter is not required when decrypting with symmetric CMKs. AWS KMS can get the CMK that was used to encrypt the data from the metadata in the ciphertext blob.

Check the below links for how to use KMS encrypt/decrypt.

KMS encrypt: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/kms/encrypt.html
KMS decrypt: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/kms/decrypt.html#examples

 

Domain : Data Protection

Q4 : You have a Jenkins server deployed in EC2. One Jenkins pipeline is used to build artifacts. It needs to fetch some source files from an S3 bucket which is encrypted with a Customer Master Key (CMK) in KMS. The pipeline was working fine. However, it suddenly stopped working early this week. You have found that the Jenkins task failed to decrypt the S3 data using the CMK. Which one may be the cause of the failure?

A. The secret access key and access key token have expired for the Jenkins EC2 IAM role
B. The key policy of the CMK was added with a ViaService condition for EC2 service
C. The key policy of the CMK was recently modified with the addition of a deny for the IAM role that Jenkins EC2 is using
D. An SCP policy was added in the Organization which allows kms:encryption operation for EC2 resources

Correct Answer: C

Explanation

Users should check whether the IAM role is allowed to use the CMK in both the IAM policy and the key policy: there should be an allow in at least one of them, and there should not be an explicit deny in either.

Option A is incorrect because an IAM role does not use long-term access keys; it provides temporary credentials that are refreshed automatically.
Option B is incorrect because the ViaService condition for EC2 service would allow the key usage for EC2 instances so that it cannot be the cause of the issue.
Option C is CORRECT: An explicit deny will disallow the key to be used by the Jenkins server. That may be the cause of the failure.
Option D is incorrect because a service control policy (SCP) that allows a KMS operation cannot cause this failure; only a missing allow or an explicit deny could.
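For illustration, an explicit deny of this kind in the CMK’s key policy would break the Jenkins decryption (the role name and account ID below are hypothetical):

{
    "Sid": "DenyJenkinsRoleExample",
    "Effect": "Deny",
    "Principal": { "AWS": "arn:aws:iam::111122223333:role/JenkinsEC2Role" },
    "Action": "kms:Decrypt",
    "Resource": "*"
}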

 

Domain : Identity and Access Management

Q5 : In your AWS account A, there is an S3 bucket that contains artifacts that need to be fetched by an IAM user in another AWS account B. The S3 bucket has the below bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AccountB:user/AccountBUserName"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::AccountABucketName/*"
            ]
        }
    ]
}
However, the IAM user in account B still cannot get objects in the S3 bucket. Which one may cause the failure?

A. The IAM user in account B does not have IAM permission to get an object in the particular S3 bucket
B. The Resource in bucket policy should include “arn:aws:s3:::AccountABucketName”
C. The Action in bucket policy should add the action of “s3:GetObjectACL”
D. The Principal in bucket policy should add a cross-account IAM role assumed by the IAM user in account B

Correct Answer: A

Explanation

To grant cross-account S3 access, there are several approaches: granting permissions directly to an IAM user, assuming a cross-account IAM role, or using bucket ACLs.

For this specific case, it uses an IAM user. The IAM user must be granted the S3 permissions through an IAM policy. And the bucket owner must also grant permissions to the IAM user through a bucket policy. For details, please check the first resolution in the provided link below.

Option A is CORRECT because access requires two things: a bucket policy in account A that allows the account B IAM user, and an IAM policy in account B that allows the user to access the S3 bucket (the latter is missing here).
Option B is incorrect because the resource “arn:aws:s3:::AccountABucketName” is unnecessary and would not provide the necessary permission.
Option C is incorrect because “s3:GetObjectACL” is not required. This case only needs “s3:GetObject”.
Option D is incorrect because there is no mention of an IAM user in account B to assume a cross-account role in this particular scenario. Instead, as long as the user is given an “s3:GetObject” permission to the S3 bucket, it should be able to get objects in the bucket.

For more information on cross-account S3 bucket object access, do refer to the URL provided below https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-s3/
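As a sketch of the missing piece, account B could attach an inline policy like the following to the IAM user (the user and policy names are placeholders; the bucket name comes from the question):

# Attach an inline policy in account B granting the user read access to the bucket
aws iam put-user-policy \
    --user-name AccountBUserName \
    --policy-name cross-account-s3-read \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::AccountABucketName/*"
      }]
    }'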

 

Domain : Logging and Monitoring

Q6 : Based on AWS security audit guidelines, which of the following is NOT a best practice for carrying out a security audit?

A. Conduct an audit once EC2 autoscaling happens
B. Conduct an audit if you’ve added or removed software in your accounts
C. Conduct an audit if you ever suspect that an unauthorized person might have accessed your account
D. When there are changes in your organization

Correct Answer: A

Explanation

Option A is CORRECT because it doesn’t meet the AWS security audit guidelines.
Options B, C and D are incorrect as they are the recommended best practices by AWS.

According to the AWS documentation, you should audit your security configuration in the following situations:

  1. On a periodic basis.
  2. If there are changes in your organization, such as people leaving.
  3. If you have stopped using one or more individual AWS services. This is important for removing permissions that users in your account no longer need.
  4. If you’ve added or removed software in your accounts, such as applications on Amazon EC2 instances, AWS OpsWorks stacks, AWS CloudFormation templates, etc.
  5. If you ever suspect that an unauthorized person might have accessed your account.

For more information on Security Audit guidelines, please visit the below URL: https://docs.aws.amazon.com/general/latest/gr/aws-security-audit-guide.html

 

Domain : Infrastructure Security

Q7 : A company is planning to create private connections from on-premises infrastructure to the AWS Cloud. They need a solution that provides traffic encryption and keeps latency to a minimum. Which of the following would help fulfill this requirement?

A. AWS VPN
B. AWS VPC Peering
C. AWS NAT gateways
D. AWS Direct Connect

Correct Answers: A and D

Explanation

The AWS documentation mentions the following, which supports the above requirements.

With AWS Direct Connect + VPN, you can combine AWS Direct Connect dedicated network connections with the Amazon VPC VPN. An AWS Direct Connect public VIF establishes a dedicated network connection between your network and public AWS resources, such as an Amazon virtual private gateway IPsec endpoint. Please note that you need to use a public VIF for VPN encryption.

The architecture diagram (AWS Direct Connect + VPN with a public VIF) and further details can be found at the following link: https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-vpn.html

Option B is incorrect because VPC peering is only used for connection between VPCs and cannot be used to connect On-premises infrastructure to the AWS Cloud.
Option C is incorrect because NAT gateways are used to connect instances in a private subnet to the Internet.

For more information on VPN Connections, please visit the following URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpn-connections.html

 

Domain : Identity and Access Management

Q8 : Which technique can be used to integrate AWS IAM (Identity and Access Management) with an on-premises LDAP (Lightweight Directory Access Protocol) directory service for single sign-on access to AWS console?

A. Use an IAM policy that references the LDAP account identifiers and the AWS credentials
B. Use SAML (Security Assertion Markup Language) to enable single sign-on between AWS and LDAP
C. Use AWS Security Token Service (AWS STS) to issue long-lived AWS credentials
D. Use IAM roles to rotate the IAM credentials when LDAP credentials are updated automatically

Correct Answer: B

Explanation

You can use SAML to provide your users with federated single sign-on (SSO) to the AWS Management Console or federated access to call AWS API operations.

You can find the details in the following link: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html

Essentially, on the AWS side, you need to create an Identity Provider (IdP) that represents on-prem LDAP, then create a SAML role that trusts the IdP.

Options A, C, and D are all incorrect because none of these options enables single sign-on between the on-premises LDAP directory and AWS.
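As a rough sketch of that setup (the provider name, role name, and file paths are placeholders):

# Register the on-premises IdP's SAML metadata document with IAM
aws iam create-saml-provider \
    --name OnPremLDAP \
    --saml-metadata-document file://idp-metadata.xml

# Create a role whose trust policy allows federation through that SAML provider
aws iam create-role \
    --role-name LDAPFederatedConsoleAccess \
    --assume-role-policy-document file://saml-trust-policy.json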

 

Domain : Identity and Access Management

Q9 : In order to meet data residency compliance requirements for a large bank, you must ensure that all S3 buckets are created in the eu-west-2 region. You plan to use an SCP to enforce this rule. Which SCP will accomplish this?

A. {
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DataGovernancePolicy1",
      "Effect": "Deny",
      "Action": [
        "s3:CreateBucket"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ],
      "Condition": {
        "StringNotEquals": {
          "s3:LocationConstraint": "eu-west-2"
        }
      }
    },
    {
      "Sid": "DataGovernancePolicy2",
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ],
      "Condition": {
        "StringEquals": {
          "s3:LocationConstraint": "eu-west-2"
        }
      }
    }
  ]
}
—————————————————————————-

B. {
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DataGovernancePolicy1",
      "Effect": "Deny",
      "Action": [
        "s3:CreateBucket"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ],
      "Condition": {
        "StringLike": {
          "s3:LocationConstraint": "eu-west-2"
        }
      }
    },
    {
      "Sid": "DataGovernancePolicy2",
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ],
      "Condition": {
        "StringNotEquals": {
          "s3:LocationConstraint": "eu-west-2"
        }
      }
    }
  ]
}
—————————————————————————-

C. {
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DataGovernancePolicy",
      "Effect": "Deny",
      "Action": [
        "s3:CreateBucket"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ],
      "Condition": {
        "StringNotLike": {
          "s3:x-amz-region": "eu-west-2"
        }
      }
    }
  ]
}
—————————————————————————-

D. {
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DataGovernancePolicy",
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ],
      "Condition": {
        "StringLike": {
          "s3:x-amz-region": "eu-west-2"
        }
      }
    }
  ]
}
—————————————————————————-

Correct Answer: A

Explanation

Option A is CORRECT because the statement “DataGovernancePolicy1” denies creating all S3 buckets in non-eu-west-2 regions, and the statement “DataGovernancePolicy2” allows creating S3 buckets in region eu-west-2.
Option B is incorrect because it does the opposite: “DataGovernancePolicy1” denies creating buckets in eu-west-2, and “DataGovernancePolicy2” allows creating buckets in regions other than eu-west-2. Double-check the condition operators in the SCP.
Option C is incorrect because s3:x-amz-region is not a valid condition key.
Option D is incorrect because s3:x-amz-region is not a valid condition key.

References: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp.html, https://docs.aws.amazon.com/AmazonS3/latest/dev/amazon-s3-policy-keys.html

 

Domain : Data Protection

Q10 : A company is using S3 to store data in the cloud, and they want to ensure that all the data in the bucket is encrypted. Which options meet this requirement with the least overhead?

A. All S3 data is encrypted by default
B. Use AWS SSE-S3
C. Enable AWS-KMS encryption and specify aws/s3 (AWS KMS-managed CMK) as the key for the Client-Side Encryption
D. Use Custom AWS KMS customer master key (CMK)

Correct Answers: B and D

Explanation

Option A is incorrect because S3 data is not encrypted by default; you need to use AWS SSE-S3 or KMS for encryption.
Option B is CORRECT because S3 bucket objects can be encrypted using server-side encryption (SSE-S3) with AES-256.
Option C is incorrect because server-side encryption should be used instead of client-side encryption.
Option D is CORRECT because a custom AWS KMS customer master key (CMK) encrypts the S3 bucket objects and also lets the customer manage the key policy and key rotation, which satisfies the requirement.

References:
For more information on AWS S3 Encryption options, refer to the URL provided below https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html
For information on Custom AWS KMS Customer Master Key (CMK) and AWS Managed CMK, refer to the URL below: https://aws.amazon.com/premiumsupport/knowledge-center/s3-object-encrpytion-keys/
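For reference, default bucket encryption for either option can be configured with a single call (the bucket name and key ARN are placeholders):

# Option B: default encryption with SSE-S3 (AES-256)
aws s3api put-bucket-encryption \
    --bucket my-data-bucket \
    --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

# Option D: default encryption with a customer managed KMS key
aws s3api put-bucket-encryption \
    --bucket my-data-bucket \
    --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms","KMSMasterKeyID":"arn:aws:kms:eu-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"}}]}'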

 

Domain : Infrastructure Security

Q11 : You have a set of applications, databases and web servers hosted in AWS. The web servers are placed behind an ELB. There are separate security groups for the application, database and web servers. The security groups have been defined accordingly. There is an issue with the communication between the application and database servers. In order to troubleshoot the issue between just the application and database server, what is the ideal set of MINIMAL steps you would take?

A. Check the Inbound security rules for the database security group. Check the Outbound security rules for the application security group
B. Check the Outbound security rules for the database security group. Check the Inbound security rules for the application security group
C. Check both the Inbound and Outbound security rules for the database security group. Check the Inbound security rules for the application security group
D. Check both Outbound security rules for the database security group. Check both the Inbound and Outbound security rules for the application security group

Correct Answer: A

Explanation

In this case, the application server initiates the communication with the database server, so we need to check whether the traffic can go out from the application server; hence we check the outbound rules in the application server security group.

For the database server to accept traffic from the application server, it needs an inbound rule to allow it, hence we need to check the inbound rules in its security group.

AWS security groups are stateful, which means the return traffic for allowed inbound traffic is automatically allowed regardless of the outbound rules. In general, we just need to check the rules along the initial traffic direction. That’s why we don’t need to check the inbound rules in the application server security group or the outbound rules in the database server security group.

Option B is incorrect because it says that we need to check the outbound security group for the database, which is unnecessary. 
Option C is incorrect because you do not need to check for the Outbound security rules for the database security group.
Option D is incorrect because you do not need to check for Inbound security rules for the application security group.

For more information on Security Groups, please refer to the below URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html
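In practice, the two checks can be done from the CLI (the security group IDs are placeholders):

# Outbound (egress) rules of the application server's security group
aws ec2 describe-security-groups \
    --group-ids sg-0123456789abcdef0 \
    --query 'SecurityGroups[0].IpPermissionsEgress'

# Inbound (ingress) rules of the database server's security group
aws ec2 describe-security-groups \
    --group-ids sg-0fedcba9876543210 \
    --query 'SecurityGroups[0].IpPermissions'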

 

Domain : Data Protection

Q12 : Your company hosts critical data in an S3 bucket. There is a requirement to ensure that all data is encrypted. The metadata about the information stored in the bucket needs to be encrypted as well. Which of the below measures would you take to ensure that the metadata is encrypted?

A. Enable S3 Server-side encryption on the metadata of each object
B. Put the metadata as object in the S3 bucket and then enable S3 Server side encryption
C. Put the metadata in a DynamoDB table and ensure the table is encrypted during creation time
D. Put the metadata in the S3 bucket itself

Correct Answer: C

Explanation

S3 object metadata is not encrypted.

Option A is incorrect because that’s not possible: S3 does not encrypt object metadata. The metadata itself carries information such as the object’s encryption details, so S3 must be able to read it.
Option B is incorrect because when you store the metadata as an object in the S3 bucket, that object in turn has its own metadata, which is still not encrypted.
Option C is CORRECT because when S3 bucket objects are encrypted, their metadata is not encrypted. So the best option is to store the metadata in a DynamoDB table that is encrypted with AWS KMS at table creation time.
Option D is incorrect because S3 object metadata is not encrypted.

For more information on using KMS encryption for S3, please refer to the below URL: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
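A minimal sketch of creating such a table with KMS encryption at creation time (the table name, attribute, and key ARN are placeholders):

# Create a metadata table encrypted with a customer managed KMS key
aws dynamodb create-table \
    --table-name s3-object-metadata \
    --attribute-definitions AttributeName=ObjectKey,AttributeType=S \
    --key-schema AttributeName=ObjectKey,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST \
    --sse-specification Enabled=true,SSEType=KMS,KMSMasterKeyId=arn:aws:kms:eu-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab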

 

Domain : Incident Response

Q13 : Which of the below services can the AWS Web Application Firewall (WAF) service help to protect?

A. AWS CloudFront
B. AWS Lambda
C. AWS Application Load Balancer
D. AWS Classic Load Balancer

Correct Answers: A and C

Explanation

AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits and bots that may affect availability, compromise security, or consume excessive resources. 

Option A is CORRECT because AWS WAF can be deployed on Amazon CloudFront. As part of Amazon CloudFront, it can be part of your Content Distribution Network (CDN), protecting your resources and content at the Edge locations.
Option B is incorrect because AWS WAF doesn’t protect Lambda directly; it can protect Amazon API Gateway instead.
Option C is CORRECT because AWS WAF can be deployed on Application Load Balancer (ALB). As a part of the Application Load Balancer, it can protect your origin web servers running behind the ALBs.
Option D is incorrect because AWS WAF can protect Application Load Balancer but not a Classic load balancer.

For more information on the web application firewall, kindly refer to the below URL: https://aws.amazon.com/waf/
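As an illustration for the ALB case (both ARNs are placeholders), a regional WAFv2 web ACL is associated like this; for CloudFront, a CLOUDFRONT-scope web ACL is referenced in the distribution configuration instead:

aws wafv2 associate-web-acl \
    --web-acl-arn arn:aws:wafv2:eu-west-2:111122223333:regional/webacl/shop-web-acl/11111111-2222-3333-4444-555555555555 \
    --resource-arn arn:aws:elasticloadbalancing:eu-west-2:111122223333:loadbalancer/app/shop-alb/50dc6c495c0c9188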

 

Domain : Data Protection

Q14 : You are designing a connectivity solution between on-premises infrastructure and Amazon VPC. Your server’s on-premises will be communicating with your VPC instances. You will be establishing IPSec tunnels over the internet. You will be using VPN gateways and terminating the IPsec tunnels on AWS-supported customer gateways. Which of the following objectives would you achieve by implementing an IPSec tunnel as outlined above?

A. Fully End-to-end protection of data in transit
B. Fully End-to-end Identity authentication
C. Data encryption across the Internet
D. Protection of data in transit over the Internet
E. Peer identity authentication between VPN gateway and customer gateway
F. Data integrity protection across the Internet

Correct Answers: C, D, E and F

Explanation

First of all, “end-to-end” means from the on-premises application endpoint to the service endpoint in the AWS Cloud, not just between the VPN gateway and the customer gateway.

An AWS VPN connection refers to the connection between your VPC and your own on-premises network. Site-to-Site VPN supports Internet Protocol Security (IPsec) VPN connections.

The IPSec VPN tunnel is established between a virtual private gateway and a customer gateway. This means that it is NOT established between the actual application endpoints, so the traffic between the gateway endpoints and the server or service endpoints may not be encrypted.

Option C is correct. Data that is transmitted through the IPSec tunnel is encrypted.
Option D is correct as it protects data in transit over the internet.
Option E is correct. Peer identity authentication between VPN gateway and customer gateway is required for implementing VPN IPSec tunnel.
Option F is correct. The integrity of data transmitted over the internet is also possible via IPSec tunnel.
Options A and B are incorrect because IPSec between the gateways cannot fully guarantee end-to-end data protection or identity authentication.

For more information on IPSec, please visit the following URL: https://en.wikipedia.org/wiki/IPsec, https://docs.aws.amazon.com/vpc/latest/adminguide/Introduction.html

 

Domain : Data Protection

Q15 : An application is deployed as a docker container running on Amazon ECS. You use an Application Load Balancer to distribute the traffic to the ECS cluster. You want to terminate the SSL traffic in the ELB. How would you create and install the certificate for the Application Load Balancer?

A. Create the certificate in Amazon KMS and upload it to the ELB
B. Store the certificate and private key in the ELB through the ECS service
C. Use the OpenSSL command to generate a certificate, upload it to IAM and configure ELB to use the certificate
D. Request a certificate in ACM and configure the Application Load Balancer to use the certificate
E. Use Amazon Fargate as the container compute engine. It offers native TLS security in the Application Load Balancer

Correct Answers: C and D

Explanation

Essentially, this question asks for the methods to create and set up a certificate for the ALB.

When configuring a secure (HTTPS) listener on an ALB, there are three different ways to provide a certificate:

  • ACM certificate
  • IAM certificate
  • Import

Note: Use IAM as a certificate manager only when you support HTTPS connections in a region that is not supported by ACM.

Option A is incorrect because KMS is used for the storage and management of data encryption keys and would not assist in creating a certificate in ELB.
Option B is incorrect because the certificate of ELB is not configured through the ECS service.
Option C is CORRECT because you can use OpenSSL to generate certificates and upload the certificates to IAM/ACM for ELB.
Option D is CORRECT because AWS Certificate Manager (ACM) can be used for creating and managing public SSL/TLS certificates.
Option E is incorrect because Amazon Fargate does not provide support for such functionality.

References: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html, https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html
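A simplified version of the ACM route looks like this (the domain name and ARNs are placeholders):

# Request a public certificate from ACM with DNS validation
aws acm request-certificate \
    --domain-name app.example.com \
    --validation-method DNS

# Once the certificate is validated, reference it on the ALB's HTTPS listener
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:eu-west-2:111122223333:loadbalancer/app/ecs-alb/50dc6c495c0c9188 \
    --protocol HTTPS --port 443 \
    --certificates CertificateArn=arn:aws:acm:eu-west-2:111122223333:certificate/11111111-2222-3333-4444-555555555555 \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:eu-west-2:111122223333:targetgroup/ecs-targets/6d0ecf831eec9f09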

 

Domain : Incident Response

Q16 : You set up various AWS resources in your AWS account, including EC2, RDS MySQL, DynamoDB, etc. Your billing alarm has been triggered and the AWS cost is increasing abnormally. You also get a notification from AWS that your AWS account may be compromised. As an AWS administrator, you need to take action immediately. Which of the following actions are appropriate?

A. Change your AWS account root user password
B. Delete all AWS EC2 resources even if you are unsure if they are compromised or not
C. Rotate access keys if they were authorized and are still needed, otherwise delete them
D. Respond to any notifications you received from AWS Support
E. Enable AWS Shield Advanced to protect the EC2 resources from DDoS attacks

Correct Answers: A, C and D

Explanation

Option​ ​A ​is​ CORRECT because the AWS root account has complete access to AWS resources, services, and billing. Changing your AWS account root user password is necessary to protect your AWS account from being compromised.
Option​ ​B ​is​ ​incorrect because it is not suitable to delete all resources at this stage. Instead, you should delete any resources on your account you didn’t create, such as EC2, EBS, IAM resources, etc.
Option C is CORRECT because:

  • With AWS access keys, anyone can access your AWS resources using the AWS CLI and compromise your account.
  • By “authorized”, we mean the access keys were created by your company, not by a third party. Unauthorized access keys must be deleted immediately.
  • Rotating IAM access keys that are still in use is an AWS best practice. So rotate the access keys if they are still needed; otherwise, delete them.

Option​ ​D ​is​ CORRECT because you should sign in to the AWS Support Center, check the notification detail, and respond to it.
Option E is incorrect because protecting against DDoS attacks is not the urgent concern when the AWS account is compromised. The account may have been compromised through many other services, such as IAM or S3, not just EC2 instances.

Reference: https://aws.amazon.com/premiumsupport/knowledge-center/potential-account-compromise/
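The key-rotation part of option C maps to three CLI calls (the user name and key ID are placeholders):

# Disable a potentially compromised access key immediately
aws iam update-access-key --user-name build-user \
    --access-key-id AKIAIOSFODNN7EXAMPLE --status Inactive

# Create a replacement key if the user still needs programmatic access
aws iam create-access-key --user-name build-user

# Delete the old key once nothing depends on it
aws iam delete-access-key --user-name build-user \
    --access-key-id AKIAIOSFODNN7EXAMPLE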

 

Domain : Logging and Monitoring

Q17 : Your company has defined a set of S3 buckets in AWS. They need to monitor the S3 buckets and know the source IP address and the person who makes requests to the S3 buckets. How can this be achieved?

A. Enable VPC flow logs to know the source IP addresses
B. Monitor the S3 API calls by enabling S3 data event for all buckets in CloudTrail
C. Monitor the S3 API calls by using CloudWatch logging
D. Monitor the S3 API calls by S3 Inventory Configuration

Correct Answer: B

Explanation

The AWS Documentation mentions the following.

Amazon S3 is integrated with AWS CloudTrail. Once the S3 data event is enabled for the buckets, CloudTrail captures specific API calls made to Amazon S3 from your AWS account and delivers the log files to an Amazon S3 bucket that you specify. It captures API calls made from the Amazon S3 console or from the Amazon S3 API.

Using the information collected by CloudTrail, you can determine what request was made to Amazon S3, the source IP address from which the request was made, who made the request, when it was made, and so on.

Option A is incorrect because, with VPC flow logs, you can’t tell who made the API call.
Option C is incorrect because CloudWatch logging by itself cannot be used to get the source IP address of the calls to S3 buckets.
Option D is incorrect because S3 Inventory helps you understand your storage usage on S3, but not API calls.

For more information on CloudTrail logging, please refer to the below links: https://docs.aws.amazon.com/AmazonS3/latest/dev/cloudtrail-logging.html, https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html
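A sketch of enabling S3 data events on an existing trail (the trail name is a placeholder):

# "arn:aws:s3" with no bucket suffix selects objects in all S3 buckets in the account
aws cloudtrail put-event-selectors \
    --trail-name management-trail \
    --event-selectors '[{
      "ReadWriteType": "All",
      "IncludeManagementEvents": true,
      "DataResources": [{"Type": "AWS::S3::Object", "Values": ["arn:aws:s3"]}]
    }]'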

 

Domain : Logging and Monitoring

Q18 : You enable the VPC flow logs in one subnet. You use the ping command from your machine (203.0.113.12) to your EC2 instance (IP address is 172.31.16.140). The ping has failed, and you find below VPC flow logs:
2 123456789010 eni-1235b8ca123456789 203.0.113.12 172.31.16.140 0 0 1 4 336 1432917027 1432917142 ACCEPT OK
2 123456789010 eni-1235b8ca123456789 172.31.16.140 203.0.113.12 0 0 1 4 336 1432917094 1432917142 REJECT OK
Which of the following configurations may lead to this result?

A. The EC2 security group and the ACL in the EC2 subnet allow the inbound traffic. The security group denies the outbound traffic
B. The EC2 security group and the ACL in the EC2 subnet allow the inbound traffic. The ACL denies the outbound traffic
C. The ACL in the EC2 subnet denies the inbound traffic. The EC2 security group allows the inbound traffic
D. The EC2 security group denies the inbound traffic. The ACL in the EC2 subnet allows both inbound and outbound traffic

Correct​ ​Answer:​ B

Explanation

Please note that your computer (203.0.113.12) is not in the same VPC as the EC2 instance (172.31.16.140).

Option​ ​A ​is​ ​incorrect:​ Because the security group is stateful. As it allows the incoming traffic, the outbound traffic is automatically allowed.
Option​ ​B ​is​ CORRECT:​ This ensures that the inbound traffic is accepted. For the outbound traffic, as the ACL denies it, the message will be rejected. The configurations align with the flow logs.
Option​ ​C ​is​ ​incorrect:​ In this scenario, the incoming ping message is accepted. So the ACL rule for the inbound traffic should allow it.
Option​ ​D ​is​ ​incorrect:​ The inbound rule in the security group should allow the traffic since there is an ACCEPT for the incoming message.

Check https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs-records-examples.html#flow-log-example-security-groups for the flow log record examples.
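As a sketch of the fix, the subnet’s network ACL needs an outbound allow rule so the echo reply can leave (the NACL ID and rule number are placeholders):

# Allow outbound ICMP from the EC2 subnet back to the client 203.0.113.12
aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --rule-number 110 \
    --protocol 1 \
    --icmp-type-code Code=-1,Type=-1 \
    --cidr-block 203.0.113.12/32 \
    --egress \
    --rule-action allow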

 

Domain : Data Protection

Q19 : A financial services company located in London (UK) wants to ensure that the data stored in their test account S3 bucket is a copy of the data from the production account S3 bucket. Data sovereignty laws specify that the data must reside within the London region.
What steps implement the solution most effectively?

A. Configure S3 Bucket Lifecycle Policy
B. Configure S3 Bucket Versioning
C. Configure S3 Bucket Event Notification
D. Configure Cross-Region Replication
E. Configure Same-Region Replication
F. Configure an AWS Lambda function to replicate S3 objects

Correct Answers: B and E

Explanation

Option A is incorrect because S3 lifecycle policies allow you to automatically review objects within your S3 Buckets and have them moved to Glacier or have the objects deleted from S3. They are not responsible for data replication between AWS accounts.
Option B is CORRECT because S3 bucket versioning allows creating a version of objects or data stored in S3. This is one of the requirements of S3 replication. Please check the reference link.
Option C is incorrect because the Amazon S3 event notification feature enables you to receive notifications when certain events happen in your bucket. This does not provide a solution to replicate data from the production account to the test account.
Option D is incorrect because Cross-Region Replication copies S3 data from one AWS Region to another. Since the requirement is to keep the data in the London region, this option cannot be used.
Option E is CORRECT because S3 Same-Region Replication can be configured on an S3 bucket to replicate objects to another bucket in the same region automatically.
Option F is incorrect because implementing a Lambda function to replicate S3 objects to another bucket is not the optimal solution, as it requires creating and managing custom code. For very large objects, the Lambda timeout limit would also force the objects to be split, which adds significant work.

References: https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html, https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html#replication-requirements
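Putting options B and E together roughly looks like this (the bucket names and the replication role ARN are placeholders):

# Versioning must be enabled on both the source and the destination buckets
aws s3api put-bucket-versioning --bucket prod-data-bucket \
    --versioning-configuration Status=Enabled

# Same-Region Replication from the production bucket to the test account's bucket in eu-west-2
aws s3api put-bucket-replication --bucket prod-data-bucket \
    --replication-configuration '{
      "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
      "Rules": [{
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {},
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Destination": {"Bucket": "arn:aws:s3:::test-data-bucket"}
      }]
    }'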

 

Domain : Incident Response

Q20 : A company compliance policy mandates that all production account data must be stored across multiple geographically distant locations. 
In order to meet this requirement, they configured Amazon S3 Cross-Account Replication on their production account buckets. However, they find that S3 objects are not being replicated. 
What needs to be implemented to resolve this issue?

A. S3 buckets must enable server-side encryption with SSE-KMS
B. Bucket policy on the destination bucket must allow the source bucket owner to store the replicas
C. If the source bucket owner is not the object owner, the object owner must grant the bucket owner READ and READ_ACP permissions with the object access control list
D. S3 Bucket Lifecycle Policy must be configured
E. S3 Bucket Event Notifications must be configured

Correct Answers: B and C

Explanation

Option A is incorrect because server-side encryption with SSE-KMS is not a requirement. Please find the S3 replication requirements in the referenced links.
Option B is CORRECT because the source bucket owner must have permission to replicate objects on the destination S3 bucket for replication to succeed.
Option C is CORRECT because the source bucket owner must have access permissions to objects being replicated for replication to succeed.  It is possible that IAM users other than the S3 bucket owner have permission to put objects in the source bucket. In that scenario, the object owner must grant access permissions on the objects to the bucket owner.
Option D is incorrect because S3 Lifecycle policies allow you to automatically review objects within your S3 Buckets and have them moved to Glacier or have the objects deleted from S3, but they are not responsible for data replication in S3.
Option E is incorrect because the Amazon S3 event notification feature enables you to receive notifications when certain events happen in your bucket. This does not provide a solution to cross-region replication.

References: https://docs.aws.amazon.com/AmazonS3/latest/dev/replication-troubleshoot.html, https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html#replication-requirements, https://docs.aws.amazon.com/AmazonS3/latest/userguide/setting-repl-config-perm-overview.html
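For illustration, the destination-side permission from option B can look like the following bucket policy on the destination bucket (the account ID and bucket name are placeholders):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReplicationFromSourceAccount",
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
            "Action": [
                "s3:ReplicateObject",
                "s3:ReplicateDelete"
            ],
            "Resource": "arn:aws:s3:::destination-bucket/*"
        }
    ]
}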

 

Domain : Data Protection

Q21 : A company has a web application to distribute contents to their customers around the globe and wants to restrict access to contents that are intended for selected users. They’ve decided to use AWS CloudFront. Which of the following methods are suitable to achieve the requirement?

A. Create an Origin Access Identity
B. Create CloudFront signed URLs
C. Create IAM Resource policy granting CloudFront access
D. Create CloudFront signed cookies
E. Create CloudFront Service Endpoint

Correct Answers: B and D

Explanation

Option A is incorrect because an origin access identity (OAI) is used so that content in Amazon S3 can only be served through CloudFront. It controls CloudFront-to-origin access, not which end users can access the content.
Option B is CORRECT because CloudFront signed URLs allow you to control who can access your content.
Option C is incorrect because IAM resource policy is not required in this scenario.
Option D is CORRECT because CloudFront signed URLs and signed cookies provide the same basic functionality: they allow you to control who can access your content.
Option E is incorrect because CloudFront does not have a service endpoint.

Reference: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-overview.html
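As an example of option B, the AWS CLI can produce a signed URL directly (the distribution domain, key pair ID, and key file are placeholders):

# Sign a CloudFront URL that expires at the given date
aws cloudfront sign \
    --url https://d111111abcdef8.cloudfront.net/private/report.pdf \
    --key-pair-id K2JCJMDEHXQW5F \
    --private-key file://cloudfront-private-key.pem \
    --date-less-than 2025-12-31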

 

Domain : Logging and Monitoring

Q22 : You work in a large organization as an AWS engineer. You create a private Certificate Authority in ACM which is used by multiple teams. The certificates issued from the private CA are for different entities such as web servers, VPN users and internal API endpoints. You need to quickly manage these certificates and get the details including the ARN, subject name and expiration date. Which of the below options is the easiest one?

A. Create a shell script to use AWS CLI acm-pca list-certificates to get the required certificate information for this particular private CA
B. In the AWS ACM console, you can easily get the certificates’ details for each private Certificate Authority. Make sure the IAM user has the list-certificates permissions
C. Edit a Python script to use Boto3 to retrieve the certificate details including the subject name, expiration date, etc
D. Create an audit report to list all of the certificates that the private CA has issued or revoked. Download the JSON-formatted report from the S3 bucket

Correct​ ​Answer:​ D

Explanation

Users can create an audit report for a private CA. The report is saved in an S3 bucket and contains the required information. The reference is in
https://docs.aws.amazon.com/acm-pca/latest/userguide/PcaAuditReport.html.
Option​ ​A ​is​ ​incorrect:​ Because there is no list-certificates CLI for acm-pca. Check https://docs.aws.amazon.com/cli/latest/reference/acm-pca/index.html#cli-aws-acm-pca.
Option​ ​B ​is​ ​incorrect:​ Because there is no IAM permission for list-certificates. And you cannot easily get all the certificate details from the AWS console.
Option​ ​C ​is​ ​incorrect:​ This option may work. However, it is not as straightforward as option D. You have to maintain the Python script and use AWS SDK.
Option​ ​D ​is​ CORRECT:​ The audit report is the easiest way. The report contains the required details of CA issued or revoked certificates. Take the below as an example:

{
  "awsAccountId": "123456789012",
  "certificateArn": "arn:aws:acm-pca:region:account:certificate-authority/CA_ID/certificate/e8cbd2bedb122329f97706bcfec990f8",
  "serial": "e8:cb:d2:be:db:12:23:29:f9:77:06:bc:fe:c9:90:f8",
  "subject": "1.2.840.113549.1.9.1=#161173616c6573406578616d706c652e636f6d,CN=www.example1.com,OU=Sales,O=Example Company,L=Seattle,ST=Washington,C=US",
  "notBefore": "2018-02-26T18:39:57+0000",
  "notAfter": "2019-02-26T19:39:57+0000",
  "issuedAt": "2018-02-26T19:39:58+0000",
  "revokedAt": "2018-02-26T20:00:36+0000",
  "revocationReason": "KEY_COMPROMISE"
}
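The report itself is generated with a single call (the CA ARN and bucket name are placeholders):

aws acm-pca create-certificate-authority-audit-report \
    --certificate-authority-arn arn:aws:acm-pca:eu-west-2:111122223333:certificate-authority/11111111-2222-3333-4444-555555555555 \
    --s3-bucket-name my-ca-audit-reports \
    --audit-report-response-format JSON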

 

Domain : Incident Response

Q23 : A company is building up an online shopping platform. Recently, the application has encountered frequent DDoS attacks such as UDP reflection attacks and SYN floods. The customer experience is impacted, and the cost increases sharply when servers scale up. Which of the following actions can reduce the attack surface?

A. Enable AWS Shield for cost protection that allows users to request a refund of scaling related costs that result from a DDoS attack
B. Configure Amazon CloudFront to distribute traffic to the application. Ensure that only the Amazon CloudFront distribution can forward requests to the origin
C. Configure AWS Firewall Manager to centrally configure and manage AWS WAF rules across the AWS Organization. Create Firewall Manager policies using the AWS Organization master account
D. Collect VPC Flow Logs to identify network anomalies and DDoS attack vectors. Set up CloudWatch alarms based on the key operational CloudWatch Metrics such as CPUUtilization

Correct​ ​Answer: B

Explanation

The AWS best practices for DDoS resiliency can be found in the white paper https://d1.awsstatic.com/whitepapers/Security/DDoS_White_Paper.pdf

Option A is incorrect: Cost protection is a feature of AWS Shield Advanced, not AWS Shield Standard. With Shield Advanced, users can request a limited refund of scaling-related costs caused by a DDoS attack. Besides, this option is not an approach to reduce the attack surface.
Option​ ​B ​is​ CORRECT:​ This option improves the origin’s security as malicious users cannot bypass the Amazon CloudFront when accessing the web application. The attack surface is reduced.
Option​ ​C ​is​ ​incorrect:​ AWS Firewall Manager is a central management tool. This method does not reduce the attack surface.
Option​ ​D ​is​ ​incorrect:​ The methods in option D help to gain visibility into abnormal behaviors. However, the attack surface is not reduced.
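One common way to enforce “only CloudFront can reach the origin” is to allow inbound traffic only from the CloudFront origin-facing managed prefix list on the origin’s security group (the group and prefix list IDs are placeholders):

# Look up the CloudFront origin-facing managed prefix list in the origin's region
aws ec2 describe-managed-prefix-lists \
    --filters Name=prefix-list-name,Values=com.amazonaws.global.cloudfront.origin-facing

# Allow HTTPS to the origin only from that prefix list
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=tcp,FromPort=443,ToPort=443,PrefixListIds=[{PrefixListId=pl-12345678}]'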

 

Domain : Incident Response

Q24 : You are an AWS security specialist in a company. You manage multiple AWS accounts and hundreds of IAM users. You need to keep the AWS credentials (access key IDs and secret access keys) secure.
If certain access keys are exposed to the public or compromised, you should get a notification so that automatic actions can be taken. You need an alert system to keep monitoring the access keys.
Which of the following options can quickly achieve the requirements?

A. AWS provides a daily credential report to the security contact email of the AWS account.
B. In AWS Trusted Advisor, use the Exposed Access Keys check to identify leaked credentials, set up CloudWatch Event rule target to a Lambda function for remediation.
C. Create a Lambda function using the Exposed Access Keys blueprint to monitor the IAM credentials and notify an SNS topic.
D. Use an open-source tool to scan popular code repositories for access keys that have been exposed to the public. Configure an SQS queue to receive the security alerts.

Correct Answer: B

Explanation

Option A is incorrect because AWS does not send a daily credential report to the account’s security contact email.
Option B is CORRECT because the Exposed Access Keys check in AWS Trusted Advisor can identify potentially leaked or compromised access keys. Set up a CloudWatch Events rule to catch this check result and trigger a Lambda function for remediation.
Option​ ​C ​is​ ​incorrect because you do not need to maintain a Lambda function for this, and there is no Exposed Access Keys blueprint available.
Option​ ​D ​is​ ​incorrect because AWS SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications, but this would not provide alerts in case of access keys being exposed.

Reference: https://docs.aws.amazon.com/awssupport/latest/user/trusted-advisor-check-reference.html
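A sketch of the rule and target from option B (the rule name and Lambda ARN are placeholders; Trusted Advisor events are emitted in us-east-1):

# Match the Trusted Advisor "Exposed Access Keys" check when it flags a key
aws events put-rule \
    --name exposed-access-keys-alert \
    --event-pattern '{
      "source": ["aws.trustedadvisor"],
      "detail-type": ["Trusted Advisor Check Item Refresh Notification"],
      "detail": {"check-name": ["Exposed Access Keys"], "status": ["ERROR"]}
    }'

# Invoke a remediation Lambda function (the function also needs a resource policy
# that allows events.amazonaws.com to invoke it)
aws events put-targets --rule exposed-access-keys-alert \
    --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:111122223333:function:revoke-exposed-keys'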

 

Domain : Logging and Monitoring

Q25 : An AWS security specialist has set up and activated several AWS Config managed rules. The rules are used to evaluate whether the AWS resources comply with company security policies.
For example, Amazon EBS volumes should be encrypted with Customer Managed Keys (CMKs). If one Config rule becomes non-compliant, the team should get near-real-time notifications.
Which of the following options is the easiest and a cost-effective solution to set up the notifications?

A. Configure AWS Config rules using Lambda functions. Whenever config rules become non-compliant, the Lambda functions send notifications to an SNS topic
B. Integrate AWS Config rules with an Amazon Kinesis stream to perform real-time analysis and notifications
C. Set remediation action to Systems Manager Automation “AWS-PublishSNSNotification”, add an SNS topic as the target to provide notifications
D. Configure each AWS Config rule with a CloudWatch alarm. Trigger the alarm if the rule becomes non-compliant

Correct Answer: C

Explanation

Option A is incorrect because creating and maintaining custom Lambda functions for several Config rules just to send notifications is not the most efficient solution.
Option​ ​B ​is​ ​incorrect because CloudWatch Events can deliver a near real-time stream of system events that describe changes in AWS resources, including AWS Config rules, and using Kinesis would not be the most efficient and cost-effective solution to provide the notification.
Option​ ​C ​is​ CORRECT because users can configure remediation action with Systems Manager Automation “AWS-PublishSNSNotification” to send notifications through an SNS topic. AWS Systems Manager Automation provides predefined runbooks for Amazon Simple Notification Service.
Option​ ​D ​is​ ​incorrect because AWS Config rules do not integrate with CloudWatch alarms to notify configuration changes.

References: https://docs.aws.amazon.com/config/latest/developerguide/encrypted-volumes.html,  https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-aws-publishsnsnotification.html
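A sketch of attaching the runbook as a remediation action (the rule name, role ARN, and topic ARN are placeholders):

# Attach the AWS-PublishSNSNotification runbook as the automatic remediation for a Config rule
aws configservice put-remediation-configurations \
    --remediation-configurations '[{
      "ConfigRuleName": "encrypted-volumes",
      "TargetType": "SSM_DOCUMENT",
      "TargetId": "AWS-PublishSNSNotification",
      "Automatic": true,
      "MaximumAutomaticAttempts": 3,
      "RetryAttemptSeconds": 60,
      "Parameters": {
        "AutomationAssumeRole": {"StaticValue": {"Values": ["arn:aws:iam::111122223333:role/config-remediation-role"]}},
        "TopicArn": {"StaticValue": {"Values": ["arn:aws:sns:eu-west-2:111122223333:config-noncompliance-alerts"]}},
        "Message": {"StaticValue": {"Values": ["An EBS volume is not encrypted with the required CMK"]}}
      }
    }]'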

 

Summary:

We hope you now understand what is expected in the AWS Security Specialty certification exam and how to prepare for it. Make sure you go through these AWS questions and the detailed answers to understand which domain each question belongs to and to familiarise yourself with them. Keep learning!

About Dharmendra Digari

Dharmalingam carries years of experience as a product manager. He pursued his MBA, which honed his skills of seeing products differently than others perceive. He specialises in products from the information technology and services domain, with a proven history of expertise. His skills include AWS, Google Cloud Platform, Customer Relationship Management, IT Business Analysis and Customer Service Operations. He has specifically helped many companies in the e-commerce domain establish themselves with refined and well-developed products, carving a niche for themselves.
