
DOP-C02 Exam Dumps - Amazon Web Services AWS Certified DevOps Engineer - Professional Questions and Answers

Question # 44

A company wants to build a pipeline to update the standard AMI monthly. The AMI must be updated to use the most recent patches to ensure that launched Amazon EC2 instances are up to date. Each new AMI must be available to all AWS accounts in the company's organization in AWS Organizations.

The company needs to configure an automated pipeline to build the AMI.

Which solution will meet these requirements with the MOST operational efficiency?

Options:

A.

Create an AWS CodePipeline pipeline that uses AWS CodeBuild. Create an AWS Lambda function to run the pipeline every month. Create an AWS CloudFormation template. Share the template with all AWS accounts in the organization.

B.

Create an AMI pipeline by using EC2 Image Builder. Configure the pipeline to distribute the AMI to the AWS accounts in the organization. Configure the pipeline to run monthly.

C.

Create an AWS CodePipeline pipeline that runs an AWS Lambda function to build the AMI. Configure the pipeline to share the AMI with the AWS accounts in the organization. Configure Amazon EventBridge Scheduler to invoke the pipeline every month.

D.

Create an AWS Systems Manager Automation runbook. Configure the automation to run in all AWS accounts in the organization. Create an AWS Lambda function to run the automation every month.
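For reference, a minimal boto3 sketch of the EC2 Image Builder approach described in option B. The account ID, organization ARN, and the image recipe and infrastructure configuration ARNs are placeholders; the distribution configuration's launchPermission setting is what makes each new AMI available to every account in the organization, and the cron schedule rebuilds it monthly.

```python
import boto3

ib = boto3.client("imagebuilder")

# Distribution configuration that shares each new AMI with the whole
# organization via launchPermission.organizationArns.
dist = ib.create_distribution_configuration(
    name="monthly-standard-ami-dist",
    distributions=[{
        "region": "us-east-1",
        "amiDistributionConfiguration": {
            "launchPermission": {
                "organizationArns": [
                    "arn:aws:organizations::111122223333:organization/o-example"
                ]
            }
        },
    }],
)

# Pipeline that rebuilds the AMI on the first day of every month.
ib.create_image_pipeline(
    name="monthly-standard-ami",
    imageRecipeArn="arn:aws:imagebuilder:us-east-1:111122223333:image-recipe/standard/1.0.0",
    infrastructureConfigurationArn="arn:aws:imagebuilder:us-east-1:111122223333:infrastructure-configuration/default",
    distributionConfigurationArn=dist["distributionConfigurationArn"],
    schedule={
        "scheduleExpression": "cron(0 0 1 * ? *)",
        "pipelineExecutionStartCondition": "EXPRESSION_MATCH_ONLY",
    },
)
```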

Question # 45

A company is divided into teams. Each team has an AWS account, and all the accounts are in an organization in AWS Organizations. Each team must retain full administrative rights to its AWS account. Each team also must be allowed to access only AWS services that the company approves for use. AWS services must gain approval through a request and approval process.

How should a DevOps engineer configure the accounts to meet these requirements?

Options:

A.

Use AWS CloudFormation StackSets to provision IAM policies in each account to deny access to restricted AWS services. In each account, configure AWS Config rules that ensure that the policies are attached to IAM principals in the account.

B.

Use AWS Control Tower to provision the accounts into OUs within the organization. Configure AWS Control Tower to enable AWS IAM Identity Center (AWS Single Sign-On). Configure IAM Identity Center to provide administrative access. Include deny policies on user roles for restricted AWS services.

C.

Place all the accounts under a new top-level OU within the organization. Create an SCP that denies access to restricted AWS services. Attach the SCP to the OU.

D.

Create an SCP that allows access to only approved AWS services. Attach the SCP to the root OU of the organization. Remove the FullAWSAccess SCP from the root OU of the organization.
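To make option D concrete, here is a hedged boto3 sketch that creates an allow-list SCP, attaches it to the organization root, and detaches the default FullAWSAccess policy. The approved service list is a placeholder for whatever the request-and-approval process produces. SCPs only filter permissions, so each team's administrators keep full rights inside their own account for the allowed services.

```python
import json
import boto3

org = boto3.client("organizations")

# Allow-list SCP: only approved services are usable anywhere in the org.
# The action list below is illustrative, not a recommendation.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:*", "s3:*", "cloudwatch:*", "logs:*", "iam:*"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="approved-services-only",
    Description="Allow only approved AWS services",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the allow list to the root, then remove the default
# FullAWSAccess SCP so the allow list actually constrains access.
root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                  TargetId=root_id)
org.detach_policy(PolicyId="p-FullAWSAccess", TargetId=root_id)
```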

Question # 46

A company uses an Amazon API Gateway regional REST API to host its application API. The REST API has a custom domain. The REST API's default endpoint is deactivated.

The company's internal teams consume the API. The company wants to use mutual TLS between the API and the internal teams as an additional layer of authentication.

Which combination of steps will meet these requirements? (Select TWO.)

Options:

A.

Use AWS Certificate Manager (ACM) to create a private certificate authority (CA). Provision a client certificate that is signed by the private CA.

B.

Provision a client certificate that is signed by a public certificate authority (CA). Import the certificate into AWS Certificate Manager (ACM).

C.

Upload the provisioned client certificate to an Amazon S3 bucket. Configure the API Gateway mutual TLS to use the client certificate that is stored in the S3 bucket as the trust store.

D.

Upload the provisioned client certificate private key to an Amazon S3 bucket. Configure the API Gateway mutual TLS to use the private key that is stored in the S3 bucket as the trust store.

E.

Upload the root private certificate authority (CA) certificate to an Amazon S3 bucket. Configure the API Gateway mutual TLS to use the private CA certificate that is stored in the S3 bucket as the trust store.
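As a sketch of the trust-store wiring described in option E (with clients presenting certificates signed by the private CA, as in option A), the snippet below exports the private CA's certificate from AWS Private CA, publishes it to S3, and points the custom domain's mutual TLS configuration at it. The CA ARN, bucket, domain, and ACM certificate ARN are all placeholders. Note that the trust store holds the CA certificate, never a private key.

```python
import boto3

pca = boto3.client("acm-pca")
s3 = boto3.client("s3")
apigw = boto3.client("apigateway")

ca_arn = "arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/example"

# Export the private CA certificate and publish it as the trust store.
ca_cert = pca.get_certificate_authority_certificate(
    CertificateAuthorityArn=ca_arn)
s3.put_object(Bucket="example-truststore-bucket",
              Key="truststore.pem",
              Body=ca_cert["Certificate"].encode())

# Custom domain with mutual TLS validating clients against the S3
# trust store; the REST API's default endpoint stays deactivated.
apigw.create_domain_name(
    domainName="api.internal.example.com",
    regionalCertificateArn="arn:aws:acm:us-east-1:111122223333:certificate/example",
    endpointConfiguration={"types": ["REGIONAL"]},
    securityPolicy="TLS_1_2",
    mutualTlsAuthentication={
        "truststoreUri": "s3://example-truststore-bucket/truststore.pem"
    },
)
```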

Question # 47

A company is launching an application that stores raw data in an Amazon S3 bucket. Three applications need to access the data to generate reports. The data must be redacted differently for each application before the applications can access the data.

Which solution will meet these requirements?

Options:

A.

Create an S3 bucket for each application. Configure S3 Same-Region Replication (SRR) from the raw data's S3 bucket to each application's S3 bucket. Configure each application to consume data from its own S3 bucket.

B.

Create an Amazon Kinesis data stream. Create an AWS Lambda function that is invoked by object creation events in the raw data's S3 bucket. Program the Lambda function to redact data for each application. Publish the data on the Kinesis data stream. Configure each application to consume data from the Kinesis data stream.

C.

For each application, create an S3 access point that uses the raw data's S3 bucket as the destination. Create an AWS Lambda function that is invoked by object creation events in the raw data's S3 bucket. Program the Lambda function to redact data for each application. Store the data in each application's S3 access point. Configure each application to consume data from its own S3 access point.

D.

Create an S3 access point that uses the raw data's S3 bucket as the destination. For each application, create an S3 Object Lambda access point that uses the S3 access point. Configure the AWS Lambda function for each S3 Object Lambda access point to redact data when objects are retrieved. Configure each application to consume data from its own S3 Object Lambda access point.
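A minimal sketch of option D's layout with boto3: one supporting access point on the raw bucket, plus one S3 Object Lambda access point per application. The account ID, bucket, and function names are placeholders, and each per-app redaction Lambda is assumed to exist already.

```python
import boto3

s3control = boto3.client("s3control")
account = "111122223333"

# Supporting access point over the raw data bucket.
ap = s3control.create_access_point(
    AccountId=account,
    Name="raw-data-ap",
    Bucket="raw-data-bucket",
)

# One Object Lambda access point per application; its Lambda redacts
# objects on retrieval, so the raw bucket is never read directly.
for app in ("app-a", "app-b", "app-c"):
    s3control.create_access_point_for_object_lambda(
        AccountId=account,
        Name=f"{app}-redacted",
        Configuration={
            "SupportingAccessPoint": ap["AccessPointArn"],
            "TransformationConfigurations": [{
                "Actions": ["GetObject"],
                "ContentTransformation": {
                    "AwsLambda": {
                        "FunctionArn": f"arn:aws:lambda:us-east-1:{account}:function:{app}-redactor"
                    }
                },
            }],
        },
    )
```

Inside each redaction function, the Lambda reads the original object through the presigned URL that S3 Object Lambda passes in the event and returns the transformed bytes with the S3 WriteGetObjectResponse API.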

Question # 48

A DevOps team is deploying microservices for an application on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The cluster uses managed node groups.

The DevOps team wants to enable auto scaling for the microservice Pods based on a specific CPU utilization percentage. The DevOps team has already installed the Kubernetes Metrics Server on the cluster.

Which solution will meet these requirements in the MOST operationally efficient way?

Options:

A.

Edit the Auto Scaling group that is associated with the worker nodes of the EKS cluster. Configure the Auto Scaling group to use a target tracking scaling policy to scale when the average CPU utilization of the Auto Scaling group reaches a specific percentage.

B.

Deploy the Kubernetes Horizontal Pod Autoscaler (HPA) and the Kubernetes Vertical Pod Autoscaler (VPA) in the cluster. Configure the HPA to scale based on the target CPU utilization percentage. Configure the VPA to use the recommender mode setting.

C.

Run the AWS Systems Manager AWS-UpdateEKSManagedNodeGroup Automation document. Modify the values for NodeGroupDesiredSize, NodeGroupMaxSize, and NodeGroupMinSize to be based on an estimate for the required node size.

D.

Deploy the Kubernetes Horizontal Pod Autoscaler (HPA) and the Kubernetes Cluster Autoscaler in the cluster. Configure the HPA to scale based on the target CPU utilization percentage. Configure the Cluster Autoscaler to use the auto-discovery setting.
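For illustration, a minimal HorizontalPodAutoscaler (autoscaling/v2) targeting 70% average CPU, applied with the Kubernetes Python client. The Deployment name, namespace, replica bounds, and the 70% target are assumptions, not values from the question.

```python
from kubernetes import client, config, utils

config.load_kube_config()

# HPA manifest: scale the "orders" Deployment between 2 and 10 replicas
# to hold average CPU utilization near 70%. Requires the Metrics Server,
# which the question says is already installed.
hpa = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "orders-hpa", "namespace": "default"},
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1",
                           "kind": "Deployment",
                           "name": "orders"},
        "minReplicas": 2,
        "maxReplicas": 10,
        "metrics": [{
            "type": "Resource",
            "resource": {
                "name": "cpu",
                "target": {"type": "Utilization", "averageUtilization": 70},
            },
        }],
    },
}

utils.create_from_dict(client.ApiClient(), hpa)
```

The HPA only scales Pods; if node capacity must also grow, the Cluster Autoscaler is deployed separately and typically discovers the managed node groups' Auto Scaling groups through its --node-group-auto-discovery flag.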

Question # 49

A company is migrating its web application to AWS. The application uses WebSocket connections for real-time updates and requires sticky sessions.

A DevOps engineer must implement a highly available architecture for the application. The application must be accessible to users worldwide with the least possible latency.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Deploy an Application Load Balancer (ALB). Deploy another ALB in a different AWS Region. Enable cross-zone load balancing and sticky sessions on the ALBs. Integrate the ALBs with Amazon Route 53 latency-based routing.

B.

Deploy a Network Load Balancer (NLB). Deploy another NLB in a different AWS Region. Enable cross-zone load balancing and sticky sessions on the NLBs. Integrate the NLBs with Amazon Route 53 geolocation routing.

C.

Deploy a Network Load Balancer (NLB) with cross-zone load balancing enabled. Configure the NLB with IP-based targets in multiple Availability Zones. Use Amazon CloudFront for global content delivery. Implement sticky sessions by using source IP address preservation on the NLB.

D.

Deploy an Application Load Balancer (ALB) for HTTP traffic. Deploy a Network Load Balancer (NLB) in each of the company's AWS Regions for WebSocket connections. Enable sticky sessions on the ALB. Configure the ALB to forward requests to the NLB.
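To show what the Route 53 side of option A's multi-Region ALB setup could look like, here is a hedged boto3 sketch that upserts one latency-based alias record per Region. The hosted zone ID, record name, and ALB DNS names are placeholders; SetIdentifier plus Region is what makes the record set latency-based.

```python
import boto3

r53 = boto3.client("route53")

# One ALB per Region; Route 53 answers with the lowest-latency one.
# The ELB hosted zone IDs vary by Region; these are examples.
records = [
    ("us-east-1", "alb-use1.example.elb.amazonaws.com", "Z35SXDOTRQ7X7K"),
    ("eu-west-1", "alb-euw1.example.elb.amazonaws.com", "Z32O12XQLNTSW2"),
]

changes = [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": region,   # one record per Region
        "Region": region,          # latency-based routing key
        "AliasTarget": {"HostedZoneId": alb_zone_id,
                        "DNSName": dns,
                        "EvaluateTargetHealth": True},
    },
} for region, dns, alb_zone_id in records]

r53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",
    ChangeBatch={"Changes": changes},
)
```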

Question # 50

A company is using AWS CodePipeline to automate its release pipeline. AWS CodeDeploy is being used in the pipeline to deploy an application to Amazon Elastic Container Service (Amazon ECS) using the blue/green deployment model. The company wants to implement scripts to test the green version of the application before shifting traffic. These scripts will complete in 5 minutes or less. If errors are discovered during these tests, the application must be rolled back.

Which strategy will meet these requirements?

Options:

A.

Add a stage to the CodePipeline pipeline between the source and deploy stages. Use AWS CodeBuild to create a runtime environment and build commands in the buildspec file to invoke test scripts. If errors are found, use the aws deploy stop-deployment command to stop the deployment.

B.

Add a stage to the CodePipeline pipeline between the source and deploy stages. Use this stage to invoke an AWS Lambda function that will run the test scripts. If errors are found, use the aws deploy stop-deployment command to stop the deployment.

C.

Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTestTraffic lifecycle event to invoke an AWS Lambda function to run the test scripts. If errors are found, exit the Lambda function with an error to initiate rollback.

D.

Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTraffic lifecycle event to invoke the test scripts. If errors are found, use the aws deploy stop-deployment CLI command to stop the deployment.
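A minimal sketch of the hook function from option C. The AppSpec file's Hooks section would list this function's ARN under AfterAllowTestTraffic; CodeDeploy invokes it with a DeploymentId and LifecycleEventHookExecutionId, and reporting a Failed status triggers the rollback. run_smoke_tests() is a placeholder for the company's own test scripts.

```python
import boto3

codedeploy = boto3.client("codedeploy")

def handler(event, context):
    """AfterAllowTestTraffic hook: test the green task set while it
    serves only test traffic, then report the result to CodeDeploy."""
    status = "Succeeded" if run_smoke_tests() else "Failed"
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=event["DeploymentId"],
        lifecycleEventHookExecutionId=event["LifecycleEventHookExecutionId"],
        status=status,  # "Failed" causes CodeDeploy to roll back
    )
    return status

def run_smoke_tests() -> bool:
    # Placeholder: call the test listener endpoint and validate responses.
    return True
```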

Question # 51

A company is running a custom-built application that processes records. All the components run on Amazon EC2 instances that run in an Auto Scaling group. Each record's processing is a multistep sequential action that is compute-intensive. Each step is always completed in 5 minutes or less.

A limitation of the current system is that if any step fails, the application has to reprocess the record from the beginning. The company wants to update the architecture so that the application must reprocess only the failed steps.

What is the MOST operationally efficient solution that meets these requirements?

Options:

A.

Create a web application to write records to Amazon S3. Use S3 Event Notifications to publish to an Amazon Simple Notification Service (Amazon SNS) topic. Use an EC2 instance to poll Amazon SNS and start processing. Save intermediate results to Amazon S3 to pass on to the next step.

B.

Perform the processing steps by using logic in the application. Convert the application code to run in a container. Use AWS Fargate to manage the container instances. Configure the container to invoke itself to pass the state from one step to the next.

C.

Create a web application to pass records to an Amazon Kinesis data stream. Decouple the processing by using the Kinesis data stream and AWS Lambda functions.

D.

Create a web application to pass records to AWS Step Functions. Decouple the processing into Step Functions tasks and AWS Lambda functions.
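A hedged boto3 sketch of option D's decomposition: each processing step becomes its own Lambda-backed Task state with a per-state Retry, so a failure re-runs only that step. The function ARNs, role ARN, and two-step flow are placeholders for the real multistep sequence.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Amazon States Language definition: the output of Step1 becomes the
# input of Step2, and retries apply per state rather than per record.
definition = {
    "StartAt": "Step1",
    "States": {
        "Step1": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:step1",
            "Retry": [{"ErrorEquals": ["States.ALL"], "MaxAttempts": 3}],
            "Next": "Step2",
        },
        "Step2": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:step2",
            "Retry": [{"ErrorEquals": ["States.ALL"], "MaxAttempts": 3}],
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="record-processor",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/record-processor-role",
)
```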

Question # 52

A DevOps engineer needs to configure an AWS CodePipeline pipeline that publishes container images to an Amazon ECR repository. The pipeline must wait for the previous run to finish and must run when new Git tags are pushed to a Git repository connected to AWS CodeConnections. An existing deployment pipeline must run in response to new container image publications.

Which solution will meet these requirements?

Options:

A.

Configure a CodePipeline V2 type pipeline that uses QUEUED mode. Add a trigger filter to the pipeline definition that includes all tags. Configure an EventBridge rule that matches container image pushes to start the existing deployment pipeline.

B.

Configure a CodePipeline V2 type pipeline that uses SUPERSEDED mode. Add a trigger filter to the pipeline definition that includes all branches. Configure an EventBridge rule that matches container image pushes to start the existing deployment pipeline.

C.

Configure a CodePipeline V1 type pipeline that uses SUPERSEDED mode. Add a trigger filter to the pipeline definition that includes all tags. Add a stage at the end of the pipeline to invoke the existing deployment pipeline.

D.

Configure a CodePipeline V1 type pipeline that uses QUEUED mode. Add a trigger filter to the pipeline definition that includes all branches. Add a stage at the end of the pipeline to invoke the existing deployment pipeline.
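To illustrate option A's two pieces, a hedged boto3 sketch: first the V2/QUEUED/tag-trigger settings applied to an existing pipeline, then an EventBridge rule that starts the deployment pipeline on ECR image pushes. Pipeline names, the "Source" action name, repository name, and ARNs are placeholders.

```python
import boto3

cp = boto3.client("codepipeline")
events = boto3.client("events")

# V2 pipeline in QUEUED mode (runs wait for the previous run), triggered
# by pushes of any Git tag on the CodeConnections-linked repository.
pipeline = cp.get_pipeline(name="image-build")["pipeline"]
pipeline["pipelineType"] = "V2"
pipeline["executionMode"] = "QUEUED"
pipeline["triggers"] = [{
    "providerType": "CodeStarSourceConnection",
    "gitConfiguration": {
        "sourceActionName": "Source",
        "push": [{"tags": {"includes": ["*"]}}],
    },
}]
cp.update_pipeline(pipeline=pipeline)

# EventBridge rule: start the existing deployment pipeline whenever a
# new image is pushed to the ECR repository.
events.put_rule(
    Name="ecr-image-push",
    EventPattern="""{
      "source": ["aws.ecr"],
      "detail-type": ["ECR Image Action"],
      "detail": {"action-type": ["PUSH"], "result": ["SUCCESS"],
                 "repository-name": ["app-images"]}
    }""",
)
events.put_targets(
    Rule="ecr-image-push",
    Targets=[{
        "Id": "deploy-pipeline",
        "Arn": "arn:aws:codepipeline:us-east-1:111122223333:deploy-pipeline",
        "RoleArn": "arn:aws:iam::111122223333:role/eventbridge-pipeline-start",
    }],
)
```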

Question # 53

A company has an AWS account named PipelineAccount. The account manages a pipeline in AWS CodePipeline. The account uses an IAM role named CodePipeline_Service_Role and produces an artifact that is stored in an Amazon S3 bucket. The company uses a customer managed AWS KMS key to encrypt objects in the S3 bucket.

A DevOps engineer wants to configure the pipeline to use an AWS CodeDeploy application in an AWS account named CodeDeployAccount to deploy the produced artifact.

The DevOps engineer updates the KMS key policy to grant the CodeDeployAccount account permission to use the key. The DevOps engineer configures an IAM role named DevOps_Role in the CodeDeployAccount account that has access to the CodeDeploy resources that the pipeline requires. The DevOps engineer updates an Amazon EC2 instance role that operates within the CodeDeployAccount account to allow access to the S3 bucket and the KMS key that is in the PipelineAccount account.

Which additional steps will meet these requirements?

Options:

A.

Update the S3 bucket policy to grant the CodeDeployAccount account access to the S3 bucket. Configure the DevOps_Role IAM role to have an IAM trust policy that allows the PipelineAccount account to assume the role. Update the CodePipeline_Service_Role IAM role to grant permission to assume the DevOps_Role role.

B.

Update the S3 bucket policy to grant the CodeDeployAccount account access to the S3 bucket. Configure the DevOps_Role IAM role to have an IAM trust policy that allows the PipelineAccount account to assume the role. Update the DevOps_Role IAM role to grant permission to assume CodePipeline_Service_Role role.

C.

Update the S3 bucket policy to grant the PipelineAccount account access to the S3 bucket. Configure the DevOps_Role IAM role to have an IAM trust policy that allows the PipelineAccount account to assume the role. Update the CodePipeline_Service_Role IAM role to grant permission to assume the DevOps_Role role.

D.

Update the S3 bucket policy to grant the CodeDeployAccount account access to the S3 bucket. Configure the DevOps_Role IAM role to have an IAM trust policy that allows the CodeDeployAccount account to assume the role. Update the CodePipeline_Service_Role IAM role to grant permission to assume the DevOps_Role role.
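For concreteness, a sketch of the three policy changes option A describes, with placeholder account IDs (111111111111 for PipelineAccount, 222222222222 for CodeDeployAccount) and a placeholder artifact bucket name. The two IAM calls run in different accounts, as noted in the comments.

```python
import json
import boto3

# 1. Artifact bucket policy (applied in PipelineAccount): let the
#    CodeDeployAccount read the produced artifacts.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::pipeline-artifacts",
                     "arn:aws:s3:::pipeline-artifacts/*"],
    }],
}
boto3.client("s3").put_bucket_policy(
    Bucket="pipeline-artifacts", Policy=json.dumps(bucket_policy))

# 2. Trust policy on DevOps_Role: the PipelineAccount may assume it.
#    (Run with CodeDeployAccount credentials.)
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
        "Action": "sts:AssumeRole",
    }],
}
boto3.client("iam").update_assume_role_policy(
    RoleName="DevOps_Role", PolicyDocument=json.dumps(trust_policy))

# 3. Inline policy on CodePipeline_Service_Role: permission to assume
#    DevOps_Role across the account boundary.
#    (Run with PipelineAccount credentials.)
boto3.client("iam").put_role_policy(
    RoleName="CodePipeline_Service_Role",
    PolicyName="assume-devops-role",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "sts:AssumeRole",
                       "Resource": "arn:aws:iam::222222222222:role/DevOps_Role"}],
    }),
)
```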

Exam Code: DOP-C02
Exam Name: AWS Certified DevOps Engineer - Professional
Last Update: Nov 20, 2025
Questions: 366