
SAP-C02 Exam Dumps - Amazon Web Services AWS Certified Solutions Architect - Professional Questions and Answers

Question # 19

A company is planning to migrate its on-premises transaction-processing application to AWS. The application runs inside Docker containers that are hosted on VMs in the company's data center. The Docker containers have shared storage where the application records transaction data.

The transactions are time sensitive. The volume of transactions inside the application is unpredictable. The company must implement a low-latency storage solution that will automatically scale throughput to meet increased demand. The company cannot develop the application further and cannot continue to administer the Docker hosting environment.

How should the company migrate the application to AWS to meet these requirements?

Options:

A.

Migrate the containers that run the application to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon S3 to store the transaction data that the containers share.

B.

Migrate the containers that run the application to AWS Fargate for Amazon Elastic Container Service (Amazon ECS). Create an Amazon Elastic File System (Amazon EFS) file system. Create a Fargate task definition. Add a volume to the task definition to point to the EFS file system.

C.

Migrate the containers that run the application to AWS Fargate for Amazon Elastic Container Service (Amazon ECS). Create an Amazon Elastic Block Store (Amazon EBS) volume. Create a Fargate task definition. Attach the EBS volume to each running task.

D.

Launch Amazon EC2 instances. Install Docker on the EC2 instances. Migrate the containers to the EC2 instances. Create an Amazon Elastic File System (Amazon EFS) file system. Add a mount point to the EC2 instances for the EFS file system.
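
To illustrate the mechanics behind option B, here is a minimal boto3 sketch of a Fargate task definition that mounts a shared EFS file system. All IDs, ARNs, and the image URI are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Register a Fargate task definition whose container mounts a shared
# EFS file system; EFS throughput can scale automatically with demand.
# All identifiers below are placeholders.
ecs.register_task_definition(
    family="txn-processor",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/txn-app:latest",
            "essential": True,
            "mountPoints": [
                {"sourceVolume": "shared-data", "containerPath": "/mnt/data"}
            ],
        }
    ],
    volumes=[
        {
            "name": "shared-data",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-0123456789abcdef0",
                "transitEncryption": "ENABLED",
            },
        }
    ],
)
```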

Question # 20

A company is creating a REST API to share information with six of its partners based in the United States. The company has created an Amazon API Gateway Regional endpoint. Each of the six partners will access the API once per day to post daily sales figures.

After initial deployment, the company observes 1,000 requests per second originating from 500 different IP addresses around the world. The company believes this traffic is originating from a botnet and wants to secure its API while minimizing cost.

Which approach should the company take to secure its API?

Options:

A.

Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than five requests per day. Associate the web ACL with the CloudFront distribution. Configure CloudFront with an origin access identity (OAI) and associate it with the distribution. Configure API Gateway to ensure only the OAI can run the POST method.

B.

Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than five requests per day. Associate the web ACL with the CloudFront distribution. Add a custom header to the CloudFront distribution populated with an API key. Configure the API to require an API key on the POST method.

C.

Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the six partners. Associate the web ACL with the API. Create a resource policy with a request limit and associate it with the API. Configure the API to require an API key on the POST method.

D.

Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the six partners. Associate the web ACL with the API. Create a usage plan with a request limit and associate it with the API. Create an API key and add it to the usage plan.
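
To illustrate the throttling mechanics in option D, a usage plan caps each partner's API key at a daily quota and a low steady-state rate. IDs and limits below are placeholders:

```python
import boto3

apigw = boto3.client("apigateway")

# Usage plan: a daily quota plus throttle limits, bound to the API stage.
plan = apigw.create_usage_plan(
    name="partner-plan",
    throttle={"rateLimit": 1.0, "burstLimit": 2},
    quota={"limit": 10, "period": "DAY"},
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
)

# One API key per partner, attached to the plan.
key = apigw.create_api_key(name="partner-1", enabled=True)
apigw.create_usage_plan_key(
    usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
)
```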

Question # 21

A company is running a data-intensive application on AWS. The application runs on a cluster of hundreds of Amazon EC2 instances. A shared file system also runs on several EC2 instances that store 200 TB of data. The application reads and modifies the data on the shared file system and generates a report. The job runs once monthly, reads a subset of the files from the shared file system, and takes about 72 hours to complete. The compute instances scale in an Auto Scaling group, but the instances that host the shared file system run continuously. The compute and storage instances are all in the same AWS Region.

A solutions architect needs to reduce costs by replacing the shared file system instances. The file system must provide high performance access to the needed data for the duration of the 72-hour run.

Which solution will provide the LARGEST overall cost reduction while meeting these requirements?

Options:

A.

Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Intelligent-Tiering storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using lazy loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.

B.

Migrate the data from the existing shared file system to a large Amazon Elastic Block Store (Amazon EBS) volume with Multi-Attach enabled. Attach the EBS volume to each of the instances by using a user data script in the Auto Scaling group launch template. Use the EBS volume as the shared storage for the duration of the job. Detach the EBS volume when the job is complete.

C.

Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Standard storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using batch loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.

D.

Migrate the data from the existing shared file system to an Amazon S3 bucket. Before the job runs each month, use AWS Storage Gateway to create a file gateway with the data from Amazon S3. Use the file gateway as the shared storage for the job. Delete the file gateway when the job is complete.
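
For the monthly workflow described in option A, a minimal sketch: create a scratch FSx for Lustre file system linked to the S3 bucket, which imports file metadata up front and lazy-loads file contents from S3 on first access. Bucket, subnet, and capacity values are placeholders:

```python
import boto3

fsx = boto3.client("fsx")

# Scratch Lustre file system linked to S3: metadata is imported at
# creation, file contents are loaded lazily on first read. Sized to
# the subset of data the job actually reads (placeholder value).
response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=2400,  # GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://example-shared-data",
    },
)
print(response["FileSystem"]["FileSystemId"])
```

The file system is deleted after the 72-hour run, so storage costs accrue only for the duration of the job.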

Question # 22

A company manufactures IoT sensors with certificates issued by a private certificate authority (CA). The sensors must be allowed to connect to AWS only after physical installation. Which solution will meet this requirement?

Options:

A.

Use a Lambda function as a pre-provisioning hook to validate the serial number before registration.

B.

Use Step Functions to validate before provisioning.

C.

Use a Lambda pre-provisioning hook, but register the CA and enable auto-registration.

D.

Use a provisioning template and claim certificates without validation.
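
To illustrate the mechanism in option A: AWS IoT fleet provisioning can invoke a Lambda function as a pre-provisioning hook, and returning allowProvisioning=False rejects the device. The allow-list check below is a hypothetical stand-in for a real validation source:

```python
# Minimal AWS IoT pre-provisioning hook. AWS IoT invokes this function
# before fleet provisioning completes; the serial-number allow list is
# a placeholder for a real validation backend.
KNOWN_SERIALS = {"SN-0001", "SN-0002"}  # placeholder allow list

def lambda_handler(event, context):
    serial = event.get("parameters", {}).get("SerialNumber", "")
    return {
        "allowProvisioning": serial in KNOWN_SERIALS,
        # Template parameters can be overridden here if needed.
        "parameterOverrides": {},
    }
```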

Question # 23

A company recently acquired several other companies. Each company has a separate AWS account with a different billing and reporting method. The acquiring company has consolidated all the accounts into one organization in AWS Organizations. However, the acquiring company has found it difficult to generate a cost report that contains meaningful groups for all the teams.

The acquiring company’s finance team needs a solution to report on costs for all the companies through a self-managed application.

Which solution will meet these requirements?

Options:

A.

Create an AWS Cost and Usage Report for the organization. Define tags and cost categories in the report. Create a table in Amazon Athena. Create an Amazon QuickSight dataset based on the Athena table. Share the dataset with the finance team.

B.

Create an AWS Cost and Usage Report for the organization. Define tags and cost categories in the report. Create a specialized template in AWS Cost Explorer that the finance department will use to build reports.

C.

Create an Amazon QuickSight dataset that receives spending information from the AWS Price List Query API. Share the dataset with the finance team.

D.

Use the AWS Price List Query API to collect account spending information. Create a specialized template in AWS Cost Explorer that the finance department will use to build reports.
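
For the reporting path in option A, once the Cost and Usage Report is delivered to S3 and exposed through Athena, the finance application can query it directly. Database, table, column, and bucket names below are placeholders; actual CUR column names depend on the report definition and the cost categories configured:

```python
import boto3

athena = boto3.client("athena")

# Query the CUR table grouped by a cost category column (placeholder
# names throughout); QuickSight can use the same table as a dataset.
athena.start_query_execution(
    QueryString="""
        SELECT cost_category_business_unit,
               SUM(line_item_unblended_cost) AS cost
        FROM cur_database.cur_table
        WHERE year = '2025' AND month = '06'
        GROUP BY cost_category_business_unit
    """,
    QueryExecutionContext={"Database": "cur_database"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```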

Question # 24

A global company has a mobile app that displays ticket barcodes. Customers use the tickets on the mobile app to attend live events. Event scanners read the ticket barcodes and call a backend API to validate the barcode data against data in a database. After the barcode is scanned, the backend logic writes to the database's single table to mark the barcode as used. The company needs to deploy the app on AWS with a DNS name of api.example.com. The company will host the database in three AWS Regions around the world.

Which solution will meet these requirements with the LOWEST latency?

Options:

A.

Host the database on Amazon Aurora global database clusters. Host the backend on three Amazon ECS clusters that are in the same Regions as the database. Create an accelerator in AWS Global Accelerator to route requests to the nearest ECS cluster. Create an Amazon Route 53 record that maps api.example.com to the accelerator endpoint.

B.

Host the database on Amazon Aurora global database clusters. Host the backend on three Amazon EKS clusters that are in the same Regions as the database. Create an Amazon CloudFront distribution with the three clusters as origins. Route requests to the nearest EKS cluster. Create an Amazon Route 53 record that maps api.example.com to the CloudFront distribution.

C.

Host the database on Amazon DynamoDB global tables. Create an Amazon CloudFront distribution. Associate the CloudFront distribution with a CloudFront function that contains the backend logic to validate the barcodes. Create an Amazon Route 53 record that maps api.example.com to the CloudFront distribution.

D.

Host the database on Amazon DynamoDB global tables. Create an Amazon CloudFront distribution. Associate the CloudFront distribution with a Lambda@Edge function that contains the backend logic to validate the barcodes. Create an Amazon Route 53 record that maps api.example.com to the CloudFront distribution.
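
To sketch the backend logic described in option D, a Lambda@Edge function can mark a barcode as used with a conditional write so that a ticket cannot be scanned twice; DynamoDB global tables provide a replica near each executing Region. Table, key, and parsing details are simplified placeholders:

```python
import boto3

# Resolves to the Region where the edge replica of the function runs.
dynamodb = boto3.client("dynamodb")

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    barcode = request["querystring"].split("=", 1)[-1]  # simplified parsing
    try:
        # Conditional write: succeeds only if the barcode is unused.
        dynamodb.update_item(
            TableName="tickets",
            Key={"barcode": {"S": barcode}},
            UpdateExpression="SET used = :t",
            ConditionExpression="attribute_not_exists(used)",
            ExpressionAttributeValues={":t": {"BOOL": True}},
        )
        status = "200"
    except dynamodb.exceptions.ConditionalCheckFailedException:
        status = "409"  # barcode already used
    return {"status": status, "body": status}
```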

Question # 25

A company is migrating an application from on-premises infrastructure to the AWS Cloud. During migration design meetings, the company expressed concerns about the availability and recovery options for its legacy Windows file server. The file server contains sensitive business-critical data that cannot be recreated in the event of data corruption or data loss. According to compliance requirements, the data must not travel across the public internet. The company wants to move to AWS managed services where possible.

The company decides to store the data in an Amazon FSx for Windows File Server file system. A solutions architect must design a solution that copies the data to another AWS Region for disaster recovery (DR) purposes.

Which solution will meet these requirements?

Options:

A.

Create a destination Amazon S3 bucket in the DR Region. Establish connectivity between the FSx for Windows File Server file system in the primary Region and the S3 bucket in the DR Region by using Amazon FSx File Gateway. Configure the S3 bucket as a continuous backup source in FSx File Gateway.

B.

Create an FSx for Windows File Server file system in the DR Region. Establish connectivity between the VPC in the primary Region and the VPC in the DR Region by using AWS Site-to-Site VPN. Configure AWS DataSync to communicate by using VPN endpoints.

C.

Create an FSx for Windows File Server file system in the DR Region. Establish connectivity between the VPC in the primary Region and the VPC in the DR Region by using VPC peering. Configure AWS DataSync to communicate by using interface VPC endpoints with AWS PrivateLink.

D.

Create an FSx for Windows File Server file system in the DR Region. Establish connectivity between the VPC in the primary Region and the VPC in the DR Region by using AWS Transit Gateway in each Region. Use AWS Transfer Family to copy files between the FSx for Windows File Server file system in the primary Region and the FSx for Windows File Server file system in the DR Region over the private AWS backbone network.
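
As a sketch of the private connectivity in option C, the DR Region can host an interface VPC endpoint for DataSync so task traffic uses AWS PrivateLink across the peered VPCs; a DataSync task then replicates between the two FSx for Windows File Server locations. Region names, IDs, and ARNs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # placeholder DR Region

# Interface endpoint so DataSync traffic stays on private addressing
# across the VPC peering connection. IDs are placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-west-2.datasync",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)

datasync = boto3.client("datasync", region_name="us-west-2")

# Replication task between previously created FSx for Windows File
# Server locations (placeholder ARNs).
datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-west-2:111111111111:location/loc-source",
    DestinationLocationArn="arn:aws:datasync:us-west-2:111111111111:location/loc-dest",
    Name="fsx-dr-replication",
)
```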

Question # 26

A company runs an ecommerce web application on AWS. The web application is hosted as a static website on Amazon S3 with Amazon CloudFront for content delivery. An Amazon API Gateway API invokes AWS Lambda functions to handle user requests and order processing for the web application. The Lambda functions store data in an Amazon RDS for MySQL DB cluster that uses On-Demand Instances. The DB cluster usage has been consistent in the past 12 months.

Recently, the website has experienced SQL injection and web exploit attempts. Customers also report that order processing time has increased during periods of peak usage. During these periods, the Lambda functions often have cold starts. As the company grows, the company needs to ensure scalability and low-latency access during traffic peaks. The company also must optimize the database costs and add protection against the SQL injection and web exploit attempts.

Which solution will meet these requirements?

Options:

A.

Configure the Lambda functions to have an increased timeout value during peak periods. Use RDS Reserved Instances for the database. Use CloudFront and subscribe to AWS Shield Advanced to protect against the SQL injection and web exploit attempts.

B.

Increase the memory of the Lambda functions. Transition to Amazon Redshift for the database. Integrate Amazon Inspector with CloudFront to protect against the SQL injection and web exploit attempts.

C.

Use Lambda functions with provisioned concurrency for compute during peak periods. Transition to Amazon Aurora Serverless for the database. Use CloudFront and subscribe to AWS Shield Advanced to protect against the SQL injection and web exploit attempts.

D.

Use Lambda functions with provisioned concurrency for compute during peak periods. Use RDS Reserved Instances for the database. Integrate AWS WAF with CloudFront to protect against the SQL injection and web exploit attempts.
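
To illustrate the AWS WAF piece of option D: web ACLs for CloudFront must be created in us-east-1 with Scope="CLOUDFRONT", and AWS publishes a managed SQL injection rule group. ACL and metric names below are placeholders:

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Web ACL with the AWS managed SQLi rule group; the returned ACL ARN
# is then set on the CloudFront distribution to associate it.
wafv2.create_web_acl(
    Name="ecommerce-acl",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "sqli",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "sqli",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ecommerce-acl",
    },
)
```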

Question # 27

A company runs an IoT platform on AWS. IoT sensors in various locations send data to the company's Node.js API servers on Amazon EC2 instances running behind an Application Load Balancer. The data is stored in an Amazon RDS MySQL DB instance that uses a 4 TB General Purpose SSD volume.

The number of sensors the company has deployed in the field has increased over time and is expected to grow significantly. The API servers are consistently overloaded, and RDS metrics show high write latency.

Which of the following steps together will resolve the issues permanently and enable growth as new sensors are provisioned, while keeping this platform cost-efficient? (Select TWO.)

Options:

A.

Resize the MySQL General Purpose SSD storage to 6 TB to improve the volume's IOPS.

B.

Re-architect the database tier to use Amazon Aurora instead of an RDS MySQL DB instance and add read replicas.

C.

Leverage Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data.

D.

Use AWS X-Ray to analyze and debug application issues and add more API servers to match the load.

E.

Re-architect the database tier to use Amazon DynamoDB instead of an RDS MySQL DB instance.
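
For options C and E taken together, a minimal sketch: the ingestion path writes sensor readings to a Kinesis data stream, and a Lambda function mapped to the stream persists them in DynamoDB. Stream, table, and field names are placeholders:

```python
import base64
import json

import boto3

kinesis = boto3.client("kinesis")
dynamodb = boto3.client("dynamodb")

def publish_reading(sensor_id: str, payload: dict) -> None:
    # Producer side: the API layer writes to the stream instead of
    # writing to the database directly.
    kinesis.put_record(
        StreamName="sensor-readings",
        Data=json.dumps(payload).encode("utf-8"),
        PartitionKey=sensor_id,  # keeps each sensor's records ordered
    )

def lambda_handler(event, context):
    # Consumer side: the Kinesis event source mapping delivers batches
    # of base64-encoded records.
    for record in event["Records"]:
        reading = json.loads(base64.b64decode(record["kinesis"]["data"]))
        dynamodb.put_item(
            TableName="sensor-data",
            Item={
                "sensor_id": {"S": reading["sensor_id"]},
                "ts": {"N": str(reading["ts"])},
                "value": {"N": str(reading["value"])},
            },
        )
```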

Question # 28

A global company runs an analytics application on Amazon EC2 for computing. The company uses Amazon EBS as primary storage for raw and processed data. Users manually upload raw data daily to Amazon EC2 by using SSH from a local on-premises storage computer. The analytics application processes the data and a user manually uploads the data to Amazon S3 for long-term storage.

The company wants to containerize the processing logic and migrate the processing logic to Amazon EKS. The company needs an automated solution to upload and move the processed data. The solution must have multiprotocol support and be usable from the EKS cluster.

Which solution meets these requirements with the LEAST operational effort?

Options:

A.

Use AWS DataSync to copy raw data to Amazon EFS. Mount Amazon EFS on Amazon EKS as a volume. Use AWS Transfer for SFTP to copy processed data from Amazon EFS to Amazon S3.

B.

Use AWS DataSync to copy raw data to Amazon FSx for Lustre. Mount FSx for Lustre on Amazon EKS as a volume. Use DataSync to copy processed data from FSx for Lustre to Amazon S3.

C.

Use AWS DataSync to copy raw data to Amazon FSx for NetApp ONTAP. Mount FSx for NetApp ONTAP on Amazon EKS as a volume. Use DataSync to copy processed data from FSx for NetApp ONTAP to Amazon S3.

D.

Use AWS DataSync to copy raw data to Amazon FSx for NetApp ONTAP. Mount FSx for NetApp ONTAP on Amazon EKS as a volume. Use AWS Transfer for SFTP to copy processed data from FSx for NetApp ONTAP to Amazon S3.
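
To sketch the data movement in option C, DataSync can use the FSx for NetApp ONTAP volume as a source location and S3 as a destination. ARNs, role names, and bucket names below are placeholders:

```python
import boto3

datasync = boto3.client("datasync")

# Source: the ONTAP storage virtual machine, reached over NFS
# (multiprotocol access is a property of ONTAP itself).
src = datasync.create_location_fsx_ontap(
    Protocol={"NFS": {"MountOptions": {"Version": "NFS3"}}},
    SecurityGroupArns=[
        "arn:aws:ec2:us-east-1:111111111111:security-group/sg-0123456789abcdef0"
    ],
    StorageVirtualMachineArn=(
        "arn:aws:fsx:us-east-1:111111111111:storage-virtual-machine/"
        "fs-0123456789abcdef0/svm-0123456789abcdef0"
    ),
)

# Destination: the long-term S3 bucket.
dst = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-processed-data",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111111111111:role/datasync-s3"},
)

# Task that moves processed data to S3 on a schedule or on demand.
datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="processed-to-s3",
)
```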

Question # 29

A company is migrating a legacy application from an on-premises data center to AWS. The application uses MongoDB as a key-value database. According to the company's technical guidelines, all Amazon EC2 instances must be hosted in a private subnet without an internet connection. In addition, all connectivity between applications and databases must be encrypted. The database must be able to scale based on demand.

Which solution will meet these requirements?

Options:

A.

Create new Amazon DocumentDB (with MongoDB compatibility) tables for the application with Provisioned IOPS volumes. Use the instance endpoint to connect to Amazon DocumentDB.

B.

Create new Amazon DynamoDB tables for the application with on-demand capacity. Use a gateway VPC endpoint for DynamoDB to connect to the DynamoDB tables.

C.

Create new Amazon DynamoDB tables for the application with on-demand capacity. Use an interface VPC endpoint for DynamoDB to connect to the DynamoDB tables.

D.

Create new Amazon DocumentDB (with MongoDB compatibility) tables for the application with Provisioned IOPS volumes. Use the cluster endpoint to connect to Amazon DocumentDB.
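
As a sketch of the connectivity piece in option B, a gateway VPC endpoint keeps DynamoDB traffic on the AWS network with no internet path, and connections to DynamoDB are encrypted with TLS. IDs and the Region below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint: adds routes for DynamoDB's prefix list to the
# selected route tables, so private subnets reach DynamoDB without
# a NAT gateway or internet gateway. IDs are placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```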

Question # 30

A company is using AWS to develop and manage its production web application. The application includes an Amazon API Gateway HTTP API that invokes an AWS Lambda function. The Lambda function processes and then stores data in a database.

The company wants to implement user authorization for the web application in an integrated way. The company already uses a third-party identity provider that issues OAuth tokens for the company's other applications.

Which solution will meet these requirements?

Options:

A.

Integrate the company's third-party identity provider with API Gateway. Configure an API Gateway Lambda authorizer to validate tokens from the identity provider. Require the Lambda authorizer on all API routes. Update the web application to get tokens from the identity provider and include the tokens in the Authorization header when calling the API Gateway HTTP API.

B.

Integrate the company's third-party identity provider with AWS Directory Service. Configure Directory Service as an API Gateway authorizer to validate tokens from the identity provider. Require the Directory Service authorizer on all API routes. Configure AWS IAM Identity Center as a SAML 2.0 identity provider. Configure the web application as a custom SAML 2.0 application.

C.

Integrate the company's third-party identity provider with AWS IAM Identity Center. Configure API Gateway to use IAM Identity Center for zero-configuration authentication and authorization. Update the web application to retrieve AWS STS tokens from IAM Identity Center and include the tokens in the Authorization header when calling the API Gateway HTTP API.

D.

Integrate the company's third-party identity provider with AWS IAM Identity Center. Configure IAM users with permissions to call the API Gateway HTTP API. Update the web application to extract request parameters from the IAM users and include the parameters in the Authorization header when calling the API Gateway HTTP API.
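
To illustrate option A, below is a sketch of a Lambda authorizer for an HTTP API using the simple response format (enableSimpleResponses set to true). The validate_token helper is hypothetical; a real implementation would verify the OAuth token's signature, issuer, audience, and expiry against the identity provider's published keys:

```python
def validate_token(token: str) -> bool:
    # Hypothetical helper: verify the JWT against the IdP's JWKS.
    raise NotImplementedError

def lambda_handler(event, context):
    # The HTTP API authorizer payload (v2.0) lower-cases header names.
    auth_header = event.get("headers", {}).get("authorization", "")
    token = auth_header.removeprefix("Bearer ")
    try:
        authorized = validate_token(token)
    except Exception:
        authorized = False
    return {"isAuthorized": authorized}
```

Note that HTTP APIs also offer a built-in JWT authorizer that can validate OAuth 2.0 tokens directly when the provider exposes an OpenID Connect or OAuth 2.0 issuer URL, which avoids custom authorizer code entirely.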

Question # 31

A company is building a software-as-a-service (SaaS) solution on AWS. The company has deployed an Amazon API Gateway REST API with AWS Lambda integration in multiple AWS Regions and in the same production account.

The company offers tiered pricing that gives customers the ability to pay for the capacity to make a certain number of API calls per second. The premium tier offers up to 3,000 calls per second, and customers are identified by a unique API key. Several premium tier customers in various Regions report that they receive error responses of 429 Too Many Requests from multiple API methods during peak usage hours. Logs indicate that the Lambda function is never invoked.

What could be the cause of the error messages for these customers?

Options:

A.

The Lambda function reached its concurrency limit.

B.

The Lambda function reached its Region limit for concurrency.

C.

The company reached its API Gateway account limit for calls per second.

D.

The company reached its API Gateway default per-method limit for calls per second.
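
A quick way to inspect the account-level throttle that option C refers to is the API Gateway get_account call. This limit applies across all REST APIs in the account and Region, which is why throttling can produce 429 responses before any Lambda integration is invoked:

```python
import boto3

apigw = boto3.client("apigateway")

# Account-level throttle settings for REST APIs in this Region.
account = apigw.get_account()
print(account["throttleSettings"])  # e.g. {'burstLimit': 5000, 'rateLimit': 10000.0}
```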

Question # 32

A company uses AWS Organizations with all features enabled to manage its accounts. The company has configured AWS Backup to run every 4 hours on several Amazon EFS mount points in the eu-west-2 Region. The backups are stored in the default vault. The company needs a disaster recovery (DR) plan that restores into the eu-west-1 Region and a specific recovery account. The backups must be encrypted at all times.

Which solution will meet these requirements?

Options:

A.

Configure AWS Resource Access Manager (AWS RAM) to share the backup vault with the recovery account. Create a new backup vault in the recovery account. Encrypt the data by using an AWS managed KMS key. Schedule a copy job in the recovery account to copy the backup vault to the new vault.

B.

Create a new backup vault in the source account and a new backup vault in the recovery account. Encrypt the data by using a multi-Region customer managed KMS key. Redirect the backups to the new backup vault. Configure a key policy statement to allow access to the key from the recovery account. Schedule a cross-account backup plan to the recovery account.

C.

Create an Amazon S3 bucket. Create a new multi-Region customer managed KMS key to encrypt the S3 bucket data. Schedule a copy job from the backup vault that copies the data to the S3 bucket. Configure cross-account access for the recovery account to the S3 bucket. Schedule a second copy job in the recovery account to copy the data from the S3 bucket into the default vault.

D.

Configure AWS DataSync to copy the EFS data to eu-west-1 in the source account. In the recovery account, create a new backup vault. Encrypt the data by using an AWS managed KMS key. In the source account, schedule a cross-account backup plan to the recovery account's vault in eu-west-1.
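
To sketch option B: a multi-Region customer managed KMS key can be replicated into eu-west-1, and a backup rule's copy action can target the recovery account's vault (the key policy and the destination vault's access policy must also grant the recovery account access). ARNs, vault names, and account IDs are placeholders:

```python
import boto3

kms = boto3.client("kms", region_name="eu-west-2")

# Multi-Region customer managed key, replicated into the DR Region.
key = kms.create_key(MultiRegion=True, Description="backup-vault-key")
kms.replicate_key(
    KeyId=key["KeyMetadata"]["KeyId"], ReplicaRegion="eu-west-1"
)

backup = boto3.client("backup", region_name="eu-west-2")

# Backup plan: every 4 hours, with a copy action to the recovery
# account's vault in eu-west-1 (placeholder ARN). The vaults are
# created with the customer managed key as their encryption key.
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "efs-dr",
        "Rules": [
            {
                "RuleName": "every-4-hours",
                "TargetBackupVaultName": "source-vault",
                "ScheduleExpression": "cron(0 0/4 * * ? *)",
                "CopyActions": [
                    {
                        "DestinationBackupVaultArn": (
                            "arn:aws:backup:eu-west-1:222222222222:"
                            "backup-vault:recovery-vault"
                        )
                    }
                ],
            }
        ],
    }
)
```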

Question # 33

A solutions architect needs to migrate an on-premises legacy application to AWS. The application runs on two servers behind a load balancer. The application requires a license file that is associated with the MAC address of the server's network adapter. It takes the software vendor 12 hours to send new license files. The application also uses configuration files with a static IP address to access a database; host names are not supported.

Given these requirements, which combination of steps should be taken to implement a highly available architecture for the application servers in AWS? (Select TWO.)

Options:

A.

Create a pool of ENIs. Request license files from the vendor for the pool, and store the license files in Amazon S3. Create a bootstrap automation script to download a license file and attach the corresponding ENI to an Amazon EC2 instance.

B.

Create a pool of ENIs. Request license files from the vendor for the pool, and store the license files on an Amazon EC2 instance. Create an AMI from the instance and use this AMI for all future EC2 instances.

C.

Create a bootstrap automation script to request a new license file from the vendor. When the response is received, apply the license file to an Amazon EC2 instance.

D.

Edit the bootstrap automation script to read the database server IP address from the AWS Systems Manager Parameter Store, and inject the value into the local configuration files.

E.

Edit an Amazon EC2 instance to include the database server IP address in the configuration files and re-create the AMI to use for all future EC2 instances.
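
For options A and D taken together, the bootstrap script at instance launch could attach a pre-licensed ENI (preserving the MAC address the license is bound to), pull its license file from S3, and read the database IP address from Systems Manager Parameter Store. Every name and ID below is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")
ssm = boto3.client("ssm")

instance_id = "i-0123456789abcdef0"  # from instance metadata in practice
eni_id = "eni-0123456789abcdef0"     # picked from the licensed ENI pool

# Attach the pre-licensed ENI so the instance presents the MAC address
# that the vendor license is bound to.
ec2.attach_network_interface(
    NetworkInterfaceId=eni_id, InstanceId=instance_id, DeviceIndex=1
)

# Download the license file that matches the attached ENI.
s3.download_file("example-license-bucket", f"{eni_id}.lic", "/opt/app/license.lic")

# Read the database IP from Parameter Store and inject it into the
# local configuration files (injection step omitted).
db_ip = ssm.get_parameter(Name="/app/db-ip")["Parameter"]["Value"]
```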

Exam Code: SAP-C02
Exam Name: AWS Certified Solutions Architect - Professional
Last Update: Jan 31, 2026
Questions: 607