We have a team of experts curating the real SAP-C02 questions and answers for end users. We are always working to update the latest SAP-C02 questions and provide correct SAP-C02 answers to all of our users. We provide free updates for one year from the date of purchase. You can benefit from the updated SAP-C02 preparation material and pass the SAP-C02 exam on your first attempt.

Amazon SAP-C02 Exam Syllabus Topics:

Topic 1
  • Determine a strategy to improve performance
  • Continuous Improvement for Existing Solutions
Topic 2
  • Determine opportunities for modernization and enhancements
  • Select existing workloads and processes for potential migration
Topic 3
  • Determine a cost optimization strategy to meet solution goals and objectives
  • Determine security controls based on requirements
Topic 4
  • Determine cost optimization and visibility strategies
  • Architect network connectivity strategies
Topic 5
  • Design a solution to meet performance objectives
  • Design a deployment strategy to meet business requirements
Topic 6
  • Design a multi-account AWS environment
  • Determine a new architecture for existing workloads
Topic 7
  • Determine the optimal migration approach for existing workloads
  • Accelerate Workload Migration and Modernization
Topic 8
  • Determine a strategy to improve overall operational excellence
  • Identify opportunities for cost optimizations
Topic 9
  • Determine a strategy to improve reliability
  • Determine a strategy to improve security

>> Latest SAP-C02 Cram Materials <<

Splendid SAP-C02 Exam Materials: AWS Certified Solutions Architect - Professional (SAP-C02) Presents You a Brilliant Training Dump - PrepAwayTest

Everyone has their own way of studying when they start with our SAP-C02 exam questions. To help each user find a learning method that suits them, we provide a targeted learning version and study plan. There are three versions of the SAP-C02 Practice Engine to choose from: PDF, Software, and APP online. Furthermore, free demos of the SAP-C02 learning guide are available on the website to download before you make a purchase.

Amazon AWS Certified Solutions Architect - Professional (SAP-C02) Sample Questions (Q255-Q260):

NEW QUESTION # 255
During an audit, a security team discovered that a development team was putting IAM user secret access keys in their code and then committing it to an AWS CodeCommit repository. The security team wants to automatically find and remediate instances of this security vulnerability. Which solution will ensure that the credentials are appropriately secured automatically?

  • A. Configure Amazon Macie to scan for credentials in CodeCommit repositories. If credentials are found, trigger an AWS Lambda function to disable the credentials and notify the user.
  • B. Use a scheduled AWS Lambda function to download and scan the application code from CodeCommit. If credentials are found, generate new credentials and store them in AWS KMS.
  • C. Run a script nightly using AWS Systems Manager Run Command to search for credentials on the development instances. If found, use AWS Secrets Manager to rotate the credentials.
  • D. Configure a CodeCommit trigger to invoke an AWS Lambda function to scan new code submissions for credentials. If credentials are found, disable them in AWS IAM and notify the user.

Answer: D

Explanation:
Amazon Macie scans data stored in Amazon S3; it cannot be pointed at a CodeCommit repository. CodeCommit may use S3 (and DynamoDB) on the back end, but that storage is not exposed as buckets you can see or scan, which is why AWS publishes a pattern for copying a repository into S3 just to back it up: https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-event-driven-backups-from-codecommit-to-amazon-s3-using-codebuild-and-cloudwatch-events.html. A CodeCommit trigger that invokes an AWS Lambda function on each push can scan new commits for credentials, deactivate any exposed keys in IAM, and notify the user, so option D is the fully automated solution.
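As an illustration of the detection step such a Lambda function would perform, here is a minimal sketch. The regex patterns are heuristic assumptions (not AWS's reference implementation), and the boto3 calls for fetching the commit diff and deactivating the key are only mentioned in comments so the sketch stays self-contained:

```python
import re

# Heuristic patterns: IAM access key IDs have a fixed AKIA prefix; secret keys
# are 40-character base64-like strings, so the second pattern will produce
# false positives and is only a starting point.
ACCESS_KEY_RE = re.compile(r"\b(AKIA[0-9A-Z]{16})\b")
SECRET_KEY_RE = re.compile(r"(?<![A-Za-z0-9/+=])([A-Za-z0-9/+=]{40})(?![A-Za-z0-9/+=])")

def find_aws_credentials(text):
    """Return any substrings of *text* that look like IAM access keys or secrets."""
    return ACCESS_KEY_RE.findall(text) + SECRET_KEY_RE.findall(text)

# In the real Lambda, a CodeCommit trigger supplies the commit ID; the handler
# would fetch changed blobs with boto3's codecommit get_differences/get_blob,
# run find_aws_credentials over them, and on a hit call
# iam.update_access_key(UserName=..., AccessKeyId=..., Status="Inactive")
# and publish an SNS notification to the user.
```

The values asserted below are AWS's documented example credentials, not real keys.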


NEW QUESTION # 256
A company has migrated an application from on premises to AWS. The application frontend is a static website that runs on two Amazon EC2 instances behind an Application Load Balancer (ALB). The application backend is a Python application that runs on three EC2 instances behind another ALB. The EC2 instances are large, general purpose On-Demand Instances that were sized to meet the on-premises specifications for peak usage of the application.
The application averages hundreds of thousands of requests each month. However, the application is used mainly during lunchtime and receives minimal traffic during the rest of the day.
A solutions architect needs to optimize the infrastructure cost of the application without negatively affecting the application availability.
Which combination of steps will meet these requirements? (Choose two.)

  • A. Change all the EC2 instances to compute optimized instances that have the same number of cores as the existing EC2 instances.
  • B. Move the application frontend to a static website that is hosted on Amazon S3.
  • C. Deploy the backend Python application to general purpose burstable EC2 instances that have the same number of cores as the existing EC2 instances.
  • D. Deploy the application frontend by using AWS Elastic Beanstalk. Use the same instance type for the nodes.
  • E. Change all the backend EC2 instances to Spot Instances.

Answer: B,E

Explanation:
Moving the application frontend to a static website hosted on Amazon S3 saves cost, because S3 static hosting is far cheaper than running EC2 instances behind an ALB.
Using Spot Instances for the backend EC2 instances also saves cost, because they are significantly cheaper than On-Demand Instances. This suits the application because traffic is minimal outside lunchtime, so occasional Spot interruptions are unlikely to noticeably affect availability.
Reference:
Amazon S3 pricing: https://aws.amazon.com/s3/pricing/
Amazon EC2 Spot Instances documentation: https://aws.amazon.com/ec2/spot/
AWS Elastic Beanstalk documentation: https://aws.amazon.com/elasticbeanstalk/
Amazon EC2 pricing: https://aws.amazon.com/ec2/pricing/
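To make the cost reasoning concrete, here is a back-of-the-envelope comparison. All prices are hypothetical placeholders, not current AWS rates; real prices vary by Region and instance type (see the pricing pages above):

```python
# Back-of-the-envelope monthly cost comparison for this question's scenario.
# All rates below are assumed for illustration only.
HOURS_PER_MONTH = 730
ON_DEMAND_PER_HOUR = 0.096   # assumed On-Demand rate for a large instance
SPOT_PER_HOUR = 0.035        # assumed Spot rate (roughly a 60-70% discount)
S3_STANDARD_PER_GB = 0.023   # assumed S3 Standard rate per GB-month

def monthly_cost_current():
    # 2 frontend + 3 backend On-Demand instances, running around the clock
    return 5 * ON_DEMAND_PER_HOUR * HOURS_PER_MONTH

def monthly_cost_optimized(site_gb=1):
    # Frontend as an S3 static website, backend on 3 Spot Instances
    return site_gb * S3_STANDARD_PER_GB + 3 * SPOT_PER_HOUR * HOURS_PER_MONTH
```

Even with these rough numbers, serving the static frontend from S3 and moving the backend to Spot cuts the monthly bill to a fraction of the five always-on On-Demand instances.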


NEW QUESTION # 257
A retail company is operating its ecommerce application on AWS. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses an Amazon RDS DB instance as the database backend. Amazon CloudFront is configured with one origin that points to the ALB. Static content is cached. Amazon Route 53 is used to host all public zones.
After an update of the application, the ALB occasionally returns a 502 status code (Bad Gateway) error. The root cause is malformed HTTP headers that are returned to the ALB. The webpage returns successfully when a solutions architect reloads the webpage immediately after the error occurs.
While the company is working on the problem, the solutions architect needs to provide a custom error page instead of the standard ALB error page to visitors.
Which combination of steps will meet this requirement with the LEAST amount of operational overhead? (Choose two.)

  • A. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Target.FailedHealthChecks is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server.
  • B. Add a custom error response by configuring a CloudFront custom error page. Modify DNS records to point to a publicly accessible web page.
  • C. Create an Amazon S3 bucket. Configure the S3 bucket to host a static webpage. Upload the custom error pages to Amazon S3.
  • D. Modify the existing Amazon Route 53 records by adding health checks. Configure a fallback target if the health check fails. Modify DNS records to point to a publicly accessible webpage.
  • E. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Elb.InternalError is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server.

Answer: B,C

Explanation:
"Save your custom error pages in a location that is accessible to CloudFront. We recommend that you store them in an Amazon S3 bucket, and that you don't store them in the same place as the rest of your website or application's content. If you store the custom error pages on the same origin as your website or application, and the origin starts to return 5xx errors, CloudFront can't get the custom error pages because the origin server is unavailable."
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GeneratingCustomErrorResponses.htm
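The custom error pages hosted in S3 are wired up through the distribution's CustomErrorResponses block. This sketch builds that structure in the shape boto3's cloudfront update_distribution expects inside DistributionConfig; the page paths and caching TTL are illustrative assumptions:

```python
# Build the CustomErrorResponses block of a CloudFront DistributionConfig,
# mapping each 5xx error to a custom page served from the S3 error-page origin.
# Paths and the TTL value are assumptions for illustration.
def custom_error_responses(error_codes=(502, 503, 504)):
    return {
        "Quantity": len(error_codes),
        "Items": [
            {
                "ErrorCode": code,
                "ResponsePagePath": f"/errors/{code}.html",  # object in the S3 bucket
                "ResponseCode": str(code),   # keep the original status code
                "ErrorCachingMinTTL": 10,    # short TTL so recovery shows quickly
            }
            for code in error_codes
        ],
    }
```

In practice you would fetch the current DistributionConfig with get_distribution_config, splice this block in, and pass it back to update_distribution with the returned ETag.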


NEW QUESTION # 258
A company is migrating applications from on premises to the AWS Cloud. These applications power the company's internal web forms. These web forms collect data for specific events several times each quarter.
The web forms use simple SQL statements to save the data to a local relational database.
Data collection occurs for each event, and the on-premises servers are idle most of the time. The company needs to minimize the amount of idle infrastructure that supports the web forms.
Which solution will meet these requirements?

  • A. Create Docker images for each server of the legacy web form applications. Create an Amazon Elastic Container Service (Amazon ECS) cluster on AWS Fargate. Place an Application Load Balancer in front of the ECS cluster. Use Fargate task storage to store the web form data.
  • B. Provision an Amazon Aurora Serverless cluster. Build multiple schemas for each web form's data storage. Use Amazon API Gateway and an AWS Lambda function to recreate the data input forms. Use Amazon Route 53 to point the DNS names of the web forms to their corresponding API Gateway endpoint.
  • C. Create one Amazon DynamoDB table to store data for all the data input. Use the application form name as the table key to distinguish data items. Create an Amazon Kinesis data stream to receive the data input and store the input in DynamoDB. Use Amazon Route 53 to point the DNS names of the web forms to the Kinesis data stream's endpoint.
  • D. Use Amazon EC2 Image Builder to create AMIs for the legacy servers. Use the AMIs to provision EC2 instances to recreate the applications in the AWS Cloud. Place an Application Load Balancer (ALB) in front of the EC2 instances. Use Amazon Route 53 to point the DNS names of the web forms to the ALB.

Answer: B

Explanation:
Provision an Amazon Aurora Serverless cluster. Build multiple schemas for each web forms data storage. Use Amazon API Gateway and an AWS Lambda function to recreate the data input forms. Use Amazon Route 53 to point the DNS names of the web forms to their corresponding API Gateway endpoint.
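A sketch of the Lambda side of that architecture: the function receives a form submission from API Gateway and saves it to the Aurora Serverless cluster through the RDS Data API. The table name, field names, and environment variable names here are hypothetical; only the rds-data execute_statement payload shape is real:

```python
# Sketch of a Lambda handler that persists a web-form submission via the
# RDS Data API (boto3 "rds-data" execute_statement). Table and field names
# are hypothetical; validate form_name against an allow-list in real code,
# since identifiers cannot be passed as bound parameters.
def build_statement(form_name, fields):
    """Build the sql/parameters payload for rds-data execute_statement."""
    columns = ", ".join(fields)
    placeholders = ", ".join(f":{name}" for name in fields)
    sql = f"INSERT INTO {form_name} ({columns}) VALUES ({placeholders})"
    parameters = [
        {"name": name, "value": {"stringValue": str(value)}}
        for name, value in fields.items()
    ]
    return sql, parameters

def handler(event, context):
    sql, parameters = build_statement(event["form"], event["fields"])
    # In the deployed function you would then call:
    #   boto3.client("rds-data").execute_statement(
    #       resourceArn=os.environ["CLUSTER_ARN"],
    #       secretArn=os.environ["SECRET_ARN"],
    #       database=os.environ["DB_NAME"],
    #       sql=sql, parameters=parameters)
    return {"sql": sql, "parameterCount": len(parameters)}
```

Because Aurora Serverless scales to zero between events and Lambda is billed per invocation, this keeps idle infrastructure at a minimum, which is exactly the requirement.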


NEW QUESTION # 259
A company is using Amazon OpenSearch Service to analyze data. The company loads data into an OpenSearch Service cluster with 10 data nodes from an Amazon S3 bucket that uses S3 Standard storage. The data resides in the cluster for 1 month for read-only analysis. After 1 month, the company deletes the index that contains the data from the cluster. For compliance purposes, the company must retain a copy of all input data.
The company is concerned about ongoing costs and asks a solutions architect to recommend a new solution.
Which solution will meet these requirements MOST cost-effectively?

  • A. Reduce the number of data nodes in the cluster to 2. Add UltraWarm nodes to handle the expected capacity. Configure the indexes to transition to UltraWarm when OpenSearch Service ingests the data. Transition the input data to S3 Glacier Deep Archive after 1 month by using an S3 Lifecycle policy.
  • B. Reduce the number of data nodes in the cluster to 2. Add instance-backed data nodes to handle the expected capacity. Transition the input data from S3 Standard to S3 Glacier Deep Archive when the company loads the data into the cluster.
  • C. Reduce the number of data nodes in the cluster to 2. Add UltraWarm nodes to handle the expected capacity. Configure the indexes to transition to UltraWarm when OpenSearch Service ingests the data. Add cold storage nodes to the cluster. Transition the indexes from UltraWarm to cold storage. Delete the input data from the S3 bucket after 1 month by using an S3 Lifecycle policy.
  • D. Replace all the data nodes with UltraWarm nodes to handle the expected capacity. Transition the input data from S3 Standard to S3 Glacier Deep Archive when the company loads the data into the cluster.

Answer: A

Explanation:
By reducing the number of data nodes in the cluster to 2 and adding UltraWarm nodes to handle the expected capacity, the company can reduce the cost of running the cluster. Additionally, configuring the indexes to transition to UltraWarm when OpenSearch Service ingests the data will ensure that the data is stored in the most cost-effective manner. Finally, transitioning the input data to S3 Glacier Deep Archive after 1 month by using an S3 Lifecycle policy will ensure that the data is retained for compliance purposes, while also reducing the ongoing costs.
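The S3 Lifecycle part of the answer comes down to a single rule. The sketch below shows it in the shape boto3's put_bucket_lifecycle_configuration expects; the rule ID and key prefix are assumptions:

```python
# S3 Lifecycle rule transitioning input objects to Glacier Deep Archive after
# 30 days, matching "after 1 month" in the answer. Rule ID and prefix are
# placeholders for illustration.
def deep_archive_rule(days=30, prefix=""):
    return {
        "ID": "input-data-to-deep-archive",
        "Filter": {"Prefix": prefix},   # empty prefix applies to the whole bucket
        "Status": "Enabled",
        "Transitions": [
            {"Days": days, "StorageClass": "DEEP_ARCHIVE"}
        ],
    }

# Applied with:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="input-data-bucket",
#       LifecycleConfiguration={"Rules": [deep_archive_rule()]})
```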


NEW QUESTION # 260
......

Earning an Amazon certification is a good way to enter the IT field. But you may find the real test questions difficult and professional, and you may have no time to prepare for the SAP-C02 test. That is where our latest dumps torrent and training materials help you achieve a high passing score on the SAP-C02 exam at your first attempt.

Test SAP-C02 Score Report: https://www.prepawaytest.com/Amazon/SAP-C02-practice-exam-dumps.html
