
Sunday, January 20, 2019

Easy and Guaranteed AWS Certified DevOps Engineer Professional Dumps

QUESTION 6

You work for an insurance company and are responsible for the day-to-day operations of your company's online quote system used to provide insurance quotes to members of the public. Your company wants to use the application logs generated by the system to better understand customer behavior.
Industry regulations also require that you retain all application logs for the system indefinitely in order to investigate fraudulent claims in the future.
You have been tasked with designing a log management system with the following requirements:
- All log entries must be retained by the system, even during unplanned instance failure.
- The customer insight team requires immediate access to the logs from the past seven days.
- The fraud investigation team requires access to all historic logs, but will wait up to 24 hours before these logs are available.

How would you meet these requirements in a cost-effective manner? Choose 3 answers

A. Configure your application to write logs to the instance's ephemeral disk, because this storage is free and has good write performance. Create a script that moves the logs from the instance to Amazon S3 once an hour.
B. Write a script that is configured to be executed when the instance is stopped or terminated and that will upload any remaining logs on the instance to Amazon S3.
C. Create an Amazon S3 lifecycle configuration to move log files from Amazon S3 to Amazon Glacier after seven days.
D. Configure your application to write logs to the instance's default Amazon EBS boot volume, because this storage already exists. Create a script that moves the logs from the instance to Amazon S3 once an hour.
E. Configure your application to write logs to a separate Amazon EBS volume with the "delete on termination" field set to false. Create a script that moves the logs from the instance to Amazon S3 once an hour.
F. Create a housekeeping script that runs on a T2 micro instance managed by an Auto Scaling group for high availability. The script uses the AWS API to identify any unattached Amazon EBS volumes containing log files.
   Your housekeeping script will mount the Amazon EBS volume, upload all logs to Amazon S3, and then delete the volume.

Answer: CEF
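
For reference, here is a minimal sketch of what the option F housekeeping script could look like in Python with boto3. The region, bucket name, tag key, device name, and mount point are illustrative assumptions, not details given in the question:

```python
import os
import subprocess
import urllib.request
import boto3

# Assumed names for illustration only.
REGION = "us-east-1"
BUCKET = "example-log-archive"
DEVICE = "/dev/xvdf"
MOUNT_POINT = "/mnt/logvol"

ec2 = boto3.resource("ec2", region_name=REGION)
ec2_client = boto3.client("ec2", region_name=REGION)
s3 = boto3.client("s3", region_name=REGION)

# ID of the instance this script runs on (IMDSv1 shown for brevity).
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id"
).read().decode()

# Identify unattached EBS volumes tagged as log volumes (hypothetical tag).
volumes = ec2.volumes.filter(Filters=[
    {"Name": "status", "Values": ["available"]},
    {"Name": "tag-key", "Values": ["log-volume"]},
])

for volume in volumes:
    # Attach and mount the orphaned volume (mount requires root).
    volume.attach_to_instance(InstanceId=instance_id, Device=DEVICE)
    ec2_client.get_waiter("volume_in_use").wait(VolumeIds=[volume.id])
    os.makedirs(MOUNT_POINT, exist_ok=True)
    subprocess.check_call(["mount", DEVICE, MOUNT_POINT])

    # Upload every log file to Amazon S3, keyed by volume ID.
    for root, _dirs, files in os.walk(MOUNT_POINT):
        for name in files:
            path = os.path.join(root, name)
            key = f"{volume.id}/{os.path.relpath(path, MOUNT_POINT)}"
            s3.upload_file(path, BUCKET, key)

    # Unmount, detach, and delete the volume once its logs are in S3.
    subprocess.check_call(["umount", MOUNT_POINT])
    volume.detach_from_instance(InstanceId=instance_id, Device=DEVICE)
    ec2_client.get_waiter("volume_available").wait(VolumeIds=[volume.id])
    volume.delete()
```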

QUESTION 7

You have an application running on Amazon EC2 in an Auto Scaling group. Instances are being bootstrapped dynamically, and the bootstrapping takes over 15 minutes to complete. You find that instances are reported by Auto Scaling as being In Service before bootstrapping has completed.
You are receiving application alarms related to new instances before they have completed bootstrapping, which is causing confusion.
You find the cause: your application monitoring tool is polling the Auto Scaling Service API for instances that are In Service, and creating alarms for new previously unknown instances.
Which of the following will ensure that new instances are not added to your application monitoring tool before bootstrapping is completed?

A. Create an Auto Scaling group lifecycle hook to hold the instance in a pending:wait state until your bootstrapping is complete.
   Once bootstrapping is complete, notify Auto Scaling to complete the lifecycle hook and move the instance into a pending:complete state.
B. Use the default Amazon CloudWatch application metrics to monitor your application's health. Configure an Amazon SNS topic to send these CloudWatch alarms to the correct recipients.
C. Tag all instances on launch to identify that they are in a pending state.
   Change your application monitoring tool to look for this tag before adding new instances, and then use the Amazon API to set the instance state to 'pending' until bootstrapping is complete.
D. Increase the desired number of instances in your Auto Scaling group configuration to reduce the time it takes to bootstrap future instances.

Answer: A
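
A minimal boto3 sketch of the lifecycle-hook flow in option A follows. The group name, hook name, heartbeat timeout, and instance ID are illustrative assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# 1. Register a launch lifecycle hook so new instances pause in
#    Pending:Wait instead of going straight to InService.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="wait-for-bootstrap",     # assumed hook name
    AutoScalingGroupName="web-asg",             # assumed group name
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=1800,   # allow 30 minutes for bootstrapping
    DefaultResult="ABANDON", # discard the instance if it never signals
)

# 2. At the end of its bootstrap script, each instance signals Auto
#    Scaling that it may continue on to InService.
autoscaling.complete_lifecycle_action(
    LifecycleHookName="wait-for-bootstrap",
    AutoScalingGroupName="web-asg",
    LifecycleActionResult="CONTINUE",
    InstanceId="i-0123456789abcdef0",  # the bootstrapping instance's own ID
)
```

Because the monitoring tool only polls for InService instances, holding new instances in the wait state keeps them out of the tool until bootstrapping finishes.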

QUESTION 8

You have been given a business requirement to retain log files for your application for 10 years.
You need to regularly retrieve the most recent logs for troubleshooting.
Your logging system must be cost-effective, given the large volume of logs.
What technique should you use to meet these requirements?

A. Store your logs in Amazon CloudWatch Logs.
B. Store your logs in Amazon Glacier.
C. Store your logs in Amazon S3, and use lifecycle policies to archive to Amazon Glacier.
D. Store your logs in HDFS on an Amazon EMR cluster.
E. Store your logs on Amazon EBS, and use Amazon EBS snapshots to archive them.

Answer: C
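
A minimal boto3 sketch of answer C is shown below. The bucket name, prefix, and the 30-day transition window are assumptions; the question only fixes the 10-year retention:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-app-logs",  # assumed bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},  # assumed key prefix
                # Recent logs stay in S3 Standard for fast troubleshooting,
                # then move to Glacier for cheap long-term storage.
                "Transitions": [
                    {"Days": 30, "StorageClass": "GLACIER"},
                ],
                # 10-year retention, approximated as 3650 days.
                "Expiration": {"Days": 3650},
            }
        ]
    },
)
```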

QUESTION 9

You work for a startup that has developed a new photo-sharing application for mobile devices. Over recent months your application has increased in popularity; this has resulted in a decrease in the performance of the application due to the increased load.
Your application has a two-tier architecture that is composed of an Auto Scaling PHP application tier and a MySQL RDS instance initially deployed with AWS CloudFormation.
Your Auto Scaling group has a min value of 4 and a max value of 8. The desired capacity is now at 8 because of the high CPU utilization of the instances.
After some analysis, you are confident that the performance issues stem from a constraint in CPU capacity, although memory utilization remains low.
You therefore decide to move from the general-purpose M3 instances to the compute-optimized C3 instances.

How would you deploy this change while minimizing any interruption to your end users?

A. Sign into the AWS Management Console, copy the old launch configuration, and create a new launch configuration that specifies the C3 instances.
   Update the Auto Scaling group with the new launch configuration.
   Auto Scaling will then update the instance type of all running instances.
B. Sign into the AWS Management Console, and update the existing launch configuration with the new C3 instance type.
   Add an UpdatePolicy attribute to your Auto Scaling group that specifies AutoScalingRollingUpdate.
C. Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type.
   Run a stack update with the new template.
   Auto Scaling will then update the instances with the new instance type.
D. Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type.
   Also add an UpdatePolicy attribute to your Auto Scaling group that specifies AutoScalingRollingUpdate.
   Run a stack update with the new template.

Answer: D
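
A sketch of the option D change, expressed as a trimmed CloudFormation template applied with boto3, follows. The stack name, resource names, AMI ID, and rolling-update settings are illustrative assumptions:

```python
import boto3

# Trimmed template: InstanceType changed to a C3 type, and an
# UpdatePolicy added so CloudFormation replaces instances in batches
# while keeping capacity in service.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  LaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder AMI
      InstanceType: c3.large           # was an M3 type
  WebServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MinInstancesInService: 4       # keep serving users during the update
        MaxBatchSize: 2
        PauseTime: PT5M
    Properties:
      LaunchConfigurationName: !Ref LaunchConfig
      MinSize: '4'
      MaxSize: '8'
      AvailabilityZones: !GetAZs ''
"""

cloudformation = boto3.client("cloudformation", region_name="us-east-1")
cloudformation.update_stack(
    StackName="quote-system",  # assumed stack name
    TemplateBody=TEMPLATE,
)
```

Without the UpdatePolicy (option C), updating the launch configuration alone only affects instances launched in the future; running instances keep the old type.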

QUESTION 10

You've been tasked with implementing an automated data backup solution for your application servers that run on Amazon EC2 with Amazon EBS volumes. You want to use a distributed data store for your backups to avoid single points of failure and to increase the durability of the data. Daily backups should be retained for 30 days so that you can restore data within an hour. How can you implement this through a script that a scheduling daemon runs daily on the application servers?

A. Write the script to call the ec2-create-volume API, tag the Amazon EBS volume with the current date-time group, and copy backup data to a second Amazon EBS volume.
   Use the ec2-describe-volumes API to enumerate existing backup volumes.
   Call the ec2-delete-volume API to prune backup volumes that are tagged with a date-time group older than 30 days.
B. Write the script to call the Amazon Glacier upload archive API, and tag the backup archive with the current date-time group.
   Use the list vaults API to enumerate existing backup archives. Call the delete vault API to prune backup archives that are tagged with a date-time group older than 30 days.
C. Write the script to call the ec2-create-snapshot API, and tag the Amazon EBS snapshot with the current date-time group.
   Use the ec2-describe-snapshots API to enumerate existing Amazon EBS snapshots.
   Call the ec2-delete-snapshot API to prune Amazon EBS snapshots that are tagged with a date-time group older than 30 days.
D. Write the script to call the ec2-create-volume API, tag the Amazon EBS volume with the current date-time group, and use the ec2-copy-snapshot API to back up data to the new Amazon EBS volume. Use the ec2-describe-snapshots API to enumerate existing backup volumes.
   Call the ec2-delete-snapshot API to prune backup Amazon EBS volumes that are tagged with a date-time group older than 30 days.

Answer: C
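
A minimal sketch of answer C using boto3 (the modern successor to the ec2-* command line tools named in the options). The volume ID and tag key are illustrative assumptions:

```python
import datetime
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
RETENTION_DAYS = 30
now = datetime.datetime.now(datetime.timezone.utc)

# 1. Snapshot the data volume and tag it with the current date-time group.
ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # assumed data volume
    Description="daily backup",
    TagSpecifications=[{
        "ResourceType": "snapshot",
        "Tags": [{"Key": "backup-dtg", "Value": now.strftime("%Y%m%d%H%M")}],
    }],
)

# 2. Enumerate existing backup snapshots by tag.
pages = ec2.get_paginator("describe_snapshots").paginate(
    OwnerIds=["self"],
    Filters=[{"Name": "tag-key", "Values": ["backup-dtg"]}],
)

# 3. Prune snapshots older than the 30-day retention window.
for page in pages:
    for snap in page["Snapshots"]:
        if (now - snap["StartTime"]).days > RETENTION_DAYS:
            ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
```

Snapshots are stored in Amazon S3 behind the scenes, which provides the distributed, durable store the question asks for, and restoring a volume from a snapshot comfortably fits the one-hour recovery window.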
