Oct 18, 2023

Skill Assessment - AWS Q60

Q60. What option is best for Auto Scaling your EC2 instances for predictable traffic patterns?

- scale based on a schedule

- manual scaling

- scale based on demand

- maintain current levels at all times

 

ChatGPT also says ..

The best option for auto-scaling your EC2 instances for predictable traffic patterns is "scale based on a schedule."

 

Scaling based on a schedule allows you to pre-define when and how your auto-scaling group should increase or decrease the number of instances. This is particularly useful when you have predictable traffic patterns, such as increased traffic during specific hours or days. With scheduled scaling, you can configure your scaling actions to align with your anticipated traffic fluctuations.

 

The other options mentioned may not be as effective for predictable traffic patterns:

 

- "Manual scaling" requires you to manually intervene to add or remove instances, which is not ideal for predictable, scheduled changes.

- "Scale based on demand" typically uses triggers like CPU utilization or other metrics to scale in response to real-time changes in demand, making it less suitable for predictable traffic patterns.

- "Maintain current levels at all times" implies keeping a static number of instances and is not suitable for handling changes in traffic, whether they are predictable or not.

 

In summary, "scale based on a schedule" is the most appropriate choice for managing EC2 instances when you expect predictable traffic patterns.
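As a concrete (and hedged) illustration, a scheduled scaling action can be created with the AWS CLI roughly as follows; the group name, schedule, and capacity values are placeholders, and the recurrence is a UTC cron expression.

# Scale out on weekday mornings ahead of the expected traffic peak
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-asg \
  --scheduled-action-name scale-out-weekday-mornings \
  --recurrence "0 8 * * 1-5" \
  --min-size 2 --max-size 10 --desired-capacity 6

# Scale back in on weekday evenings once traffic subsides
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-asg \
  --scheduled-action-name scale-in-weekday-evenings \
  --recurrence "0 20 * * 1-5" \
  --min-size 1 --max-size 10 --desired-capacity 2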

 

 

Q61. You are migrating an on-premise RabbitMQ cluster into AWS.

Which migration path should you choose for ease of both maintenance and deployment?

 

- Rewrite the parts of your application that use RabbitMQ to use SQS.

- Launch a RabbitMQ cluster with EC2 instances using a supported AMI.

- Rewrite the parts of your application that use RabbitMQ to use Kinesis.

- Rewrite the parts of your application that use RabbitMQ to use Amazon MQ.

 

Wikipedia's description of RabbitMQ:

RabbitMQ is open-source message-oriented middleware that uses the Advanced Message Queuing Protocol (AMQP).

The RabbitMQ server is written in Erlang and is built on the Open Telecom Platform framework for clustering and failover.

It was originally developed by Rabbit Technologies ... [passage omitted] ... after VMware absorbed Pivotal Software in 2019, RabbitMQ is now supported by VMware.

Ref. https://ja.wikipedia.org/wiki/RabbitMQ

 

ChatGPT also says ..

For ease of both maintenance and deployment when migrating an on-premise RabbitMQ cluster into AWS, you should choose the option: "Rewrite the parts of your application that use RabbitMQ to use Amazon MQ."

 

Amazon MQ is a managed message broker service on AWS that supports RabbitMQ as a native broker engine (alongside Apache ActiveMQ). Because the engine is RabbitMQ itself, the rewrite is typically limited to pointing your application at the new broker endpoints and credentials, so you can move from an on-premise RabbitMQ cluster to a managed RabbitMQ deployment with minimal code changes. Amazon MQ also handles much of the maintenance and deployment work for you, such as provisioning, patching, and failover, which minimizes operational overhead.

 

The other options have their own considerations:

 

1. "Rewrite the parts of your application that use RabbitMQ to use SQS" would require significant changes to your application code and may not be a direct replacement if you need the features and capabilities of RabbitMQ.

2. "Launch a RabbitMQ cluster with EC2 instances using a supported AMI" would mean managing the RabbitMQ cluster yourself, which can increase maintenance and operational complexity compared to a managed service like Amazon MQ.

3. "Rewrite the parts of your application that use RabbitMQ to use Kinesis" is a viable option if you have specific use cases that align with Kinesis, but it may not be a straightforward replacement for RabbitMQ.

 

In summary, Amazon MQ is designed to ease the migration of RabbitMQ workloads to AWS, making it a convenient choice for both maintenance and deployment.
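For illustration only, a managed RabbitMQ broker can be created on Amazon MQ roughly like this; the broker name, engine version, instance type, and credentials are placeholders, so check the engine versions Amazon MQ currently supports before running it.

# Launch a managed RabbitMQ broker on Amazon MQ (all values are placeholders)
aws mq create-broker \
  --broker-name my-rabbitmq-broker \
  --engine-type RABBITMQ \
  --engine-version 3.11.20 \
  --host-instance-type mq.m5.large \
  --deployment-mode CLUSTER_MULTI_AZ \
  --users Username=admin,Password=ReplaceWithASecurePassword1 \
  --no-publicly-accessible \
  --auto-minor-version-upgrade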

 

 

Q62. When creating a new RDS instance, what does the Multi-AZ option do?

 

- replicates backups of your database to S3 and makes them available across regions to prevent against any data loss

- creates a second passive database instance within the same region that will become the primary database during a failover

- creates a highly available database cluster that will host your database cluster in at least two regions

- creates another database instance in another region and keeps a hot standby active to failover to during regional failures

 

ChatGPT also says ..

The Multi-AZ (Availability Zone) option for Amazon RDS (Relational Database Service) creates a second passive database instance within the same region that will become the primary database during a failover.

 

It is designed to enhance the availability and fault tolerance of your RDS database by replicating your primary database to a standby database in a different Availability Zone within the same AWS region. This standby database is kept in sync with the primary, and in the event of a failure or maintenance event on the primary database, Amazon RDS will automatically promote the standby to become the new primary database. This process is known as a failover, and it is done to minimize downtime and ensure high availability.

 

The other options mentioned are not accurate descriptions of what the Multi-AZ option does:

 

1. Replicating backups to S3 is not the primary function of Multi-AZ, and it does not directly relate to the failover process.

2. Multi-AZ doesn't create a highly available database cluster across regions; it operates within a single region.

3. Creating another database instance in another region with a hot standby is not part of Multi-AZ; that describes cross-region replication and disaster recovery strategies, which are separate from Multi-AZ.
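As a hedged sketch, enabling Multi-AZ at creation time is a single flag on the CLI; the identifier, instance class, and credentials below are placeholders.

# Create a MySQL RDS instance with a synchronous standby in another AZ
aws rds create-db-instance \
  --db-instance-identifier my-database \
  --db-instance-class db.t3.medium \
  --engine mysql \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password ReplaceWithASecurePassword \
  --multi-az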

 

 

Q63. What is the best EC2 instance class for a server that continuously has a heavy CPU load?

 

- C5

- T2

- R5

- H1

 

ChatGPT also says ..

For a server that continuously has a heavy CPU load, the best EC2 instance class is the "C5" instance class.

 

The C5 instance class is compute optimized: it offers a high ratio of compute to memory and is built for applications that require high, sustained CPU performance. These instances are ideal for workloads such as data analytics, scientific modeling, and high-performance web servers, all of which need a consistently high level of CPU power.

 

The other instance classes mentioned have different characteristics:

 

- "T2" instances are burstable performance instances with a limited CPU baseline, making them suitable for workloads that have periodic bursts of activity but not ideal for continuous heavy CPU loads.

- "R5" instances are memory-optimized instances and may not provide the same level of sustained CPU performance as the "C5" class.

- "H1" instances are designed for high storage capacity and are not optimized for CPU-intensive workloads.

 

So, if your server experiences a heavy and continuous CPU load, the "C5" instance class is your best choice for performance and cost-effectiveness.
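For example, putting the workload on a C5 is just a matter of choosing a C5 instance type at launch; the AMI ID below is a placeholder.

# Launch a compute-optimized C5 instance (replace the AMI ID with a real one)
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type c5.2xlarge \
  --count 1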

 

 

Q64. Your application performance management (APM) system can read the status of your CloudWatch monitors and perform scripted actions.

When the CloudWatch metric StatusCheckFailed enters a failed state (a value of 1), you would like your APM to automatically repair the instance. Which script do you use?

  

A

aws ec2 stop-instances --instance-ids i-0b263919b6498b123

aws ec2 start-instances --instance-ids i-0b263919b6498b123

 

B

aws ec2 reboot-instances --instance-ids i-0b263919b6498b123

 

C

aws ec2 reboot-instances --instance-ids i-0b263919b6498b123

wait 30

aws ec2 start-instance --instance-ids i-0b263919b6498b123

 

D

aws ec2 reboot-instances --instance-ids i-0b263919b6498b123

aws ec2 start-instances --instance-ids i-0b263919b6498b123

 

ChatGPT has a different opinion ..

To automatically repair an EC2 instance when the CloudWatch metric StatusCheckFailed enters a failed state (a value of 1), you should use script option B:

 

aws ec2 reboot-instances --instance-ids i-0b263919b6498b123

 

This command will reboot the specified EC2 instance. Rebooting the instance can help resolve certain issues that may cause the StatusCheckFailed metric to indicate a failure.

 

Option A is not ideal because stopping and starting an instance may not be sufficient to resolve the issue causing the failed status check, and it may result in more downtime.

 

Option C is a combination of rebooting and starting the instance with an unnecessary wait, which makes it less efficient than option B. The script is also broken as written: wait 30 waits on a background job rather than pausing for 30 seconds (that would be sleep 30), and start-instance is not a valid CLI command (the command is start-instances).

 

Option D involves both rebooting and starting the instance, but it's not the most concise way to achieve the desired result.
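As an aside, this repair does not strictly require an external APM: a CloudWatch alarm can invoke the built-in EC2 reboot action directly. A hedged sketch, using the instance ID from the question and assuming the us-east-1 region:

# Alarm on StatusCheckFailed and trigger the built-in EC2 reboot action
aws cloudwatch put-metric-alarm \
  --alarm-name reboot-on-status-check-failure \
  --namespace AWS/EC2 \
  --metric-name StatusCheckFailed \
  --dimensions Name=InstanceId,Value=i-0b263919b6498b123 \
  --statistic Maximum \
  --period 60 \
  --evaluation-periods 2 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:automate:us-east-1:ec2:reboot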

 

 

Q65. What is wrong with the third incoming security group rule,

which allows all traffic from sg-269afc5e to go to an Ubuntu EC2 instance configured as a web server?

 

See image at https://user-images.githubusercontent.com/33999631/179728393-8a2636ea-04e7-4597-b0cc-8150e2bc91de.png

 

- All traffic on all ports is being denied into this instance, which overwrites the HTTP rule and makes it redundant.

- The instance was launched with the default security group, but there is no way for an administrator to SSH into the instance. Add another rule that allows for SSH access from a secured source, such as a single IP or a range of managed IP addresses.

- There is nothing wrong with this security group rule. Assuming that sg-269afc5e is applied to other resources that are properly secured, this rule allows all traffic to pass through that is also assigned security group sg-269afc5e.

- All traffic on all ports is allowed into this instance. This exposes the instance to all public internet traffic and overwrites the incoming HTTP rule.
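For reference, a rule like the one the third option describes (all traffic whose source is another security group, rather than the public internet) would be created roughly like this; the target group ID is a placeholder, while sg-269afc5e is the source group from the question.

# Allow all protocols and ports, but only from resources in sg-269afc5e
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol all \
  --source-group sg-269afc5e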

 

 

Q66. You have a VPC that has a public and private subnet.

There is a NAT gateway in the public subnet that allows instances in the private subnet to access the internet without having public exposure outside of the VPC. What should the routing tables be for the private subnet?

 

See image at https://github.com/Ebazhanov/linkedin-skill-assessments-quizzes/blob/main/aws/images/003.png

 

A

Destination 1: 10.0.0.0/16, Target 1: local;

Destination 2: 0.0.0.0/0, Target 2: nat-09b4832

 

B

Destination 1: 10.0.0.0/24, Target 1: local;

Destination 2: 0.0.0.0/0, Target 2: igw-b2ff47d6

 

C

Destination 1: 10.0.0.0/24, Target 1: subnet-1948ba2;

Destination 2: 0.0.0.0/0, Target 2: nat-09b4832

 

D

Destination 1: 10.0.0.0/16, Target 1: vpc-12bd09ac2;

Destination 2: 0.0.0.0/0, Target 2: igw-b2ff47d6
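For reference, the default route that sends a private subnet's internet-bound traffic through a NAT gateway is created roughly as follows; the route table ID is a placeholder, and the NAT gateway ID is the one that appears in the options. The local route for the VPC CIDR is added to every route table automatically.

# Send internet-bound traffic from the private subnet to the NAT gateway
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-09b4832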

 

 

Q67. To comply with auditing requirements of some compliance standards,

which AWS tool can be enabled to maintain an audit log of access and changes to your AWS infrastructure?

 

- CloudTrail

- CloudWatch

- AWS Audit and Compliance Tool

- GuardDuty

 

ChatGPT also says ..

To comply with auditing requirements and maintain an audit log of access and changes to your AWS infrastructure, you can enable AWS CloudTrail. AWS CloudTrail is the service specifically designed to track and record all API activities and changes made to your AWS resources. It provides a detailed history of events related to your AWS account, helping you meet auditing requirements and enhancing security by monitoring who has access to your resources and what actions they perform.

 

The other options mentioned serve different purposes:

 

- CloudWatch is used for monitoring and collecting operational data and metrics, but it doesn't focus on auditing and compliance log management.

- AWS Audit and Compliance Tool is not a specific AWS service; it may refer to the combination of services and practices used to achieve auditing and compliance.

- GuardDuty is a threat detection service that monitors for malicious activity within your AWS environment but does not primarily focus on audit logging.
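As a minimal sketch, enabling a multi-Region trail takes two CLI calls; the trail and bucket names are placeholders, and the S3 bucket needs a policy that permits CloudTrail to write to it.

# Create a trail that records API activity in all Regions, then start logging
aws cloudtrail create-trail \
  --name my-audit-trail \
  --s3-bucket-name my-cloudtrail-log-bucket \
  --is-multi-region-trail
aws cloudtrail start-logging --name my-audit-trail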

 

 

Q68. You have an application that generates long-running reports,

stores them in an S3 bucket, and then emails the user who requested the report with a link to download it. What is the best practice for storing the report data in S3?

 

- Create a public S3 bucket. When your application creates the report object in S3, generate two randomly generated long folder names and place the file within the deepest subfolder. Set the retention policy on the object to one hour and email this link to the user. The link will be active for one hour.

- Create a public S3 bucket. Use a hash of the user’s email address and the date and time the report was requested to generate a unique object name. Email this link to the user and have a scheduled task run within your application to remove objects that are older than seven days.

- Create a private S3 bucket. The link in the email should take the user to your application, where you can verify the active user session or force the user to log in. After verifying the user has rights to access this file, have the application retrieve the object from S3 and return it in the HTTP response. Delete the file from the S3 bucket after the request is completed.

- Create a private S3 bucket. The link in the email should take the user to your application, where you can verify the active user session or force the user to log in. Set the report object in S3 to public. Show the user a “Download” button in the browser that links to the public object.

 

ChatGPT also says ..

The best practice is the third option: create a private S3 bucket and have your application verify the user before retrieving and returning the report. Here's why this is the recommended approach:

 

Private S3 Bucket: Storing the report in a private S3 bucket ensures that only authorized users can access the content. This is crucial for security and privacy.

 

User Authentication: Requiring user authentication or session verification before granting access ensures that the right user is accessing the report and that unauthorized users cannot access it.

 

Dynamic Content Delivery: By serving the content through your application, you can dynamically control access and enforce permissions. This allows you to implement additional security measures or custom logic as needed.

 

Cleanup: Deleting the file from the S3 bucket after the request is completed is a good practice to manage storage costs and ensure that the data is not unnecessarily retained.

 

Options that involve public S3 buckets or short-lived retention policies may compromise security and privacy, and options that rely on hashing the user's email address for object names may not provide the same level of access control and flexibility as the recommended approach.
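As a hedged sketch of the retrieve-and-clean-up flow the recommended option describes (bucket name and object key are placeholders), the application-side steps map to these S3 calls, shown with the CLI for brevity:

# After verifying the user's session, fetch the report from the private bucket
aws s3api get-object \
  --bucket my-private-reports-bucket \
  --key reports/report-12345.pdf \
  /tmp/report-12345.pdf

# ... return the file in the HTTP response, then remove the object from S3
aws s3api delete-object \
  --bucket my-private-reports-bucket \
  --key reports/report-12345.pdf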

 

 

Q69. When sending a large volume of email through SES, what is the most important set of metrics to monitor?

 

- your complaint and bounce rates

- opens and clicks

- clicks and deliveries

- sending volume over the past 15 minutes and over one day to watch for billing spikes

 

ChatGPT also says ..

When sending a large volume of email through Amazon Simple Email Service (SES), the most important set of metrics to monitor includes:

 

Your complaint and bounce rates

 

Monitoring complaint and bounce rates is critical for maintaining the reputation of your email sender. High complaint rates can lead to your emails being marked as spam, and bounce rates indicate issues with the delivery of your emails. By keeping a close eye on these metrics, you can identify and address any issues that may affect your email deliverability and sender reputation.

 

While opens, clicks, and deliveries are important for assessing the engagement and effectiveness of your email campaigns, complaint and bounce rates are more crucial for ensuring that your emails are delivered and do not negatively impact your sender reputation. Monitoring sending volume is important but not as critical as tracking complaint and bounce rates for email deliverability.
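Both metrics are available directly from the CLI. As a small sketch, the raw numbers behind the bounce and complaint rates can be pulled like this:

# Sending limits and volume over the last 24 hours
aws ses get-send-quota

# Data points (15-minute intervals over the trailing two weeks) containing
# Bounces, Complaints, DeliveryAttempts, and Rejects
aws ses get-send-statistics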

 

 

Source (past quiz questions):

https://www.chase2learn.com/amazon-web-services-linkedin-assessment/

 
