Oct 20, 2023

Skill Assessment - AWS Q70

Q70. You are going to host an application that uses a MySQL database. Which database should you select if you don't want to manage scaling or database administration tasks?

 

- Launch an AMI image from the marketplace containing a preconfigured MySQL server.

- Aurora

- RDS for MySQL

- Redshift

 

ChatGPT has a different opinion ..

If you want to host an application that uses a MySQL database and you don't want to manage scaling or database administration tasks, the best choice among the options you provided is Amazon RDS (Relational Database Service) for MySQL.

 

Amazon RDS is a managed database service that takes care of many administrative tasks for you, such as database backups, software patching, monitoring, and automatic scaling. With Amazon RDS for MySQL, you can focus on your application's development and not worry about the day-to-day management of the database infrastructure. It provides features like automatic backups, high availability, and the ability to easily scale your database resources when needed. This makes it a suitable choice for applications that require a MySQL database without the need for manual management tasks.

 

The other options mentioned in your list are not specifically designed for MySQL databases and may not provide the same level of automated management and scalability:

- Launching an AMI image from the marketplace would require you to manage the database yourself.

- Aurora is a highly scalable and available database service, but it's a different database engine (compatible with MySQL). It offers benefits in terms of performance and scalability but might be more than what you need if you don't want to manage scaling and administration tasks.

- Redshift is a data warehousing service designed for analytics workloads and is not suitable for hosting a traditional MySQL database.

So, for a MySQL database with minimal management tasks, Amazon RDS for MySQL is the recommended choice.
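As a rough illustration of how little administration this takes, a managed MySQL instance can be provisioned with a single CLI call. This is only a sketch with hypothetical identifiers and a placeholder password, assuming the AWS CLI is installed and credentials are configured:

```shell
# Sketch: provision a managed MySQL instance on RDS.
# The identifier and credentials below are hypothetical.
aws rds create-db-instance \
    --db-instance-identifier my-app-db \
    --engine mysql \
    --db-instance-class db.t3.micro \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password 'change-me-example' \
    --backup-retention-period 7   # automated backups, kept for 7 days
```

From there, RDS handles backups, patching, and monitoring; read replicas and storage scaling can be enabled without reinstalling anything.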

 

 

Q71. A form in a web application is sending sign-up data to "http://example.com/signup/new?source=web", and this data needs to be handled by an ECS service behind an Application Load Balancer (ALB). Which ALB rule will route this request?

 

See Image source at https://raw.githubusercontent.com/Ebazhanov/linkedin-skill-assessments-quizzes/main/aws/images/004.png

 

A

IF (all match)

    Path is /signup*

    Query string is signup:new

Then

    Forward to ecs-cluse-service   (note: the "cluse" typo is in the original quiz)

 

B

IF (all match)

    Path is /sign/new/&

    Query request method is POST

Then

    Forward to ecs-cluse-service

 

C ...one more rule with POST

 

D ...the only rule with GET

 

ChatGPT has a different opinion ..

Rule B specifies a different path and checks for a POST request method, so it may not be a correct match for the scenario described.

 

Without seeing the image, I can't make a precise determination, but based on the descriptions alone, Rule A appears to be the one that matches the URL "http://example.com/signup/new?source=web" with the required path and query string. Rule B seems to have a different path and is specifically looking for POST requests.

 

Please verify the actual ALB rule settings in your AWS environment to ensure the correct routing of requests.

 

Option C and Option D are more generic descriptions rather than specific ALB rules. Based on the information provided:

 

Option C ("One more with POST") is not a complete ALB rule description and lacks specific conditions for routing.

 

Option D ("Only one with Get") also lacks specific ALB rule conditions.

 

Given this, Option A from the original question appears to be the most relevant ALB rule based on the given rule description. It specifies conditions for matching the path and query string, making it the best choice for the scenario described in the question.
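For reference, an ALB listener rule matching the path and query string of the URL in the question could be created with the AWS CLI roughly as below. Both ARNs and the priority are hypothetical placeholders:

```shell
# Sketch: an ALB listener rule forwarding /signup* requests with
# query string source=web to the ECS service's target group.
# The listener and target group ARNs are hypothetical.
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc123/def456 \
    --priority 10 \
    --conditions '[
      {"Field": "path-pattern", "PathPatternConfig": {"Values": ["/signup*"]}},
      {"Field": "query-string", "QueryStringConfig": {"KeyValuePairs": [{"Key": "source", "Value": "web"}]}}
    ]' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/ecs-service/0123456789abcdef
```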

 

If you have specific ALB rule descriptions or configurations you'd like to discuss, please provide them, and I'll be happy to assist further.

 

 

Q72. Which AWS service can host the web application server for a WordPress site?

 

- S3

- Elastic Beanstalk

- ElastiCache

- CloudFront

 

ChatGPT also describes the details ..

 

To host the web application server for a WordPress site, you typically need a web server environment along with a database. Among the options you've listed:

 

1. S3 (Amazon Simple Storage Service): S3 is primarily an object storage service, not a hosting service for web applications. While you can host static website content (e.g., HTML, CSS, images) on S3, it doesn't provide the server-side capabilities required for a WordPress site.

 

2. Elastic Beanstalk: Elastic Beanstalk is a Platform-as-a-Service (PaaS) that can host web applications, including WordPress. It simplifies the deployment and management of web applications. You can create an Elastic Beanstalk environment for your WordPress site, and it will handle the underlying infrastructure for you.

 

3. ElastiCache: ElastiCache is a managed caching service, not a hosting service for web applications. While caching can be beneficial for performance, it's not the primary service for hosting a WordPress site.

 

4. CloudFront: Amazon CloudFront is a Content Delivery Network (CDN) service that can be used to distribute and accelerate the delivery of static assets (e.g., images, CSS, JavaScript) for a WordPress site. However, it does not host the web application server itself. You would use CloudFront in conjunction with other services like Elastic Beanstalk or EC2 for the web application server.

 

For hosting a WordPress site's web application server, Elastic Beanstalk is the most suitable option among those listed, as it provides a platform for hosting web applications, simplifying deployment, scaling, and management.
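As a sketch of how that might look with the Elastic Beanstalk CLI (the application and environment names are hypothetical; this assumes the EB CLI is installed and the WordPress source is in the current directory):

```shell
# Sketch: initialize and launch a PHP environment for WordPress.
# "wordpress-site" and "wordpress-env" are hypothetical names.
eb init wordpress-site --platform php --region us-east-1
eb create wordpress-env --single   # single instance, no load balancer
```

A production setup would typically use a load-balanced environment instead of --single, and an external RDS database for the WordPress data.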

 

 

Q73. What does the following AWS CLI create-service command for ECS do?

 

aws ecs create-service \
    --cluster production \
    --service-name rest-api \
    --task-definition rest-api:1 \
    --desired-count 2 \
    --launch-type "FARGATE" \
    --network-configuration \
    "awsvpcConfiguration={subnets=[subnet-0b29129ab],securityGroups=[sg-0b29129ab]}"

 

- changes the security groups of the running rest-api task

- creates a cluster called production and launches two containers into Fargate with the rest-api task definition

- launches two containers onto Fargate into the existing production cluster using the rest-api task definition

- creates a service definition for the rest-api task; puts two containers on the production cluster when the ecs-cli up command is launched

 

ChatGPT explains further ..

The provided AWS CLI command creates a service in Amazon Elastic Container Service (ECS) with specific settings. Let's break down the command:

 

Here's what this command does:

 

- --cluster production: Specifies the ECS cluster in which you want to create the service, and it is named "production."

- --service-name rest-api: Sets the name of the ECS service to "rest-api."

- --task-definition rest-api:1: References the task definition to use for this service. It uses the "rest-api" task definition with revision 1.

- --desired-count 2: Specifies that you want to run 2 instances (containers) of your task.

- --launch-type "FARGATE": Specifies that you want to launch your containers using the AWS Fargate launch type. This means you don't need to manage the underlying infrastructure; AWS handles it for you.

- --network-configuration: Specifies the network configuration for the task. It uses Amazon VPC (Virtual Private Cloud) settings:

  - awsvpcConfiguration: Indicates that you are using the AWS VPC for networking.

  - subnets: Specifies the subnets where the Fargate tasks should run.

  - securityGroups: Specifies the security groups to associate with the Fargate tasks.

 

So, the correct interpretation of the command is: "launches two containers onto Fargate into the existing 'production' cluster using the 'rest-api' task definition, specifying the desired count, and configuring the VPC network settings."
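After running the command, the service's state can be checked against the desired count; a small sketch, assuming the same cluster and service names:

```shell
# Sketch: confirm the service reached its desired count of 2 tasks.
aws ecs describe-services \
    --cluster production \
    --services rest-api \
    --query 'services[0].{status: status, desired: desiredCount, running: runningCount}'
```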


 

Q74. You want to make your public API quickly accessible from all regions.

What is the best way to do this?

 

- Create a single API gateway endpoint in a central region.

- Create a private API gateway endpoint for each region.

- Create a regional API gateway endpoint for each region.

- Create edge-optimized API gateway endpoints and deploy them to a CloudFront network.

 

ChatGPT explains further ..

The best way to make your public API quickly accessible from all regions is to create edge-optimized API gateway endpoints and deploy them to a CloudFront network.


Here's why:

- Edge-optimized API Gateway: When you create an edge-optimized API Gateway, it is automatically distributed to AWS CloudFront edge locations. Edge-optimized endpoints are designed to accelerate content delivery and reduce latency by routing requests to the nearest CloudFront edge location. This means that your API will be readily available and responsive to users in multiple regions.

- CloudFront: AWS CloudFront is a Content Delivery Network (CDN) service with a global network of edge locations. When you deploy your API with an edge-optimized API Gateway, it leverages the CloudFront network to distribute your API's content. This results in faster response times and lower latency for users in various regions.

By choosing edge-optimized API gateway endpoints and deploying them through CloudFront, you ensure that your public API is quickly accessible from all regions, providing a responsive and efficient user experience. This is particularly useful when you have a global user base and want to reduce the time it takes for users from different regions to access your API.
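For illustration, the endpoint type is selected when the API is created; a minimal sketch with a hypothetical API name:

```shell
# Sketch: create a REST API whose endpoint is edge-optimized,
# i.e., served through the CloudFront edge network.
# "my-public-api" is a hypothetical name.
aws apigateway create-rest-api \
    --name my-public-api \
    --endpoint-configuration types=EDGE
```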

 

Q75. What type of data solution should you use for data coming from nonrelational and relational data from IoT devices, websites, mobile apps, etc.?

 

- Amazon DynamoDB

- AWS Lake Formation

- Amazon Redshift

- Amazon Aurora

 

ChatGPT also adds ..

The type of data solution you should use for data coming from non-relational and relational data from IoT devices, websites, mobile apps, etc., can vary depending on your specific requirements and use case. Here are some considerations for each of the options mentioned:

 

1. Amazon DynamoDB:

Amazon DynamoDB is a NoSQL database service that is suitable for handling non-relational data, such as data from IoT devices and mobile apps.

It provides high scalability, low-latency access, and automatic sharding of data for fast and efficient storage and retrieval.

It is a good choice when you need a highly available and fully managed NoSQL database for flexible data storage.

 

2. AWS Lake Formation:

AWS Lake Formation is a service designed for building and managing data lakes, which are repositories that can store vast amounts of structured and unstructured data, including data from IoT devices, websites, and mobile apps.

It is not a database but a tool for organizing and preparing data for analytics.

 

3. Amazon Redshift:

Amazon Redshift is a fully managed data warehousing service designed for running complex analytical queries on large datasets.

It's more suitable for storing and analyzing structured data and is a good choice when you need to perform data analytics, reporting, and business intelligence on your data.

 

4. Amazon Aurora:

Amazon Aurora is a relational database service and is best suited for applications that require a traditional relational database management system (RDBMS).

If you have structured data that requires relational capabilities, such as ACID compliance, then Amazon Aurora could be a suitable choice.

Ultimately, the choice of data solution depends on your specific data storage and analysis needs. For a mix of non-relational and relational data from various sources like IoT devices, websites, and mobile apps, you may even choose to use a combination of these services. For example, you could use DynamoDB for non-relational data and Redshift for data analytics and reporting. It's important to carefully evaluate your use case and requirements to determine the best approach.

 

 

Q76. You have an analytics suite that produces reports about the usage patterns of your web application. After completing your migration to AWS and using an Application Load Balancer to balance the load across your web application, your marketing department noticed that location-based reports on the web traffic only show traffic originating from a single location. What is the problem?

 

- Use a Classic Load Balancer, not an Application Load Balancer.

- Application Load Balancer does not preserve the original source IP address. The analytics software needs to be configured to look at the ‘X-Forwarded-For’ HTTP request header for the correct source IP address.

- Application Load Balancer has to be configured to retain the source IP address of the traffic it is forwarding. Create a policy that enables ProxyProtocol support and attach it to the ALB using the AWS CLI.

- Configure the web server EC2 instances to only have private IP addresses. The public IP addresses of the instances are being recorded into the web server logs, but only the ALB should have a public interface, and it will route traffic to instances via the private interface.

 

ChatGPT also adds..

The problem you're experiencing with location-based reports showing traffic originating from a single location when using an Application Load Balancer (ALB) is likely related to how the ALB handles the source IP address. The correct solution is:

 

Application Load Balancer does not preserve the original source IP address. The analytics software needs to be configured to look at the ‘X-Forwarded-For’ HTTP request header for the correct source IP address.

 

Explanation:

- Application Load Balancers, by default, use the private IP address of the load balancer as the source IP for forwarding traffic to your web application instances. This can cause issues with location-based reporting, as it doesn't accurately represent the true source of the traffic.

- The 'X-Forwarded-For' HTTP request header contains the original source IP address of the client before it passed through the load balancer. Your analytics software should be configured to extract the source IP address from this header to accurately determine the location of the client.

- It's a best practice to configure your web application and analytics software to use the 'X-Forwarded-For' header when dealing with requests through a load balancer to ensure accurate reporting.

The other options mentioned don't address the root cause of the issue or involve unnecessary changes, such as switching to a Classic Load Balancer or configuring instances with only private IP addresses. The key is to correctly handle the 'X-Forwarded-For' header in your analytics software.

 

 

Q77. What is not a default user of a common Linux instance launched from an AMI?

 

- ubuntu

- system-user

- ec2-user

- admin

 

Ref. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connection-prereqs.html

 

 

Q78. You have replicated the infrastructure that serves the backend API for your web application across regions to better serve your customers in the US and the EU. What is the best way to direct your web application at the nearest data center?

 

- Use Route 53 with geolocation lookups to direct traffic between the two regions.

- Create a WAF redirection rule that redirects traffic at the EU data center if the source IP comes from certain countries.

- Purchase a country domain extension and direct your users to the correct site, such as example.com and example.co

- Have your front-end application test the latency between each data center and use the data center that is responding the fastest.

 

Ref. https://aws.amazon.com/premiumsupport/knowledge-center/geolocation-routing-policy/
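A geolocation record for the Route 53 option could be sketched as follows; the hosted zone ID, record name, and IP address are hypothetical:

```shell
# Sketch: route EU visitors to the EU endpoint via a geolocation
# record set. A matching record for the US (or a default "*"
# location) would cover the other region.
aws route53 change-resource-record-sets \
    --hosted-zone-id Z1EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": "api.example.com",
          "Type": "A",
          "SetIdentifier": "eu-users",
          "GeoLocation": {"ContinentCode": "EU"},
          "TTL": 60,
          "ResourceRecords": [{"Value": "198.51.100.10"}]
        }
      }]
    }'
```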

 

 

Q79. You have recently launched your new web product and are expecting 1,000 new users each month. However, you have just received word from the CEO that your product will be featured at an upcoming conference covered by several media outlets, and this could lead to 20,000 new users over the next week. How do you plan for a sudden increase in traffic?

 

- Replicate your infrastructure across two regions. You will harden the application to a regional failure and you will double your capacity.

- Take an AMI image of a front-end server to save your configuration and then add more servers to your cluster prior to the conference. Remove the servers from the cluster after the spike from the conference.

- Test to determine your throughput and how many users you can support. Develop a scaling plan for your front end, microservices, and database based on CloudWatch metrics that align with the tested bottlenecks.

- Use Auto Scaling groups to create more front-end servers when the CloudWatch metrics for CPU usage on a single instance rise above 80% for five minutes.

 

Ref. https://aws.amazon.com/ec2/autoscaling/
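The Auto Scaling approach in the last option can be sketched with a target tracking policy, which is generally simpler to operate than a hand-built CPU alarm; the group and policy names are hypothetical:

```shell
# Sketch: keep the group's average CPU near 50% by adding or
# removing instances automatically.
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name web-asg \
    --policy-name cpu-target-tracking \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{
      "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
      "TargetValue": 50.0
    }'
```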


 

Source: past exam questions

https://www.chase2learn.com/amazon-web-services-linkedin-assessment/