Oct 17, 2023

Skill Assessment - AWS Q50

Q50. Your web application is getting a suspicious number of bad requests from foreign IP addresses.

Your business is operating in only a few countries and you would like to block any other traffic. What is the best practice for limiting access to your web application by country?

 

- Use Web Application Firewall and create a geo match condition to drop all requests from countries that aren’t on your allow list.

- Use Application Load Balancer to create a new routing rule that looks at source IP address. Add an IP block for the countries that have access.

- Host the front end of your website in CloudFront and configure a geo restriction on the distribution.

- Use CloudTrail to monitor the IP addresses of the bad requests. Use Lambda to add these IP addresses to an Application Load Balancer rule that blocks the IPs.
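
For reference, a minimal boto3 sketch of the WAF geo-match approach. The ACL name, scope, and the US/CA allow list are assumptions for illustration, not part of the question:

```python
# Sketch: block every country NOT on an allow list with AWS WAFv2.
# ACL name and country codes are assumed values.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="geo-allowlist-acl",                  # assumed name
    Scope="REGIONAL",                          # use "CLOUDFRONT" for a CloudFront distribution
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "block-countries-not-on-allowlist",
        "Priority": 0,
        "Statement": {
            "NotStatement": {                  # match anything NOT in the allow list...
                "Statement": {
                    "GeoMatchStatement": {"CountryCodes": ["US", "CA"]}  # assumed allow list
                }
            }
        },
        "Action": {"Block": {}},               # ...and block it
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "GeoBlock",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "geo-allowlist-acl",
    },
)
```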

 

 

Q51. What is the best practice for maintaining Windows EC2 instances and applying updates?

 

- Turn on auto update in Windows Update on each EC2 instance that is launched, or create your own AMI with this feature enabled and launch all of your EC2 instances from this AMI.

- Create a maintenance schedule that an employee must fill out each week confirming a visual inspection of each instance was conducted and which patches were applied.

- Use AWS Systems Manager Patch Manager to find and patch instances that require updates during a set maintenance window.

- Install Windows Server Update Services on your primary Active Directory controller.
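
A minimal boto3 sketch of the Patch Manager answer, assuming a weekly window and tag-based targeting (the window name, schedule, and tag are illustrative assumptions):

```python
# Sketch: patch instances during a Systems Manager maintenance window.
import boto3

ssm = boto3.client("ssm")

# Create a weekly maintenance window (every Sunday 02:00 UTC, 3 hours long).
window = ssm.create_maintenance_window(
    Name="windows-patching",
    Schedule="cron(0 2 ? * SUN *)",
    Duration=3,
    Cutoff=1,
    AllowUnassociatedTargets=False,
)

# Target instances by tag (assumes instances are tagged Patch=true).
target = ssm.register_target_with_maintenance_window(
    WindowId=window["WindowId"],
    ResourceType="INSTANCE",
    Targets=[{"Key": "tag:Patch", "Values": ["true"]}],
)

# Run the AWS-RunPatchBaseline document to scan for and install updates.
ssm.register_task_with_maintenance_window(
    WindowId=window["WindowId"],
    Targets=[{"Key": "WindowTargetIds", "Values": [target["WindowTargetId"]]}],
    TaskArn="AWS-RunPatchBaseline",
    TaskType="RUN_COMMAND",
    MaxConcurrency="10%",
    MaxErrors="5%",
    TaskInvocationParameters={
        "RunCommand": {"Parameters": {"Operation": ["Install"]}}
    },
)
```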

 

 

Q52. In addition to CloudFormation, you can use other orchestration tools to automate server formation and maintenance.

Which tool is not an efficient choice for the orchestration of a large infrastructure?

 

- Chef

- Ansible

- Puppet

- Vagrant

 

ChatGPT also says …

Vagrant is not an efficient choice for the orchestration of a large infrastructure. Vagrant is primarily a tool for managing development environments, making it more suitable for setting up and managing local virtualized development environments on a developer's machine. While it can be used to provision and manage VMs in a limited fashion, it's not designed for large-scale infrastructure orchestration or automation.

 

In contrast, Chef, Ansible, and Puppet are configuration management and automation tools that are better suited for managing and orchestrating larger and more complex infrastructures, including server provisioning, configuration, and maintenance. These tools are designed for automating tasks across multiple servers and are commonly used in production environments for infrastructure automation.

 

 

Q53. What happens to a SQL Server RDS instance if the databases increase in size and go over the allocated space?

 

- RDS will automatically increase the allocated space by 10% and will send the AWS root account an email with resolution steps. Allocate more space to avoid overage charges.

- The database instance will report a STORAGE_FULL status and become inaccessible if the instance does not have enough remaining storage to operate. Allocate more space to the instance.

- SQL Server will close all existing connections to the databases and attempt to shrink its log files to reclaim storage space.

- RDS will automatically increase the allocated space by 5% and will continue to allocate new space up to 50% of the original allocated space. When storage space has increased by 50%, RDS will automatically stop the instance to preserve data integrity.
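
For the correct option (allocate more space), a minimal boto3 sketch; the instance identifier and sizes are assumptions. Setting MaxAllocatedStorage additionally turns on storage autoscaling, so the instance can grow before it ever reports STORAGE_FULL:

```python
# Sketch: grow the allocated storage of an RDS SQL Server instance.
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="my-sqlserver-db",   # assumed instance name
    AllocatedStorage=200,                     # new size in GiB; must exceed the current size
    MaxAllocatedStorage=500,                  # enables storage autoscaling up to this ceiling
    ApplyImmediately=True,
)
```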

 

 

Q54. You have a fleet of IoT devices

that send telemetry to a server-side application provided by your IoT vendor for decoding a proprietary messaging format. The devices are provisioned to send telemetry reports to your server via UDP on port 6339. What is the best way to scale this server as more IoT devices are added to your fleet?

 

- Use a Network Load Balancer to distribute the traffic across your servers. Use UDP health checks to determine if the server is available to receive traffic.

- Use Route 53 with HTTP health checks. Create an application on the server to report the readiness status of the vendor-provided server software to Route 53 via HTTP.

- Use Route 53 with UDP health checks. As you scale up, Route 53 will route the traffic to the new servers if they pass the health checks.

- Use Application Load Balancer to distribute the traffic across your servers.

 

ChatGPT answered differently …

The best way to scale the server as more IoT devices are added to your fleet and they send telemetry via UDP on port 6339 is to use:

 

Option A: Use a Network Load Balancer to distribute the traffic across your servers. Use UDP health checks to determine if the server is available to receive traffic.

 

A Network Load Balancer (NLB) is designed for handling TCP/UDP traffic and is a suitable choice for load balancing UDP traffic like telemetry reports. You can configure UDP health checks to determine the availability of the servers, and NLB will distribute the incoming traffic to healthy servers. This approach ensures high availability and scalability for your server infrastructure as the number of IoT devices increases.
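
A minimal boto3 sketch of this NLB setup (the subnet, VPC, and name values are placeholders). One nuance worth noting: even for UDP target groups, the health check itself runs over TCP or HTTP(S), so the instances need a TCP endpoint to answer it:

```python
# Sketch: front the telemetry servers with an NLB listening on UDP 6339.
import boto3

elbv2 = boto3.client("elbv2")

nlb = elbv2.create_load_balancer(
    Name="iot-telemetry-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholder subnets
)

tg = elbv2.create_target_group(
    Name="telemetry-udp-6339",
    Protocol="UDP",
    Port=6339,
    VpcId="vpc-12345678",                             # placeholder VPC
    TargetType="instance",
    HealthCheckProtocol="TCP",   # UDP target groups are health-checked over TCP/HTTP(S)
    HealthCheckPort="80",        # assumes the instances expose a TCP health endpoint
)

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="UDP",
    Port=6339,
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)
```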

 

Option B and Option C mention using Route 53, but Route 53 is typically used for DNS and routing DNS requests rather than load balancing UDP traffic.

 

Option D mentions using an Application Load Balancer, which is designed for HTTP/HTTPS traffic and is not the ideal choice for UDP-based telemetry.

 

I asked ChatGPT a follow-up →

In the previous question, if you focus on option B stating "HTTP health checks", option B seems great, doesn't it?

 

ChatGPT:

Option B does mention using Route 53 with HTTP health checks, which can be an effective solution for managing DNS-based routing, but it's important to note that the initial question specified that telemetry data from IoT devices is sent via UDP on port 6339.

 

HTTP health checks are typically used to check the health and availability of web services over HTTP or HTTPS, not for assessing the availability of services or servers handling UDP traffic. Using HTTP health checks for UDP-based telemetry might not be suitable because HTTP health checks won't directly monitor or assess the health of services listening on a non-HTTP/HTTPS UDP port, such as port 6339 in this case.

 

For UDP-based telemetry, the most appropriate solution remains Option A: Use a Network Load Balancer with UDP health checks because it is designed to handle UDP traffic and can effectively distribute the incoming traffic to healthy servers while using UDP health checks to determine server availability for UDP-based services.

 

※ This time, ChatGPT's reading of the question and its answer seem correct.

Per Ref1 below, "Route 53 health checks are used in weighted round robin and DNS failover to decide which IP addresses to include in the response."

Also, Route 53 does not appear to support health checks over UDP (at least, nothing explicitly states they can be configured).

Ref1. https://dev.classmethod.jp/articles/health-check-spec-elb-route53/

 

 

Q56. An EC2 instance running a WordPress site keeps getting hacked,

even though you have restored the server several times and have patched WordPress.

What AWS service can help you detect and prevent further attacks?

 

- CloudWatch

- GuardDuty

- Shield

- Security Advisor

 

ChatGPT also says …

GuardDuty is a managed threat detection service that continuously monitors for malicious activity and unauthorized behavior within your AWS environment.

 

GuardDuty can detect various types of attacks and threats, including those targeting your EC2 instances. It uses machine learning and threat intelligence to identify potentially malicious activities and provides alerts to help you respond to security threats.
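
A minimal boto3 sketch of enabling GuardDuty and reading its findings (no custom names needed; GuardDuty allows one detector per account per region):

```python
# Sketch: enable GuardDuty and print any findings it has produced.
import boto3

guardduty = boto3.client("guardduty")

# Turn the detector on (fails if one already exists in this account/region).
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

# List finding IDs, then fetch their details.
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
if finding_ids:
    findings = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)
    for f in findings["Findings"]:
        print(f["Type"], f["Severity"], f["Title"])
```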

 

While AWS WAF (Web Application Firewall) and AWS Shield can help protect against certain types of attacks, GuardDuty is more focused on overall threat detection and monitoring, making it a valuable service for identifying and preventing further attacks on your EC2 instance running a WordPress site.

 

 

Q57. A nontechnical client wants to migrate a WordPress site to AWS from a private server managed by a third-party hosting company.

Which AWS service should you recommend to migrate the site to?

 

- CloudFront

- An EC2 instance launched from the official WordPress AMI

- S3

- Lightsail

 

ChatGPT also says …

For a non-technical client looking to migrate a WordPress site to AWS from a private server managed by a third-party hosting company, the most user-friendly and straightforward option is to recommend AWS Lightsail.

 

AWS Lightsail is a simplified compute service that provides pre-configured instances with a user-friendly management console. It's designed for users who want to quickly and easily set up web applications, websites, and blogs without needing in-depth technical knowledge. Lightsail offers pre-configured WordPress instances and a one-click migration option for WordPress sites, making it an ideal choice for non-technical users looking for a hassle-free migration experience.
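
A minimal boto3 sketch of launching the Lightsail WordPress blueprint (the instance name, Availability Zone, and bundle size are assumptions; bundle IDs vary by generation):

```python
# Sketch: launch a preconfigured WordPress instance on Lightsail.
import boto3

lightsail = boto3.client("lightsail")

lightsail.create_instances(
    instanceNames=["client-wordpress"],   # assumed name
    availabilityZone="us-east-1a",        # assumed zone
    blueprintId="wordpress",              # Bitnami WordPress blueprint
    bundleId="small_2_0",                 # assumed instance size/plan
)
```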

 

While other AWS services like EC2 instances or S3 could be used for WordPress hosting, they require more technical expertise and may involve more configuration and management. CloudFront, on the other hand, is a content delivery network (CDN) service and not typically used for directly hosting WordPress sites. Lightsail is the most user-friendly option in this context.

 

 

Q58. Your company has on-premises servers with an existing onsite backup solution

that also replicates backups to another campus on the other side of the country with its own on-site backup solution. You have been asked to create a third level of redundancy by also storing these backups in the cloud. In the event of a primary and secondary backup failure, your boss wants to know that the cloud backups can be accessed as quickly as possible to reduce downtime during the recovery. What S3 storage class do you recommend for cost and performance?

 

- S3 Standard

- S3 Intelligent-Tiering

- S3 Glacier

- S3 One Zone-Infrequent Access

 

ChatGPT answered quite differently …

For achieving fast accessibility and minimizing downtime during recovery, you should consider using the S3 Standard storage class for your cloud backups.

 

- S3 Standard is designed for high-performance and low-latency access to your data. It provides the best performance and immediate access to your data when needed. While it may be a bit more expensive compared to some other storage classes, it offers the lowest retrieval times and is the ideal choice when you need your data to be readily accessible for quick recovery.

- S3 Intelligent-Tiering, S3 Glacier, and S3 One Zone-Infrequent Access are cost-effective storage classes, but they may have longer retrieval times, which can introduce delays in the event of a primary and secondary backup failure. They are better suited for data that is not needed immediately and can tolerate longer retrieval times.
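
A minimal boto3 sketch of the S3 Standard recommendation (the bucket, key, and file paths are assumptions):

```python
# Sketch: push a backup archive to S3 Standard so a restore never waits on retrieval.
import boto3

s3 = boto3.client("s3")

s3.upload_file(
    Filename="/backups/nightly-2023-10-17.tar.gz",   # assumed local backup file
    Bucket="example-third-copy-backups",             # assumed bucket
    Key="nightly/2023-10-17.tar.gz",
    ExtraArgs={"StorageClass": "STANDARD"},          # immediate, millisecond access
)
```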

 

※ Since this differs from the answer key for the past question, I asked ChatGPT again:

In the previous question, if you have only these 3 options, which do you recommend for cost and performance?

Is there an obvious difference among them?

 

- S3 Intelligent-Tiering

- S3 Glacier

- S3 One Zone-Infrequent Access

 

ChatGPT:

Among the three options you've provided (S3 Intelligent-Tiering, S3 Glacier, and S3 One Zone-Infrequent Access), the most suitable choice for a balance between cost and performance while still considering data access speed would be S3 Intelligent-Tiering.

 

Here's why:

1. S3 Intelligent-Tiering automatically moves objects between two access tiers: one for frequent access and another for infrequent access. This means that it optimizes costs by storing data in the most cost-effective storage class based on your usage patterns. It's designed to provide low-latency access when data is needed, which aligns with your requirement for fast accessibility in the event of a primary and secondary backup failure.

2. S3 Glacier and S3 One Zone-Infrequent Access are not as suitable in this context because they are intended for longer-term data archiving and cost optimization at the expense of retrieval times. While they offer significant cost savings, their retrieval times are longer and less predictable, which may not align with your goal of minimizing downtime during recovery.

So, if you want to balance cost and performance while ensuring that data is accessible as fast as possible in case of a backup failure, S3 Intelligent-Tiering would be the better choice among these three options. It's worth noting that the exact cost differences between these options can depend on various factors, including your data access patterns and the specific usage volumes, so you should consider your usage in the decision-making process.

 

※ It is unclear which option is the intended best answer, because the question is ambiguous (it never specifies how the cost/performance balance should be weighed). As a free-response question, any logically argued answer would presumably be accepted.

 

Q59. Which big data store will let you store large streams of user activity data coming from both web and mobile applications?

 

- Neptune

- Aurora

- RDS for SQL Server

- Redshift

 

ChatGPT also says …

Amazon Redshift is a fully managed, petabyte-scale data warehouse service that is designed for high-performance analysis and querying of large datasets. It's well-suited for storing and analyzing large volumes of data, including user activity data from various sources, and it can handle data streaming and real-time analytics use cases effectively. Redshift is optimized for analytical workloads and supports various data formats and data ingestion methods, making it a good choice for big data analytics.
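
A hedged sketch of one common ingestion path ChatGPT alludes to: batched activity events land in S3 and are loaded with a COPY statement through the Redshift Data API. The cluster, database, user, table, bucket, and role names are all assumptions:

```python
# Sketch: load batched user-activity events from S3 into Redshift via COPY.
import boto3

redshift_data = boto3.client("redshift-data")

redshift_data.execute_statement(
    ClusterIdentifier="activity-warehouse",    # assumed cluster
    Database="analytics",                      # assumed database
    DbUser="loader",                           # assumed database user
    Sql="""
        COPY user_activity
        FROM 's3://example-activity-stream/batch/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
        FORMAT AS JSON 'auto';
    """,
)
```

In practice a streaming layer such as Kinesis Data Firehose usually does the batching into S3 before a COPY like this runs.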

 

Neptune, Aurora, and RDS for SQL Server, on the other hand, are not typically used for storing and analyzing large streams of user activity data. Neptune is a graph database, Aurora is a high-performance relational database, and RDS for SQL Server is a managed SQL Server database service, all of which have different use cases compared to data warehousing and big data analytics with Redshift.

 

 

Source (past exam questions):

https://www.chase2learn.com/amazon-web-services-linkedin-assessment/

