Nov 3, 2023

Skill Assessment - AWS Q80-Q89

Q80. How do you connect via SSH to a Linux EC2 instance with an EBS volume if you lost your key pair?

- Stop the instance and create an AMI image. Launch the image using a new key pair.

- Contact AWS support. A support specialist can remotely restore access to your instance and send you a new key pair.

- You cannot connect to this EC2 instance. The key pair is displayed only one time. If you lose it, you have lost all access to this instance. Connect the EBS volume to another instance to recover your files.

- Attach the EBS volume to a temporary instance launched with a new key pair, and overwrite ~/.ssh/authorized_keys using the same file from the new instance.

 

ChatGPT also adds ..

If you've lost the key pair to your Linux EC2 instance, you can follow these steps to regain access:

 

1. Stop the instance and create an AMI image: This is a good approach to preserve your instance's data. Here are the steps, followed by a CLI sketch:

a. Stop the EC2 instance from the AWS Management Console or using the aws ec2 stop-instances command if you're using the AWS CLI.
b. Once the instance is stopped, create an Amazon Machine Image (AMI) from the instance. This will essentially take a snapshot of your instance.
c. Launch a new instance from the AMI, and during the launch process, choose a new key pair.
d. Start the new instance, and you'll have access to it with the new key pair.
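
A minimal CLI sketch of that flow; the instance ID, AMI name, key pair, and instance type below are placeholders, not values from the question:

# Stop the locked-out instance
aws ec2 stop-instances --instance-ids i-0abc1234def567890

# Create an AMI from the stopped instance
aws ec2 create-image --instance-id i-0abc1234def567890 --name "recovery-ami"

# Launch a replacement instance from that AMI with a new key pair
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --key-name my-new-key --instance-type t3.micro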

2. Contact AWS support: This is an option, but it may not be the fastest solution. AWS support can help you regain access to your instance, but you might experience some delays.

3. You cannot connect to this EC2 instance: This statement is not entirely accurate. While it's true that you can't recover the lost key pair, you can still regain access to your instance using other methods, such as the one described in option 4:

4. Attach the EBS volume to a temporary instance: You can attach the EBS volume from your inaccessible EC2 instance to a new temporary instance launched with a new key pair. Here are the steps (a shell sketch follows them):

a. Launch a new temporary EC2 instance with a new key pair.

b. Attach the EBS volume from the inaccessible instance to the temporary instance.

c. Mount the EBS volume to the temporary instance.

d. Edit the ~/.ssh/authorized_keys file on the EBS volume to add the new public key from your temporary instance. You can use a text editor or command-line tools like sed or echo.

e. Safely unmount the EBS volume and detach it from the temporary instance.

f. Attach the EBS volume back to your original instance.

g. Start your original instance.
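
A minimal shell sketch of steps b through f; the volume ID, instance ID, device name, and home directory path are placeholders, and on newer (Nitro) instance types the attached volume appears as an NVMe device (for example /dev/nvme1n1p1) rather than /dev/xvdf1:

# Attach the volume from the locked-out instance to the temporary instance
aws ec2 attach-volume --volume-id vol-0abc1234def567890 \
    --instance-id i-0temp1234def56789 --device /dev/sdf

# On the temporary instance: mount the attached volume's root partition
sudo mkdir -p /mnt/rescue
sudo mount /dev/xvdf1 /mnt/rescue

# Overwrite authorized_keys with the temporary instance's copy (the new key)
sudo cp ~/.ssh/authorized_keys /mnt/rescue/home/ec2-user/.ssh/authorized_keys

# Unmount and detach so the volume can go back to the original instance
sudo umount /mnt/rescue
aws ec2 detach-volume --volume-id vol-0abc1234def567890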

 

You should now be able to SSH into your original instance using the new key pair that you added to the ~/.ssh/authorized_keys file. This method allows you to regain access to your instance without losing your data.

 

 

Q81. Your on-premise data center (172.16.128.0/24) is already connected to your AWS VPC (10.0.0.0/16) by a customer gateway.

You wish to connect another data center for a company you just acquired (172.16.130.0/24) to your VPC as shown in the image. What is the best way to create this link?

 

See the image at https://github.com/Ebazhanov/linkedin-skill-assessments-quizzes/blob/main/aws/images/Q80.png

 

- Establish a connection between your two data centers and connect the second data center to the first through a private tunnel. Traffic will flow from the second data center through the first data center, and then into AWS.

- Create a second customer gateway and configure your VPN client at your second data center to connect to the virtual private gateway.

- Create a second virtual private gateway (VPG) and attach it to the VPC. Create a customer gateway for the new virtual private gateway and use your VPN client at your second data center to establish a connection to the VPG.

- You cannot have more than one customer gateway per VPC, so the proposed solution will not work. Create a second VPC with a virtual private gateway and a customer gateway. Bridge the two VPCs using VPC peering.
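
If the second-customer-gateway option is the intended answer, the reasoning is that a VPC can have only one virtual private gateway attached, but that single gateway can terminate VPN connections from many customer gateways. A minimal CLI sketch, with a placeholder public IP, ASN, and gateway IDs:

# Register the acquired data center's router as a second customer gateway
aws ec2 create-customer-gateway --type ipsec.1 \
    --public-ip 203.0.113.10 --bgp-asn 65001

# Create a second VPN connection to the existing virtual private gateway
aws ec2 create-vpn-connection --type ipsec.1 \
    --customer-gateway-id cgw-0abc1234def567890 \
    --vpn-gateway-id vgw-0abc1234def567890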

 

 

Q82. You are migrating a 200 GB database from an on-premise SQL Server to RDS for SQL Server.

The database needs to have minimal downtime during the migration. What is the best practice for migrating this database?

 

- Close all existing connections to the SQL Server database and use Database Migration Service to transfer the data to RDS.

- Use Database Migration Service to replicate the database to RDS and keep it in sync during the migration. Repoint your applications to use the new RDS endpoint.

- Detach the SQL Server database during the migration. Take a backup of the database and use S3 Transfer Acceleration to upload the backups to S3. Restore the backups to the RDS instance.

- Use the Import and Export wizard within SQL Server Enterprise Manager to create an export task and export the tables to the RDS instance.
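
For the DMS option, the low-downtime part comes from running a full load followed by change data capture (CDC), which keeps the RDS copy in sync until you repoint the application. A minimal CLI sketch; the ARNs and table-mappings file are placeholders for resources you would create beforehand:

# Full load plus ongoing replication so the target stays in sync
aws dms create-replication-task \
    --replication-task-identifier sqlserver-to-rds \
    --source-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:SRC123 \
    --target-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:TGT123 \
    --replication-instance-arn arn:aws:dms:us-east-1:123456789012:rep:REP123 \
    --migration-type full-load-and-cdc \
    --table-mappings file://table-mappings.json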

 

 

Q83. You have enabled Multi-Factor Authentication (MFA) for your AWS root account and you lost your MFA device.

What do you need to do to recover access to your account?

 

- You cannot recover access to your AWS root account. Contact AWS support.

- An email will be sent to the email address on file to verify ownership of the account. You will then need to provide the phone number on the account.

- An email will be sent to the email address on file. After clicking the link in your email, provide one of the MFA recovery codes that were created when MFA was enabled.

- Use the AWS CLI with the root account access token to disable MFA on the root account. Then use the CLI to set a new password on the root account.

 

ChatGPT has a different opinion ..

If you've enabled Multi-Factor Authentication (MFA) for your AWS root account and you've lost your MFA device, you can recover access to your account by using MFA recovery codes, not the AWS CLI. Here's the correct process:

 

An email will be sent to the email address on file. After clicking the link in your email, provide one of the MFA recovery codes that were created when MFA was enabled.

 

When you initially set up MFA on your AWS root account, you should have received recovery codes. These recovery codes are meant to be used in situations where you've lost access to your MFA device. You can use one of these recovery codes to regain access to your AWS account.

 

Using the AWS CLI to disable MFA on the root account is not the recommended approach in this situation. Instead, follow the procedure provided by AWS, which typically involves using recovery codes or contacting AWS support for assistance.

 

Amazon also says on the referenced page ..

If your AWS account root user multi-factor authentication (MFA) device is lost, damaged, or not working, you can sign in using another MFA device registered to the same AWS account root user. If the root user only has one MFA device enabled, you can use alternative methods of authentication. This means that if you can't sign in with your MFA device, you can sign in by verifying your identity using the email and the primary contact phone number registered with your account.

 

Note: the explanation in Amazon's user guide is the more convincing one.

1. First, register multiple (backup) MFA devices on important accounts (the root user).

2. If none of the MFA devices can be used to sign in, then, as in the model answer, you can verify your identity and sign in using the email address and phone number registered with the account.

 

Ref. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_lost-or-broken.html

 

 

Q84. How do you assign an Elastic IP to several EC2 instances?

 

- In the VPC dashboard, click Elastic IPs. Select the Elastic IP and click Associate Address. Select each EC2 instance you wish to assign this address to.

- In the EC2 dashboard, click on the EC2 instance. Under Actions, select Networking > Manage IP Addresses. Click to add a new IP address and type in the address of the Elastic IP. Repeat the process for each EC2 instance you want to assign this Elastic IP to.

- Use the AWS CLI and pass in several '--instance-id' options to the aws ec2 associate-address command.

- An Elastic IP cannot be assigned to multiple EC2 instances. It can only be associated with a single EC2 instance.
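
An Elastic IP maps to a single instance (strictly, a single network interface) at a time, so the last option reflects the actual behavior. A minimal CLI sketch of a normal one-to-one association, with placeholder IDs:

# Allocate an Elastic IP, then associate it with exactly one instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0abc1234def567890 \
    --allocation-id eipalloc-0abc1234def567890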

 

 

Q85. You created a VPC that has a public subnet and a private subnet.

A web server was placed in the public subnet and a database server was placed in the private subnet. The web server is able to connect to the database server; however, the database server at 10.0.1.2 is unable to get software updates. What is the cause of this issue?

 

- There is no NAT gateway for the private subnet, so the database server has no routes that give it public internet access to download software updates.

- The database server needs to be assigned a public address from the pool, or assigned an Elastic IP similar to the instance 10.0.0.2.

- The router is not configured properly on the VPC. Add a route to the route table for the VPC that routes all traffic for 0.0.0.0/0 to the ID of the internet gateway.

- There is no egress-only internet gateway attached to the private subnet of the VPC.
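
Assuming the missing NAT gateway is the cause, the fix is to create a NAT gateway in the public subnet and add a default route to it in the private subnet's route table. A minimal CLI sketch with placeholder IDs:

# Create a NAT gateway in the public subnet (requires an Elastic IP allocation)
aws ec2 create-nat-gateway --subnet-id subnet-0public1234567890 \
    --allocation-id eipalloc-0abc1234def567890

# Send the private subnet's internet-bound traffic through the NAT gateway
aws ec2 create-route --route-table-id rtb-0private1234567890 \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0abc1234def567890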

 

 

Q86. What is the benefit of using S3 Glacier?

 

- access time

- store for small duration

- cost

- cross-region

 

 

Q87. After installing AWS Amplify's CLI, what command allows the user to connect an AWS account with the local install?

 

- amplify admin

- amplify configure

- amplify connect

- amplify init

 

ChatGPT also says ..

The amplify configure command initiates the configuration process, enabling the user to connect the AWS Amplify CLI with their AWS account by providing the necessary access credentials and setting up the local environment.
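
A quick sketch of how this looks in practice (the prompts themselves are interactive):

# Walks you through signing in, creating an IAM user, and saving its
# access keys to a local AWS profile
amplify configure

# Later, inside a project directory, initialize Amplify using that profile
amplify init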

 

 

Q88. How do you limit access to an S3 bucket by source IP address?

 

- Create the S3 bucket as a target for Application Load Balancer. Use Web Application Firewall (WAF) to create a rule to limit access to the S3 bucket by source IP.

- You cannot limit access to an S3 bucket by IP address.

- In the properties of the S3 bucket, add an access control option that limits access to the bucket by source IP address. Input the list of IPs into the dialog.

- Create a bucket policy with a condition that limits access to a list of approved IP addresses. Attach this policy to the bucket.

 

ChatGPT also says ..

The correct way to limit access to an S3 bucket by source IP address is by creating a bucket policy with a condition that restricts access to a specific list of approved IP addresses. This is achieved by defining a condition within the bucket policy that allows access only from certain IP addresses.

 

The correct answer is: Create a bucket policy with a condition that limits access to a list of approved IP addresses. Attach this policy to the bucket.

 

By creating a bucket policy, you can specify a condition using the "IpAddress" condition key, allowing or denying access based on the incoming request's source IP address. Here's an example of a bucket policy that restricts access to a specific IP range:

 

json

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucket-name/*",
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": [
                        "192.0.2.0/24",
                        "10.0.0.1/32"
                    ]
                }
            }
        }
    ]
}

This policy denies access to the S3 bucket for any requests that do not originate from the specified IP addresses (in this case, "192.0.2.0/24" and "10.0.0.1/32").

 

Remember to replace "bucket-name" with the name of your S3 bucket and modify the IP addresses or IP ranges according to your requirements.
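
To attach the policy (the second half of the correct answer) from the command line, you can use the s3api interface; the bucket name and policy file path are placeholders:

# Attach the IP-restriction policy to the bucket
aws s3api put-bucket-policy --bucket bucket-name --policy file://policy.json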

 

 

Q89. What are the benefits of using S3 Glacier for storage?

 

- granular fetching

- cross-region caching

- speed of access

- cost

