Least operational overhead to handle monolithic app

A company is hosting a monolithic REST-based API for a mobile app on five Amazon EC2 instances in public subnets of a VPC. Mobile clients connect to the API by using a domain name that is hosted on Amazon Route 53. The company has created a Route 53 multivalue answer routing policy with the IP addresses of all the EC2 instances. Recently, the app has been overwhelmed by large and sudden increases to traffic. The app has not been able to keep up with the traffic. A solutions architect needs to implement a solution so that the app can handle the new and varying load. Which solution will meet these requirements with the LEAST operational overhead?

A. Separate the API into individual AWS Lambda functions. Configure an Amazon API Gateway REST API with Lambda integration for the backend. Update the Route 53 record to point to the API Gateway API.

B. Containerize the API logic. Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Run the containers in the cluster by using Amazon EC2. Create a Kubernetes ingress. Update the Route 53 record to point to the Kubernetes ingress.

C. Create an Auto Scaling group. Place all the EC2 instances in the Auto Scaling group. Configure the Auto Scaling group to perform scaling actions that are based on CPU utilization. Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record.

D. Create an Application Load Balancer (ALB) in front of the API. Move the EC2 instances to private subnets in the VPC. Add the EC2 instances as targets for the ALB. Update the Route 53 record to point to the ALB.

2 Answers

Hello.

I think option A has the least operational burden because it eliminates the need to manage the operating system entirely.
With API Gateway, even if the request volume grows very large, throttling limits are easy to adjust on the API Gateway side, so it suits this use case well:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
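For illustration, a minimal sketch of what one extracted endpoint could look like behind an API Gateway REST API with Lambda proxy integration (the route and payload shapes here are hypothetical, not from the question):

```python
import json

def lambda_handler(event, context):
    """Minimal handler for API Gateway Lambda proxy integration.

    API Gateway passes the HTTP request as `event` and turns the
    returned dict into the HTTP response. The /health route is an
    illustrative example, not part of the original API.
    """
    path = event.get("path", "/")
    if path == "/health":
        body = {"status": "ok"}
    else:
        body = {"message": "handled " + path}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

Scaling and throttling then happen in API Gateway and Lambda, with no instances or OS to manage, which is what the "least operational overhead" argument for A rests on.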

  • Wouldn't putting the instances in an Auto Scaling group be easier than separating the API into multiple Lambda functions?

  • I think you are conflating operational burden with migration burden. What you are describing is the hurdle of migrating. Operational burden refers to the ongoing tasks of managing an application, such as OS updates, monitoring, and security measures. If the question were about migration effort, Auto Scaling would indeed be easier.


The correct answer is D. Create an Application Load Balancer (ALB) in front of the API. Move the EC2 instances to private subnets in the VPC. Add the EC2 instances as targets for the ALB. Update the Route 53 record to point to the ALB.

This solution meets the requirements with the least operational overhead because it leverages AWS managed services like the Application Load Balancer (ALB) and Route 53 to distribute traffic across the EC2 instances hosting the API. Additionally, moving the EC2 instances to private subnets enhances security by isolating them from direct access from the internet.
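On the Route 53 side, option D reduces DNS to a single alias record pointing at the ALB. As a sketch (the domain, ALB DNS name, and hosted zone ID below are placeholders), the change batch passed to `route53.change_resource_record_sets` would look roughly like this:

```python
def build_alb_alias_change(domain, alb_dns_name, alb_hosted_zone_id):
    """Build a Route 53 ChangeBatch that aliases `domain` to an ALB.

    An alias record replaces the multivalue answer policy: Route 53
    resolves the ALB's own (changing) IP addresses, so no per-instance
    records need to be maintained as the fleet scales.
    """
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "AliasTarget": {
                    "DNSName": alb_dns_name,
                    # The ALB's canonical hosted zone ID, not your own zone's.
                    "HostedZoneId": alb_hosted_zone_id,
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    }
```

With boto3, this dict would be applied via `change_resource_record_sets(HostedZoneId=..., ChangeBatch=...)`; after that, scaling events never touch DNS again.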

Here's why the other options are not optimal:

A. Separate the API into individual AWS Lambda functions. Configure an Amazon API Gateway REST API with Lambda integration for the backend. Update the Route 53 record to point to the API Gateway API.

While this solution can help handle varying loads by leveraging the scalability of AWS Lambda, it requires significant refactoring of the existing monolithic API into individual Lambda functions. This can introduce operational overhead and complexity, especially if the API is not designed for serverless architectures.

B. Containerize the API logic. Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Run the containers in the cluster by using Amazon EC2. Create a Kubernetes ingress. Update the Route 53 record to point to the Kubernetes ingress.

While containerizing the API and using Amazon EKS can provide scalability and load balancing, it introduces additional operational overhead in managing and maintaining the Kubernetes cluster. This solution may be more complex than necessary for the given requirements.

C. Create an Auto Scaling group. Place all the EC2 instances in the Auto Scaling group. Configure the Auto Scaling group to perform scaling actions that are based on CPU utilization. Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record.

This solution can help handle varying loads by automatically scaling the EC2 instances based on CPU utilization. However, it requires additional operational overhead in managing the Auto Scaling group and the Lambda function for updating Route 53 records. The Application Load Balancer solution (option D) provides a more streamlined and managed approach to load balancing and scaling.
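To make option C's extra moving part concrete: the Lambda function would have to rebuild the multivalue answer records on every scaling event. A rough sketch of just that bookkeeping (names and IPs are illustrative; a real function would also call the EC2 and Route 53 APIs):

```python
def build_multivalue_changes(domain, instance_ips):
    """Rebuild one multivalue-answer A record per instance IP.

    This is the per-event DNS bookkeeping that the ALB in option D
    makes unnecessary: every scale-out and scale-in must be mirrored
    into Route 53 by custom code.
    """
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": domain,
                    "Type": "A",
                    "SetIdentifier": "instance-{}".format(i),
                    "MultiValueAnswer": True,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip}],
                },
            }
            for i, ip in enumerate(instance_ips)
        ]
    }
```

Note that removed instances also need matching DELETE changes, and DNS TTLs delay clients seeing the update, which is part of why this approach carries more operational overhead than an ALB.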

