Hello.
I think option A will have the least operational burden because it eliminates the need to manage the OS.
With API Gateway, even if the request volume grows very large, throttling is easy to adjust on the API Gateway side, so I think it would suit your use case:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
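For example, stage-level throttling can be raised with a single API call. A minimal boto3 sketch, where the REST API ID `abc123` and stage name `prod` are placeholders:

```python
import boto3

apigateway = boto3.client("apigateway")

# Raise the default throttling limits for every method on the stage.
# "abc123" and "prod" are placeholders for your API ID and stage name.
apigateway.update_stage(
    restApiId="abc123",
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "1000"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "2000"},
    ],
)
```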
The correct answer is D. Create an Application Load Balancer (ALB) in front of the API. Move the EC2 instances to private subnets in the VPC. Add the EC2 instances as targets for the ALB. Update the Route 53 record to point to the ALB.
This solution meets the requirements with the least operational overhead because it relies on managed services: the Application Load Balancer (ALB) distributes traffic across the EC2 instances hosting the API, and Route 53 handles DNS. Additionally, moving the EC2 instances to private subnets enhances security by isolating them from direct internet access.
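For reference, here is a rough boto3 sketch of the moving parts in option D. All of the subnet, security group, VPC, instance, and hosted zone identifiers below are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")
route53 = boto3.client("route53")

# Create an internet-facing ALB in the public subnets (placeholder IDs).
alb = elbv2.create_load_balancer(
    Name="api-alb",
    Subnets=["subnet-pub-a", "subnet-pub-b"],
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]

# Target group for the API instances, with a health check path.
tg = elbv2.create_target_group(
    Name="api-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckPath="/health",
)["TargetGroups"][0]

# Register the EC2 instances that now live in private subnets.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-0aaa"}, {"Id": "i-0bbb"}],
)

# Forward incoming traffic to the target group.
elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)

# Point the Route 53 record at the ALB with an alias A record.
route53.change_resource_record_sets(
    HostedZoneId="Z_PLACEHOLDER",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": alb["CanonicalHostedZoneId"],
                    "DNSName": alb["DNSName"],
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    },
)
```

After this one-time setup, health checks, failover, and traffic distribution are handled by the ALB with no custom code to maintain.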
Here's why the other options are not optimal:
A. Separate the API into individual AWS Lambda functions. Configure an Amazon API Gateway REST API with Lambda integration for the backend. Update the Route 53 record to point to the API Gateway API.
While this solution can help handle varying loads by leveraging the scalability of AWS Lambda, it requires significant refactoring of the existing monolithic API into individual Lambda functions. This can introduce operational overhead and complexity, especially if the API is not designed for serverless architectures.
B. Containerize the API logic. Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Run the containers in the cluster by using Amazon EC2. Create a Kubernetes ingress. Update the Route 53 record to point to the Kubernetes ingress.
While containerizing the API and using Amazon EKS can provide scalability and load balancing, it introduces additional operational overhead in managing and maintaining the Kubernetes cluster, and running the containers on EC2 means you still patch and manage the worker nodes yourself. This solution may be more complex than necessary for the given requirements.
C. Create an Auto Scaling group. Place all the EC2 instances in the Auto Scaling group. Configure the Auto Scaling group to perform scaling actions that are based on CPU utilization. Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record.
This solution can help handle varying loads by automatically scaling the EC2 instances based on CPU utilization. However, it requires additional operational overhead in managing the Auto Scaling group and the Lambda function for updating Route 53 records. The Application Load Balancer solution (option D) provides a more streamlined and managed approach to load balancing and scaling.
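To make that overhead concrete, here is a rough sketch of the kind of Lambda function option C would require, assuming it is triggered by EC2 Auto Scaling launch/terminate events via EventBridge. The group name, hosted zone ID, and record name are placeholders:

```python
import boto3

ASG_NAME = "api-asg"              # placeholder Auto Scaling group name
HOSTED_ZONE_ID = "Z_PLACEHOLDER"  # placeholder hosted zone
RECORD_NAME = "api.example.com"   # placeholder record name

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")
route53 = boto3.client("route53")

def handler(event, context):
    # Look up the instances currently in service in the group.
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[ASG_NAME]
    )["AutoScalingGroups"][0]
    instance_ids = [
        i["InstanceId"]
        for i in group["Instances"]
        if i["LifecycleState"] == "InService"
    ]
    if not instance_ids:
        return

    # Collect the public IPs of those instances.
    ips = []
    reservations = ec2.describe_instances(InstanceIds=instance_ids)["Reservations"]
    for reservation in reservations:
        for instance in reservation["Instances"]:
            if "PublicIpAddress" in instance:
                ips.append(instance["PublicIpAddress"])

    # Rewrite the multi-value A record to match the current fleet.
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip} for ip in ips],
                },
            }]
        },
    )
```

This is custom glue code you would have to write, test, monitor, and maintain yourself, and clients would still be subject to DNS caching between updates. That is exactly the operational overhead the ALB in option D avoids.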
Wouldn't putting the instances in an Auto Scaling group be easier than separating the API into multiple Lambda functions?
I think you are conflating operational burden with migration burden. What you are describing is the hurdle of migrating. Operational burden refers to the ongoing tasks required to run an application, such as OS updates, monitoring, and security measures. If the question were about migration effort, then yes, Auto Scaling would be easier.