AWS Backup vs global tables/global database for cost-effective RTO/RPO requirements


A company implements a containerized application by using Amazon Elastic Container Service (Amazon ECS) and Amazon API Gateway. The application data is stored in Amazon Aurora databases and Amazon DynamoDB databases. The company automates infrastructure provisioning by using AWS CloudFormation. The company automates application deployment by using AWS CodePipeline.

A solutions architect needs to implement a disaster recovery (DR) strategy that meets an RPO of 2 hours and an RTO of 4 hours.

Which solution will meet these requirements MOST cost-effectively?

A. Set up an Aurora global database and DynamoDB global tables to replicate the databases to a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon CloudFront with origin failover to route traffic to the secondary Region during a DR scenario.

B. Use AWS Database Migration Service (AWS DMS), Amazon EventBridge, and AWS Lambda to replicate the Aurora databases to a secondary AWS Region. Use DynamoDB Streams, EventBridge, and Lambda to replicate the DynamoDB databases to the secondary Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.

C. Use AWS Backup to create backups of the Aurora databases and the DynamoDB databases in a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.

D. Set up an Aurora global database and DynamoDB global tables to replicate the databases to a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.

2 Answers
Accepted Answer

Option C: Use AWS Backup to create backups of the Aurora databases and the DynamoDB databases in a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.

Explanation:

Cost-Effectiveness: AWS Backup is generally more cost-effective than running continuous replication setups like Aurora Global Database or DynamoDB Global Tables. It allows you to create point-in-time backups and restore them in a secondary region, meeting the RPO requirement of 2 hours and RTO requirement of 4 hours.

RPO and RTO Requirements: Although this approach might take slightly longer to restore, it can still meet the required RPO of 2 hours and RTO of 4 hours if the backup and restore processes are well optimized. Regular backups can be scheduled within the 2-hour RPO window, and the infrastructure can be restored within the 4-hour RTO window.
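To make the scheduling concrete, here is a minimal sketch of what an AWS Backup plan matching the 2-hour RPO could look like: a rule that runs every 2 hours and copies each recovery point to a vault in the secondary Region. The plan name, vault names, account ID, and retention period are hypothetical placeholders, not values from the question.

```python
# Hypothetical AWS Backup plan definition: backups every 2 hours with a
# cross-Region copy action. Names, ARNs, and lifecycle values are assumptions.
backup_plan = {
    "BackupPlanName": "dr-2h-rpo-plan",                  # hypothetical name
    "Rules": [
        {
            "RuleName": "every-2-hours-cross-region-copy",
            "TargetBackupVaultName": "primary-vault",    # hypothetical vault
            # cron(minute hour day month weekday year): top of every 2nd hour
            "ScheduleExpression": "cron(0 */2 * * ? *)",
            "Lifecycle": {"DeleteAfterDays": 7},         # assumed retention
            "CopyActions": [
                {
                    # hypothetical vault ARN in the secondary Region
                    "DestinationBackupVaultArn": (
                        "arn:aws:backup:us-west-2:123456789012:"
                        "backup-vault:secondary-vault"
                    ),
                    "Lifecycle": {"DeleteAfterDays": 7},
                }
            ],
        }
    ],
}

# With boto3 this definition would be registered roughly as:
#   boto3.client("backup").create_backup_plan(BackupPlan=backup_plan)
print(backup_plan["Rules"][0]["ScheduleExpression"])
```

Since the company already provisions infrastructure with CloudFormation, the same plan could equally be expressed as an `AWS::Backup::BackupPlan` resource in a template.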

**Failover Routing:** Route 53 failover routing ensures that traffic is directed to the secondary Region in the event of a disaster, and the use of Regional endpoints for API Gateway adds another layer of resiliency.
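The failover routing described above boils down to a pair of records on the same name, one marked PRIMARY (tied to a health check) and one marked SECONDARY. The sketch below shows the shape of that record pair; the domain name, hosted zone ID, health check ID, and API Gateway endpoint hostnames are placeholders, and a real setup would more likely use alias records to API Gateway custom domain names.

```python
# Hypothetical Route 53 failover record pair for the two Regional API
# Gateway endpoints. All identifiers and hostnames are assumptions.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com",
                "Type": "CNAME",
                "TTL": 60,
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",
                "HealthCheckId": "hc-primary-placeholder",  # assumed health check
                "ResourceRecords": [
                    {"Value": "abc123.execute-api.us-east-1.amazonaws.com"}
                ],
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com",
                "Type": "CNAME",
                "TTL": 60,
                "SetIdentifier": "secondary",
                "Failover": "SECONDARY",  # served only while PRIMARY is unhealthy
                "ResourceRecords": [
                    {"Value": "def456.execute-api.us-west-2.amazonaws.com"}
                ],
            },
        },
    ]
}

# With boto3 this batch would be applied roughly as:
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="Z-PLACEHOLDER", ChangeBatch=change_batch)
failovers = [c["ResourceRecordSet"]["Failover"] for c in change_batch["Changes"]]
print(failovers)
```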

Why Not the Other Options:

**Options A & D:** While Aurora Global Database and DynamoDB global tables offer near-instant replication and can meet stricter RPO and RTO requirements, they are generally more expensive because of the continuous replication and storage costs in multiple Regions.

**Option B:** This option involves more complexity and components like AWS DMS, EventBridge, and Lambda, which may increase operational overhead and costs. It also doesn't significantly improve RPO/RTO compared to using AWS Backup in this scenario.

EXPERT
answered a month ago
  • The answer to this question is more nuanced. It is not correct to state unequivocally that AWS Backup is always the most cost-effective DR strategy here. Fundamentally, it comes down to the fact that in both DDB and Aurora, snapshots are always "full" snapshots. I will post a response that helps to illustrate the factors to consider.


Hello... from the pure database perspective, it is important to understand that both Aurora and DynamoDB (DDB) only support normal point-in-time recovery within their respective primary Regions. Any cross-Region support for backups/snapshots produces "full" copies of the Aurora cluster's storage volume or the DDB table. AWS Backup can orchestrate creating these full copies and copying them between Regions, but at the end of the day the backup/snapshot is a full copy as of the point in time when the snapshot creation started.

Because of this behavior, whether a snapshot-based approach like AWS Backup is more or less cost-effective than a change-data-capture (CDC) based approach like Aurora Global Database or DDB Global Tables depends on many factors. Some factors are technical (e.g., write volume per unit of time) while others are business requirements (e.g., RPO and RTO). I work in the Aurora engineering team, so I can speak more authoritatively about the Aurora case, but I suspect DDB is very similar given that it too only supports "full" on-demand backups.

I suspect Aurora Global Database replication is the most cost-effective option compared to full snapshot copies across Regions, especially if the RPO requirement for DR is low. AWS Backup charges the same per-GB data transfer rate as Aurora Global Database. In your case, a 2-hour RPO implies you have to create and copy a full snapshot across Regions at least every 2 hours. How could the sum of all the transaction log writes (sized at 2.75 KB per write for billing purposes) over that same period amount to as much data as a full copy? I doubt it would in virtually all cases. In Aurora, the biggest way to reduce costs is to run the secondary Region "headless" (i.e., with no database compute instance running) or to use Aurora Serverless v2 with a low minimum ACU capacity.

In summary, I'll bet that if you do the math as a paper exercise (copying an X GB, or TB!, database across Regions every 2 hours to meet your RPO, plus the storage costs of keeping some number of copies; one copy may not be enough to meet business requirements to go back in time after a ransomware or other cyber attack), you will find that CDC-based Aurora Global Database or DDB Global Tables is more cost-effective in almost all scenarios.
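As a back-of-the-envelope illustration of the paper exercise described above, the sketch below compares cross-Region data transfer volume for the two approaches under assumed inputs: a 500 GB cluster volume, a steady 200 writes per second, and the 2.75 KB per-write billing size quoted in the answer. The database size and write rate are made-up assumptions; only the RPO and the per-write size come from the thread.

```python
# Back-of-the-envelope comparison of cross-Region data transfer:
# full-snapshot copies (AWS Backup style) vs log-based replication
# (Aurora Global Database style). Inputs are illustrative assumptions,
# not AWS pricing or real workload figures.

DB_SIZE_GB = 500          # assumed Aurora cluster volume size
RPO_HOURS = 2             # from the question: copy a snapshot at least this often
WRITES_PER_SECOND = 200   # assumed steady write rate
LOG_WRITE_KB = 2.75       # per-write billing size quoted in the answer above

# Snapshot approach: every copy moves the full volume across Regions.
copies_per_day = 24 / RPO_HOURS
snapshot_gb_per_day = DB_SIZE_GB * copies_per_day

# CDC approach: only the transaction log stream crosses Regions.
cdc_gb_per_day = WRITES_PER_SECOND * LOG_WRITE_KB * 86_400 / (1024 ** 2)

print(f"Snapshot copies:  {snapshot_gb_per_day:,.0f} GB/day")
print(f"Log replication:  {cdc_gb_per_day:,.0f} GB/day")
```

Under these assumptions the snapshot approach moves two orders of magnitude more data per day, which is the engineer's point; a workload with a much smaller database or an extremely heavy write rate could shift the balance.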

AWS
answered a month ago
