
Is DynamoDB really a sensible option for any business that is write-heavy?

0

Hi, we are building an application with a write-heavy workload: a points table that needs to be updated based on user actions. Unfortunately, while doing a cost estimation, I noticed that regardless of the size of the UpdateItem/PutItem request (even if the update is only a few bytes), I am charged a flat minimum of 1 WCU, which covers up to 1 KB. How can this be justified? It can significantly increase the cost of the database for a use case like ours, where point changes must be applied instantly so they are reflected on user accounts. Is DynamoDB not the right database for our use case? How do the big players use DynamoDB in a cost-efficient manner? Any ideas or alternative approaches are welcome. We are currently using DynamoDB on-demand mode to handle demand spikes, but it doesn't feel like a cost-effective approach for scaling the business.

2 Answers
1

You're right that the minimum cost for a write is 1 WCU (up to 1 KB). Note also that you pay for the entire item being written, not just the bytes you change: if you update a 10 KB item by adding a couple of extra bytes, you are charged 11 WCUs, because the item is now just over 10 KB and the size is rounded up to the next 1 KB.
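
To make the rounding concrete, here is a minimal sketch (plain Python, no AWS calls; the helper name is just illustrative) of how consumption is estimated for a standard write:

```python
import math

def wcus_for_write(item_size_bytes: int) -> int:
    """WCUs consumed by one standard write: full item size rounded up to the next 1 KB."""
    return max(1, math.ceil(item_size_bytes / 1024))

print(wcus_for_write(100))            # 1  -> a tiny points update still costs 1 WCU
print(wcus_for_write(10 * 1024))      # 10 -> a 10 KB item
print(wcus_for_write(10 * 1024 + 2))  # 11 -> a 10 KB item plus a couple of extra bytes
```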

In terms of whether DynamoDB handles heavy write throughput well, the answer is absolutely yes. For costs, you can use provisioned capacity mode, which can scale up and down as your demand changes. You can also purchase 1- or 3-year reserved capacity, offering roughly 50-70% discounts respectively.

The DynamoDB free tier also includes enough capacity to handle up to 200M requests per month at no charge.

AWS
EXPERT
answered 2 months ago
0

Hello Indie,

You're correct that DynamoDB's pricing is based on the number of write capacity units (WCU), and as you've noted, one WCU can handle a write of up to 1KB in size. If your updates are smaller than 1KB, you are still charged for the full WCU, which may seem inefficient for small frequent writes like updating points.

Try the strategies below, which I think could help reduce costs or make DynamoDB more suitable for your use case:

1. Use Provisioned Capacity Instead of On-Demand

  • On-demand pricing is more expensive per request compared to provisioned capacity, especially when traffic becomes predictable. If you can estimate your write throughput needs, switching to provisioned capacity can significantly reduce costs. You can also enable Auto Scaling to adjust capacity based on demand spikes, reducing the risk of under- or over-provisioning.
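
As a rough sketch of what that switch can look like with boto3 (the table name `Points` and all capacity numbers below are placeholders, not recommendations):

```python
import boto3

dynamodb = boto3.client("dynamodb")
autoscaling = boto3.client("application-autoscaling")

# Switch the table from on-demand to provisioned capacity.
dynamodb.update_table(
    TableName="Points",
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 50, "WriteCapacityUnits": 200},
)

# Let Application Auto Scaling adjust write capacity between a floor and a ceiling.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Points",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=50,
    MaxCapacity=1000,
)
autoscaling.put_scaling_policy(
    PolicyName="points-write-scaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/Points",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,  # aim for ~70% utilization of provisioned WCUs
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```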

2. Batch Writes

  • You can reduce the number of network round trips by batching writes with the BatchWriteItem API, which groups up to 25 put or delete requests into a single call. Be aware, though, that this does not reduce the overall number of WCUs consumed: each item in the batch is metered individually, and BatchWriteItem does not support UpdateItem. The benefit is lower request overhead and latency rather than lower write cost. A sketch is shown below.
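
A minimal sketch using boto3's `batch_writer` helper (the table and attribute names are assumptions); the comment spells out the cost caveat:

```python
import boto3

table = boto3.resource("dynamodb").Table("Points")  # assumed table name

# batch_writer groups puts into BatchWriteItem calls (up to 25 items per request)
# and retries unprocessed items. It saves round trips, but every item still
# consumes its own WCUs, so the write cost itself is unchanged.
with table.batch_writer() as batch:
    for user_id, points in [("user-1", 10), ("user-2", 25), ("user-3", 5)]:
        batch.put_item(Item={"pk": user_id, "points": points})
```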

3. Optimize Data Structure

  • Evaluate your data model. Write cost is based on the total size of the item being written, so keeping the points counter in a small, dedicated (denormalized) item with only the attributes you actually need keeps every update at 1 WCU, even if the rest of the user's profile data is large. A sketch is shown below.
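
For example, a points counter kept in its own small item can be incremented atomically with an ADD update; this is a hypothetical sketch (table, key, and attribute names are assumptions):

```python
import boto3

table = boto3.resource("dynamodb").Table("Points")  # assumed table name

# Keep the points counter in a small, dedicated item so every update stays
# well under 1 KB (1 WCU). ADD increments the counter atomically on the server,
# so the client does not need to read the item first.
table.update_item(
    Key={"pk": "user-1"},
    UpdateExpression="ADD points :delta",
    ExpressionAttributeValues={":delta": 10},
)
```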

4. Use DynamoDB Streams for Asynchronous Processing

  • Consider whether point updates truly need to be reflected in real time. If not, you could capture point events and use DynamoDB Streams to process and apply them asynchronously, folding several deltas for the same user into a single write operation; it is this consolidation that reduces WCU consumption. A sketch of such a consumer is shown below.
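
As a rough, hypothetical sketch of such an aggregating consumer (for example a Lambda function subscribed to the stream of an events table; the table, key, and attribute names are assumptions):

```python
from collections import defaultdict

import boto3

points_table = boto3.resource("dynamodb").Table("Points")  # assumed table name

def handler(event, context):
    """Fold a batch of point-event stream records into one update per user."""
    deltas = defaultdict(int)
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue
        image = record["dynamodb"]["NewImage"]
        # Stream images use the low-level attribute-value format, e.g. {"N": "10"}.
        deltas[image["user_id"]["S"]] += int(image["delta"]["N"])

    # One consolidated write per user instead of one write per event.
    for user_id, delta in deltas.items():
        points_table.update_item(
            Key={"pk": user_id},
            UpdateExpression="ADD points :d",
            ExpressionAttributeValues={":d": delta},
        )
```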

5. Use Global Secondary Indexes (GSIs) Wisely

  • Avoid overusing GSIs unless necessary, as every write to a table with a GSI will also consume additional WCUs for the GSI's write operations.

6. Leverage Conditional Updates

  • DynamoDB's ConditionExpression lets you apply a write only when certain conditions are met (e.g., only when a user's points actually change). Keep in mind, though, that a conditional write whose condition fails still consumes write capacity, so conditions protect data integrity rather than directly lowering cost; to save WCUs, avoid sending unnecessary requests in the first place (for example by skipping no-op updates in the application). A sketch is shown below.
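
A minimal sketch of a conditional update with boto3 (table, key, and attribute names are assumptions); note the comment about failed conditions:

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Points")  # assumed table name

try:
    # Only grant the bonus if the user has not already received it.
    # Caution: if the condition fails, the request still consumes write capacity,
    # so this is about correctness, not about lowering WCU cost.
    table.update_item(
        Key={"pk": "user-1"},
        UpdateExpression="ADD points :bonus SET bonus_claimed = :true",
        ConditionExpression="attribute_not_exists(bonus_claimed)",
        ExpressionAttributeValues={":bonus": 100, ":true": True},
    )
except ClientError as err:
    if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
        raise
```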

7. Consider DynamoDB TTL for Archival Data

  • If you're maintaining a large volume of historical data, consider setting Time-to-Live (TTL) on items that can expire after a certain period. TTL deletions run in the background and are free, so you save storage as well as the write capacity you would otherwise spend deleting old items yourself. A sketch is shown below.
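
Enabling TTL is a one-time call per table; here is a sketch assuming an archival table named `PointsHistory` with an epoch-seconds attribute named `expires_at` (both names are assumptions):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# The TTL attribute must hold the expiry time as epoch seconds (a Number).
# Expired items are deleted in the background at no write-capacity cost.
dynamodb.update_time_to_live(
    TableName="PointsHistory",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)
```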

8. Explore Alternatives

If cost continues to be a concern, you could explore alternative databases:

  • Amazon Aurora Serverless: scales capacity up and down based on demand while giving you full relational SQL capabilities.
  • Amazon RDS: If you are fine with using a relational database, RDS offers predictable pricing.
  • Redis + DynamoDB (Hybrid): You could cache frequently changing point data in Redis and then periodically sync it to DynamoDB. This reduces the frequency of writes and keeps the costs down.
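
To illustrate the hybrid idea, here is a hypothetical sketch using the `redis` Python client (key names, table name, and the flush schedule are all assumptions, and a production version would need to handle concurrent updates between the read and the delete):

```python
import boto3
import redis

r = redis.Redis(host="localhost", port=6379)
points_table = boto3.resource("dynamodb").Table("Points")  # assumed table name

def record_points(user_id: str, delta: int) -> None:
    """Absorb frequent point changes in Redis; no DynamoDB write per event."""
    r.hincrby("pending_points", user_id, delta)

def flush_to_dynamodb() -> None:
    """Run periodically (e.g. every minute): one DynamoDB write per user."""
    pending = r.hgetall("pending_points")  # dict of bytes -> bytes
    for user_id, delta in pending.items():
        points_table.update_item(
            Key={"pk": user_id.decode()},
            UpdateExpression="ADD points :d",
            ExpressionAttributeValues={":d": int(delta)},
        )
        r.hdel("pending_points", user_id)
```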

For large-scale applications, big players usually optimize by using a combination of provisioned capacity, efficient data modeling, batch operations, and caching layers to reduce the frequency and size of writes. Balancing real-time updates with background synchronization is key to managing costs in write-heavy applications like yours.

Please let me know if any of these suggestions work for your application.

answered 2 months ago
AWS
MODERATOR
verified 2 months ago
  • Many of the things in this answer are incorrect, such as using BatchWriteItem or conditional writes to decrease cost; neither reduces WCU consumption.
