1 Answer
Transactions wouldn't be required, and the pooling method looks like overkill to me. If you have a multi-tenant table, you can simply keep per-tenant storage billing information in the same table, keyed by tenant ID:
| PK      | SK    | Data |
|---------|-------|------|
| Tenant1 | Item1 | Data |
| Tenant1 | Item2 | Data |
| Tenant1 | Item3 | Data |
| Tenant3 | Item1 | Data |
| Tenant3 | Item2 | Data |
Now every time you add or remove an item, the response reports its consumed capacity (when you request it with `ReturnConsumedCapacity`), which gives you an indication of how much data is being stored, rounded up to the nearest 1KB. So for every 1 WCU consumed, attribute 1KB of storage to that tenant. Then you can update your billing table asynchronously as you see fit:
| PK      | SK      | Storage |
|---------|---------|---------|
| Tenant1 | Billing | 3KB     |
| Tenant3 | Billing | 2KB     |
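The attribution rule above can be sketched in a few lines. This is a minimal illustration of the rounding and accumulation logic only (the `StorageMeter` name and methods are made up for the example); in a real application the WCU values would come from the `ConsumedCapacity` field of the `PutItem`/`DeleteItem` response rather than being computed locally:

```python
import math
from collections import defaultdict

def wcus_for_item(item_size_bytes: int) -> int:
    # A standard DynamoDB write consumes 1 WCU per 1KB, rounded up.
    return max(1, math.ceil(item_size_bytes / 1024))

class StorageMeter:
    """Approximate per-tenant storage: attribute 1KB per consumed WCU."""
    def __init__(self):
        self.kb_by_tenant = defaultdict(int)

    def record_write(self, tenant_id: str, consumed_wcus: int) -> None:
        self.kb_by_tenant[tenant_id] += consumed_wcus

    def record_delete(self, tenant_id: str, consumed_wcus: int) -> None:
        self.kb_by_tenant[tenant_id] -= consumed_wcus

meter = StorageMeter()
meter.record_write("Tenant1", wcus_for_item(900))   # 900 B rounds up to 1 WCU -> 1KB
meter.record_write("Tenant1", wcus_for_item(2500))  # 2500 B rounds up to 3 WCUs -> 3KB
meter.record_delete("Tenant1", 1)                   # item removed -> subtract 1KB
print(meter.kb_by_tenant["Tenant1"])  # 3
```

The periodic flush of `kb_by_tenant` into the billing items is where the asynchronous update mentioned above would happen.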
Thanks Leeroy, but the method you described is a simple accumulation, which unfortunately is not accurate in the presence of failures. As we all know, one of the maxims of AWS is to embrace failure, because failures are everywhere in the cloud. To protect against them, we have to wrap the operations you described in a transaction, which brings us back to my question.
Is there an AWS native way to do multitenant metering in DynamoDB?
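For concreteness, making the write and the billing update atomic would mean a `TransactWriteItems` request that pairs the item `Put` with an `ADD` update on the tenant's billing item. This is a sketch of the request shape only (table name, keys, and the `StorageKB` attribute are hypothetical), not a drop-in implementation:

```python
def build_metered_put(table: str, tenant_id: str, item_sk: str, size_kb: int) -> dict:
    """Build a TransactWriteItems request that writes an item and bumps the
    tenant's storage counter in one atomic transaction."""
    return {
        "TransactItems": [
            {
                "Put": {
                    "TableName": table,
                    "Item": {
                        "PK": {"S": tenant_id},
                        "SK": {"S": item_sk},
                    },
                }
            },
            {
                "Update": {
                    "TableName": table,
                    "Key": {
                        "PK": {"S": tenant_id},
                        "SK": {"S": "Billing"},
                    },
                    # ADD is atomic and creates the counter if it doesn't exist yet.
                    "UpdateExpression": "ADD StorageKB :kb",
                    "ExpressionAttributeValues": {":kb": {"N": str(size_kb)}},
                }
            },
        ]
    }

request = build_metered_put("TenantData", "Tenant1", "Item4", 2)
# With boto3: boto3.client("dynamodb").transact_write_items(**request)
```

Note that transactions consume double the write capacity of plain writes, which is part of the cost trade-off being discussed here.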
Typically, metering for storage does not need to be that accurate. You can make approximations and then run a sweeper once per month to correct the estimate. Another option is to listen to the table's DynamoDB Stream for insert and delete events; if all your items are approximately the same size, you can keep an approximate count without needing transactions. Unfortunately, there is no native solution.