Hi @nikos64,
The volume and instance limits are separate. The throughput bottleneck depends on which limit is reached first.
For example:
- A db.r6g.8xlarge instance (1,125 MB/s instance limit) with a single gp2 volume attached will have a 250 MB/s throughput limit.
- The same instance with 4 gp2 volumes attached will have a limit of 1,000 MB/s (4 x 250 MB/s).
- With 6 gp2 volumes attached, the limit will be 1,125 MB/s, since the instance limit is lower than the aggregated volume throughput limit (6 x 250 MB/s = 1,500 MB/s).

Note: the 250 MB/s throughput limit applies to large gp2 volumes. Smaller volumes will have a lower limit depending on their size.
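The min-of-two-limits logic behind these examples can be sketched as follows (a minimal illustration; the function name is made up, and the 250 MB/s per-volume figure assumes large gp2 volumes as noted above):

```python
def effective_throughput(instance_limit_mb, n_volumes, per_volume_mb=250):
    """Aggregate EBS throughput is capped by whichever is lower:
    the instance's EBS-optimized limit or the sum of the per-volume limits."""
    return min(instance_limit_mb, n_volumes * per_volume_mb)

# Worked examples for db.r6g.8xlarge (1,125 MB/s instance limit):
print(effective_throughput(1125, 1))  # 250  -> the single volume is the bottleneck
print(effective_throughput(1125, 4))  # 1000 -> 4 x 250 is still below the instance limit
print(effective_throughput(1125, 6))  # 1125 -> the instance limit is now the bottleneck
```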
How many volumes are attached to your RDS db.r6g.8xlarge instance type?
In addition to the previous answer by my colleague, hopefully these points will be helpful:
- The documented EC2 limits on networking and IO (EBS Optimization) for each instance class/size do apply to RDS.
- The documented EBS limits on throughput and IOPS for each volume do apply to RDS. However, when necessary (and quoting RDS docs) "Depending on the amount of storage requested, Amazon RDS automatically stripes across multiple Amazon EBS volumes to enhance performance."
Most of these limits are evaluated over very small time intervals, so we normally recommend that RDS customers enable RDS Enhanced Monitoring with a one-second data-gathering interval. Your workload might reach the limits only briefly due to other constraints, such as locking, consistency, or durability features of the database engine. Enhanced Monitoring will also capture data for each EBS volume on the instance.
And, if the instance is not reaching one of its limits, keep in mind that the client machine(s) might be the bottleneck, too.
Lastly: on a recently restored RDS instance, lazy loading (a.k.a. EBS initialization) is another factor that affects I/O performance.
Thank you for your comments. I will try Enhanced Monitoring. To improve the speed of bulk write operations, I am planning to configure my parameter group according to this guide: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.Procedural.Importing.html. But it's also crucial to understand whether I'm hitting any bottlenecks.
Thank you for your informative reply. I have selected 5,000 GiB of gp2 storage for my RDS instance, with storage autoscaling up to a maximum of 6,000 GiB. How can I see how many volumes are attached? Is that part of Enhanced Monitoring?
Hi Nikos,
This is indeed part of Enhanced Monitoring. RDS does not directly expose the individual volumes, but if you look at the Enhanced Monitoring metrics in CloudWatch, there is a "Physical Device I/O" category. From it you can select reads/s and writes/s, which show per-device metrics from which you can infer the number of volumes in your RAID 0 stripe. (Credit for the answer goes to Phil; see https://forums.aws.amazon.com/thread.jspa?threadID=300605.)
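Counting the distinct devices in one monitoring event gives the stripe width. A rough sketch of that idea, assuming a simplified payload shape (the real Enhanced Monitoring events are JSON documents in the RDSOSMetrics CloudWatch Logs group, and the exact field names vary by engine, so treat the sample below as hypothetical):

```python
import json

# Hypothetical Enhanced Monitoring event; device names and metric
# fields here are illustrative, not the exact published schema.
sample_event = json.dumps({
    "diskIO": [
        {"device": "rdsdev0", "readIOsPS": 120.0, "writeIOsPS": 340.0},
        {"device": "rdsdev1", "readIOsPS": 118.0, "writeIOsPS": 335.0},
        {"device": "rdsdev2", "readIOsPS": 121.0, "writeIOsPS": 338.0},
        {"device": "rdsdev3", "readIOsPS": 119.0, "writeIOsPS": 341.0},
    ]
})

def count_striped_volumes(event_json: str) -> int:
    """Count the distinct physical devices reported in one monitoring event."""
    metrics = json.loads(event_json)
    return len({d["device"] for d in metrics.get("diskIO", [])})

print(count_striped_volumes(sample_event))  # 4 devices -> a 4-volume stripe
```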
From my own tests, it looks like a 5,000 GiB RDS MySQL database instance spreads data across 4 volumes. This correlates with the numbers you are seeing: peak total throughput of around 1,000 MB/s.
Thank you kdavyd, very informative. So, according to this: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html, if I select Provisioned IOPS and provision e.g. 33,000 IOPS, I should expect a max throughput of 1,000 MiB/s regardless of the number of volumes, right?