I eventually raised an AWS support case, and here is their explanation. Marking this as answered:
You're absolutely correct in your understanding that typically, high CPU usage would be attributed to USER CPU. However, in the case of RDS PostgreSQL, the behavior you're seeing is actually expected: In RDS, CPU utilization is categorized differently compared to a standard PostgreSQL installation on an EC2 instance or on-premises server. The 'nice%' metric in RDS represents the CPU time used by your database workload, which is given higher priority over other system tasks.
Both user% and nice% represent user-level processes. The key difference is that nice% is used for processes with adjusted (usually higher) priority. In RDS, your database workload is given this higher priority to ensure optimal performance. When analyzing your RDS instance's performance, you can consider the sum of user% and nice% as your effective database user-related CPU usage.
The high nice% you're seeing in Performance Insights and CloudWatch metrics is a reflection of your database workload; it shows that your database is utilizing the CPU resources as intended. You're correct that if you were to run VMSTAT on a standard server, you would see an increase in user CPU for serving user requests. The difference in RDS is due to how AWS prioritizes and categorizes these processes to optimize performance.
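For anyone who wants to sanity-check this on their own instance, here is a minimal sketch of what support describes: pulling the os.cpuUtilization.user and os.cpuUtilization.nice OS counter metrics from Performance Insights and summing them to get the effective database CPU usage. It assumes boto3, that Performance Insights is enabled on the instance, and uses a placeholder region and DbiResourceId that you would need to replace.

```python
# Hypothetical sketch: sum user% + nice% from Performance Insights OS counters.
# Assumes Performance Insights is enabled; region and Identifier are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

pi = boto3.client("pi", region_name="us-east-1")  # placeholder region

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

resp = pi.get_resource_metrics(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKLMNOP",  # placeholder DbiResourceId of your instance
    MetricQueries=[
        {"Metric": "os.cpuUtilization.user.avg"},
        {"Metric": "os.cpuUtilization.nice.avg"},
    ],
    StartTime=start,
    EndTime=end,
    PeriodInSeconds=300,
)

# Index each metric's datapoints by timestamp, then print user% + nice%.
series = {
    m["Key"]["Metric"]: {p["Timestamp"]: p.get("Value") for p in m["DataPoints"]}
    for m in resp["MetricList"]
}
user = series.get("os.cpuUtilization.user.avg", {})
nice = series.get("os.cpuUtilization.nice.avg", {})
for ts in sorted(user):
    u, n = user.get(ts), nice.get(ts)
    if u is not None and n is not None:
        print(f"{ts.isoformat()}  user+nice = {u + n:.1f}%")
```

In my case the nice% line dominates and the combined user%+nice% tracks what I'd expect my workload to consume, which matches the explanation above.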