What could explain the difference in physical memory usage reported by Solr and AWS Fargate/CloudWatch?


I have a Solr instance running in an AWS Fargate task. Solr is configured with Xmx set to 1G, so the remaining memory should be available to MMapDirectory via the OS page cache.
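For context on how that heap limit is typically applied, here is a minimal sketch of the relevant Solr include-script setting, assuming a standard bin/solr installation (the file path is an assumption, not something stated in the question):

# solr.in.sh (location varies by install, e.g. /etc/default/solr.in.sh)
SOLR_HEAP="1g"    # sets -Xms1g -Xmx1g for the Solr JVM; all remaining task memory
                  # is left to the OS page cache, which is what MMapDirectory relies on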

Looking at the AWS console, the memory usage reported by Fargate/CloudWatch is about 35%, while Solr reports physical memory usage of 4GB, which is 100%.

To check whether the number reported by Solr was somehow off, I increased the total memory for the Fargate task to 24GB. In that case, Solr reported physical memory usage of about 7.5GB.

What could explain the difference between the memory usage reported by Solr and by Fargate (4GB task: 100% vs. 35%; 24GB task: ~30% vs. ~4%)? Regardless of the memory available to the Fargate task, the usage reported by Fargate seems to reflect only the memory used by the Solr process itself (at most the 1GB reserved for the heap), and not the physical memory usage that Solr reports.
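To pin down where Solr's number comes from, the admin UI gauge should also be visible through the metrics API; a minimal sketch, assuming Solr is reachable on the default port 8983 inside the task (host, port, and the exact gauge names are assumptions to verify against your Solr version):

# Query the JVM/OS gauges behind the admin UI's "Physical Memory" bar
curl -s 'http://localhost:8983/solr/admin/metrics?group=jvm&prefix=os'
# Expect gauges such as os.totalPhysicalMemorySize and os.freePhysicalMemorySize,
# which typically come from the JVM's OperatingSystemMXBean rather than the
# container's cgroup limit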

4GB RAM available to the Fargate task:
(screenshots)

24GB RAM available to the Fargate task:
(screenshot)

From inside the Fargate container:

cat /proc/meminfo 
MemTotal:        7910348 kB
MemFree:         1370148 kB
MemAvailable:    6047192 kB
Buffers:           56336 kB
Cached:          4771752 kB
SwapCached:            0 kB
Active:           326432 kB
Inactive:        5970884 kB
Active(anon):        424 kB
Inactive(anon):  1469224 kB
Active(file):     326008 kB
Inactive(file):  4501660 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:               552 kB
Writeback:             0 kB
AnonPages:       1469304 kB
Mapped:          1222612 kB
Shmem:               412 kB
KReclaimable:     139492 kB
Slab:             180956 kB
SReclaimable:     139492 kB
SUnreclaim:        41464 kB
KernelStack:        5312 kB
PageTables:        14836 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     3955172 kB
Committed_AS:    3050700 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       12548 kB
VmallocChunk:          0 kB
Percpu:             1216 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
FileHugePages:         0 kB
FilePmdMapped:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:      100264 kB
DirectMap2M:     5971968 kB
DirectMap1G:     2097152 kB
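
One way to cross-check the numbers above against what Fargate actually enforces and measures is to read the container's cgroup memory accounting instead of /proc/meminfo (which reflects the host). A rough sketch, assuming the task runs under cgroup v1; the v2 file names are listed as an alternative:

# cgroup v1
cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # memory limit applied to the container
cat /sys/fs/cgroup/memory/memory.usage_in_bytes   # usage counted against that limit
cat /sys/fs/cgroup/memory/memory.stat             # breakdown incl. cache vs. rss

# cgroup v2, if the paths above do not exist
cat /sys/fs/cgroup/memory.max
cat /sys/fs/cgroup/memory.current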
1 Answer

I think the memory usage reported by Fargate and by Solr can differ because Fargate abstracts away the underlying infrastructure. When checking memory usage from within the container, tools like free and top report usage based on the entire underlying EC2 instance's memory, not just the task memory limit.

  • The memory value set in the task definition is used by AWS to allocate resources and enforce limits for the task.
  • When checking memory usage from inside the container, you see the total instance memory rather than the task limit.
  • For accurate task memory metrics, use CloudWatch or container-level monitoring tools designed for containers.
  • By default, Fargate sends logs directly to CloudWatch without storing them on disk, so log output does not affect the task's disk usage.
  • To view memory metrics for Fargate tasks, enable CloudWatch metrics and monitor them there, or use Container Insights, which is designed for containerized environments (see the sketch below for pulling the same kind of statistics from inside the task).
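
In addition to Container Insights, the ECS task metadata endpoint that Fargate injects into every container can be queried from inside the task; it returns per-container statistics in Docker-stats format, which is close to the accounting CloudWatch reports. A minimal sketch, assuming Fargate platform version 1.4.0 or later (where the v4 endpoint is available); jq is used only for readability:

# Per-container CPU/memory statistics for the running task
curl -s "${ECS_CONTAINER_METADATA_URI_V4}/task/stats" | jq .

# Task metadata, including the task-level memory limit from the task definition
curl -s "${ECS_CONTAINER_METADATA_URI_V4}/task" | jq '.Limits'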