I have a Solr instance running as an AWS Fargate task. The JVM is configured with -Xmx set to 1G; the remaining memory should be left to the OS page cache that MMapDirectory relies on.
Looking at the AWS console, Fargate reports memory utilization of about 35%, while Solr's dashboard reports Physical Memory usage of 4GB, i.e. 100%.
To see whether the number reported by Solr was somehow off, I increased the Fargate task's total memory to 24GB. In that case, Solr reported physical memory usage of about 7.5GB.
What could explain the difference between what Solr and Fargate report (4GB task: 100% vs 35%; 24GB task: ~30% vs ~4%)? Regardless of the memory available to the task, the utilization reported by Fargate seems to reflect only the memory used by the Solr process itself (at most the 1GB reserved for the heap), not the physical memory usage that Solr reports.
[Screenshot: 4GB RAM available to the Fargate task]
[Screenshot: 24GB RAM available to the Fargate task]
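My working theory is that the two sides simply read different counters: Fargate's utilization appears to come from the task's cgroup (usage minus inactive file cache, the same formula `docker stats` uses), while Solr's Physical Memory gauge reflects the host-level view, where the page cache filled by MMapDirectory counts as used. A minimal sketch of the cgroup-side calculation, assuming cgroup v1 paths (under cgroup v2 the files are `memory.current` and `memory.stat` with different keys) — the exact formula Fargate uses is my assumption, not something documented:

```python
import os

def read_kb(path):
    """Read a single integer value in bytes from a file and convert to kB."""
    with open(path) as f:
        return int(f.read().strip()) // 1024

def cgroup_working_set_kb(base="/sys/fs/cgroup/memory"):
    """Cgroup usage minus inactive file cache -- roughly what docker stats
    (and, I assume, Fargate/CloudWatch) report as utilized memory."""
    usage = read_kb(f"{base}/memory.usage_in_bytes")
    stats = {}
    with open(f"{base}/memory.stat") as f:
        for line in f:
            key, value = line.split()
            stats[key] = int(value) // 1024  # memory.stat values are in bytes
    return usage - stats.get("total_inactive_file", 0)

if __name__ == "__main__":
    base = "/sys/fs/cgroup/memory"
    if os.path.exists(f"{base}/memory.usage_in_bytes"):
        print("cgroup working set:", cgroup_working_set_kb(base), "kB")
```

If this working-set number lines up with what the Fargate console shows, that would confirm the Fargate metric ignores the page cache.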
In Fargate container:
cat /proc/meminfo
MemTotal: 7910348 kB
MemFree: 1370148 kB
MemAvailable: 6047192 kB
Buffers: 56336 kB
Cached: 4771752 kB
SwapCached: 0 kB
Active: 326432 kB
Inactive: 5970884 kB
Active(anon): 424 kB
Inactive(anon): 1469224 kB
Active(file): 326008 kB
Inactive(file): 4501660 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 552 kB
Writeback: 0 kB
AnonPages: 1469304 kB
Mapped: 1222612 kB
Shmem: 412 kB
KReclaimable: 139492 kB
Slab: 180956 kB
SReclaimable: 139492 kB
SUnreclaim: 41464 kB
KernelStack: 5312 kB
PageTables: 14836 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 3955172 kB
Committed_AS: 3050700 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 12548 kB
VmallocChunk: 0 kB
Percpu: 1216 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
FileHugePages: 0 kB
FilePmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 100264 kB
DirectMap2M: 5971968 kB
DirectMap1G: 2097152 kB
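Plugging the numbers from this dump into the two obvious definitions of "used" shows how far apart they can be: one counts the page cache as used (which seems to be what Solr's Physical Memory gauge reflects), the other treats reclaimable cache as free (closer to what Fargate shows). A quick sketch:

```python
import os

def used_percentages(meminfo_text):
    """Compute 'used %' two ways from /proc/meminfo contents:
    - free-based:      (MemTotal - MemFree) / MemTotal, counts page cache as used
    - available-based: (MemTotal - MemAvailable) / MemTotal, treats
      reclaimable cache as free"""
    kb = {}
    for line in meminfo_text.splitlines():
        key, rest = line.split(":", 1)
        kb[key] = int(rest.split()[0])
    total = kb["MemTotal"]
    free_based = 100.0 * (total - kb["MemFree"]) / total
    avail_based = 100.0 * (total - kb["MemAvailable"]) / total
    return free_based, avail_based

if __name__ == "__main__" and os.path.exists("/proc/meminfo"):
    with open("/proc/meminfo") as f:
        print(used_percentages(f.read()))
```

With the dump above this yields roughly 83% (free-based) versus 24% (available-based), which is about the size of the gap between the Solr and Fargate numbers.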