My website suddenly went down, and the root cause was clearly that the server ran out of memory, as shown by the graph below and by these entries in /var/log/syslog from the time the site went down:
Jun 1 14:15:57 mfh-elsalvador kernel: [914994.237820] Out of memory: Killed process 114677 (apache2) total-vm:359436kB, anon-rss:41688kB, file-rss:3100kB, shmem-rss:85448kB, UID:1000 pgtables:480kB oom_score_adj:0
Jun 1 14:15:58 mfh-elsalvador systemd[1]: apache2.service: A process of this unit has been killed by the OOM killer.
This is a WordPress website running on Ubuntu with Apache.
This had already happened when the server had only 1 GB of RAM. Memory consumption was almost always around 80%, so at the time I thought the problem was simply that I needed more RAM. I increased it to 2 GB and consumption now sits at around 40% all the time. I suspect that if I increase it to 4 GB the average consumption will drop to around 20% or less, but the website will still go down at some point when one or more processes hit the peak of available RAM. I have the feeling that what I really need is not more RAM, but to attack the root cause through the Apache configuration. What would you recommend in terms of limiting the maximum number of requests, or the memory a single process can consume in Apache, so that this server does not run out of memory again?
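For reference, this is roughly how I have been estimating the memory used per Apache process (a rough figure, since RSS double-counts memory shared between children):

    # Average resident memory per apache2 process, in MB (rough estimate)
    ps -C apache2 -o rss= | awk '{sum+=$1; n++} END {if (n) printf "%d processes, avg %.1f MB each\n", n, sum/n/1024}'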
I am thinking that maybe I have to configure MaxRequestWorkers (https://httpd.apache.org/docs/2.4/mod/mpm_common.html), which sets the maximum number of connections that will be processed simultaneously, or something that limits the maximum amount of RAM a single process or request can consume.
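For example, assuming the prefork MPM (the default when mod_php is in use) and roughly 50 MB per Apache process (both numbers are my assumptions, not measurements from this server), I imagine something like this in /etc/apache2/mods-available/mpm_prefork.conf, with MaxRequestWorkers sized so that worst-case usage stays below the 2 GB of RAM:

    <IfModule mpm_prefork_module>
        StartServers             5
        MinSpareServers          5
        MaxSpareServers         10
        # ~2048 MB total, minus ~512 MB reserved for MySQL/OS,
        # divided by ~50 MB per child, is about 30 workers
        MaxRequestWorkers       30
        # recycle each child after N requests to contain PHP memory leaks
        MaxConnectionsPerChild  1000
    </IfModule>

followed by apachectl configtest and systemctl reload apache2. Does that look like the right direction, or is there a better way? Any ideas?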