MySQL keeps crashing (out of memory)
For a few days now I've had two Nanode Linodes running. One hosts 5 WordPress sites, the other 7. On both Linodes, MySQL keeps crashing at random intervals. I don't understand it, since all these websites used to sit together on a budget shared hosting server, alongside 13 other websites (25 in total), and ran smoothly.
After doing some research I tried tweaking my MySQL settings (/etc/mysql/my.cnf), but with no success so far. MySQL keeps crashing, and although the sites come right back up after I manually restart MySQL through the console, it's still very inconvenient, especially when it happens in the middle of the night.
Is there anything else I can do?
Or do I need to edit a different file? I also see "/etc/mysql/mysql.cnf" and "/etc/mysql/mysql.conf.d/mysqld.cnf".
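For what it's worth, on Debian/Ubuntu the top-level /etc/mysql/my.cnf is typically just a set of include directives, so the files under the included directories are where the effective settings live. The exact contents vary by MySQL version, so check your own file, but it often looks like:

```
# Typical /etc/mysql/my.cnf on Debian/Ubuntu (contents vary by version):
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/
```

When the same option appears in more than one file, the file read last wins.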
[mysqld]
#max_allowed_packet = 1M
thread_stack = 128K
max_connections = 75
table_open_cache = 64M
key_buffer_size = 64M
key_buffer = 1600M
max_allowed_packet = 64M
sort_buffer_size = 512M
net_buffer_length = 80K
read_buffer_size = 256M
innodb_buffer_pool_size = 800M
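For reference, a back-of-envelope check of what these settings could allocate in the worst case (an illustration, not a precise model: MySQL allocates session buffers lazily and rarely all at once, but it shows why these values can't fit in 1 GB):

```python
# Values below are taken from the my.cnf above, in MB.

# Global buffers. key_buffer is the old name for key_buffer_size; with both
# set, the line read later in the file likely wins, so 1600M applies.
global_buffers = 1600 + 800            # key_buffer + innodb_buffer_pool_size

# Buffers that can each be allocated once per connection.
per_connection = 512 + 256             # sort_buffer_size + read_buffer_size

max_connections = 75
worst_case_gb = (global_buffers + per_connection * max_connections) / 1024
print(round(worst_case_gb, 1))  # ≈ 58.6 GB, on a 1 GB Nanode
```

Even a small fraction of that worst case is enough to trigger the OOM killer on a 1 GB machine.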
Feedback from the console:
Starting mysql (via systemctl): mysql.service.
root@localhost:/var/www/html# [937959.001510] Out of memory: Kill process 21637 (mysqld) score 133 or sacrifice child
[937959.003617] Killed process 21637 (mysqld) total-vm:1099924kB, anon-rss:18720kB, file-rss:0kB, shmem-rss:0kB
sudo /etc/init.d/mysql start
Starting mysql (via systemctl): mysql.service.
root@localhost:/var/www/html# [950101.974846] Out of memory: Kill process 22743 (mysqld) score 137 or sacrifice child
[950101.976273] Killed process 22743 (mysqld) total-vm:1097784kB, anon-rss:13184kB, file-rss:0kB, shmem-rss:0kB
[950143.309747] Out of memory: Kill process 23948 (mysqld) score 118 or sacrifice child
[950143.319419] Killed process 23948 (mysqld) total-vm:1097380kB, anon-rss:92036kB, file-rss:0kB, shmem-rss:0kB
free -m
total used free shared buff/cache available
Mem: 982 609 258 9 115 238
Swap: 511 476 35
top -bn 1 -o %MEM | head -n15
top - 14:24:27 up 11 days, 0 min, 1 user, load average: 0.41, 0.38, 0.19
Tasks: 134 total, 1 running, 123 sleeping, 10 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 982.9 total, 479.7 free, 382.1 used, 121.2 buff/cache
MiB Swap: 512.0 total, 194.0 free, 318.0 used. 462.0 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
23595 www-data 20 0 336024 27648 7872 S 0.0 2.7 0:07.04 apache2
22814 www-data 20 0 333780 26468 7688 S 0.0 2.6 0:07.85 apache2
19701 www-data 20 0 334168 25756 7660 S 0.0 2.6 0:17.23 apache2
23110 www-data 20 0 331200 25252 7812 S 0.0 2.5 0:08.95 apache2
19667 www-data 20 0 330728 23880 7660 S 0.0 2.4 0:15.18 apache2
20419 www-data 20 0 329440 23384 7660 S 0.0 2.3 0:14.30 apache2
22815 www-data 20 0 329132 23344 7660 S 0.0 2.3 0:08.03 apache2
21802 www-data 20 0 334008 23080 7688 S 0.0 2.3 0:12.81 apache2
3 Replies
It looks like Apache is taking over the resources on this Linode, and the kernel's OOM killer is picking off MySQL as a result, so I would look at tuning your Apache settings rather than MySQL.
Our Tuning Your Apache Server guide provides a lot of suggestions on how to get the web server running better.
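As a rough illustration (the numbers below are assumptions derived from the ~25 MB apache2 resident size in your `top` output, not values from the guide), an mpm_prefork sketch for a 1 GB Nanode might look like:

```
# /etc/apache2/mods-available/mpm_prefork.conf
# Sketch for a 1 GB Nanode; measure your own per-child RSS and adjust.
<IfModule mpm_prefork_module>
        StartServers              2
        MinSpareServers           2
        MaxSpareServers           4
        # ~400 MB left for Apache / ~25 MB per child ≈ 16 workers
        MaxRequestWorkers        16
        # Recycle children so leaky PHP requests can't grow unbounded
        MaxConnectionsPerChild  500
</IfModule>
```

The key setting is MaxRequestWorkers: it caps how many apache2 children can exist at once, which bounds Apache's total memory and leaves the remainder for mysqld.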
I checked all the settings recommended here: https://www.linode.com/docs/websites/hosting-a-website-ubuntu-18-04/#optimize-apache-for-a-linode-2gb
I halved all the numbers, since mine is a 1GB Nanode. It's getting pretty frustrating: MySQL keeps crashing and takes all the websites down multiple times a day.
Also, I can't imagine how 25 of these websites ran smoothly on a 1-core/1GB shared hosting server while this Nanode can't seem to handle just 5 of them.
Would upgrading to a 2GB plan help?
I also have another Nanode running a fairly large website and a smaller one. No issues at all, even though they use the same setup, software and plugins. The only difference is that that Nanode runs NGINX instead of Apache2. Although I prefer Apache, would switching to NGINX be better?