Need help with MySQL using all memory, starting to swap
I used to run a 1024 Linode, i.e. 1 GB of RAM, for my MySQL instance, which serves a fairly busy website.
When I upgraded the website this weekend, I took the opportunity to install MySQL on a 2048 Linode instead.
The major architectural change is that I now use only InnoDB tables instead of the MyISAM tables we used before.
Also, the 1024 Linode I used before had 8 CPUs, and the new one has only 2, as I'm sure you are all aware.
So before, in the busy hours, I could see CPU spikes, but there was never a memory problem. Ever.
Now I just got an auto-notification mail saying that my Linode instance had started hitting the filesystem inordinately hard. I logged on and saw that MySQL was using 86% of the memory (which I guess could be OK, since InnoDB is configured to use 1324 MB for the buffer pool). However, it's using the maximum amount of swap, and the instance is really struggling to handle the load, which isn't even near what peak hour will look like.
Any thoughts? Any help at all would be much appreciated.
my.cnf looks like this:
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
explicit_defaults_for_timestamp
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = my_machine_name
log-error = /var/log/mysql/error.log
# MyISAM #
key-buffer-size = 32M
myisam-recover = FORCE,BACKUP
# SAFETY #
max-allowed-packet = 16M
max-connect-errors = 1000000
# DATA STORAGE #
datadir = /var/lib/mysql/
# BINARY LOGGING #
log-bin = /var/lib/mysql/mysql-bin
expire-logs-days = 14
sync-binlog = 1
# CACHES AND LIMITS #
tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 500
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 1024
table-open-cache = 2048
# INNODB #
innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 128M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 1456M
# LOGGING #
log-error = /var/lib/mysql/mysql-error.log
log-queries-not-using-indexes = 0
slow-query-log = 2
slow-query-log-file = /var/lib/mysql/mysql-slow.log
# Recommended in standard MySQL setup
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
Yes, it's a standard Percona tools file. I really have very little knowledge of what does what in MySQL. The change to InnoDB was made to avoid table locks and to actually be able to NOT have to upgrade the machine too soon.
19 Replies
echo 1 > /proc/sys/vm/swappiness
The following link explains why:
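Side note: echoing into /proc only lasts until the next reboot. To make it persistent, assuming a distro with an /etc/sysctl.d layout, something like this should do it:
# Take effect immediately:
sudo sysctl -w vm.swappiness=1
# Survive reboots (the path assumes an /etc/sysctl.d layout):
echo 'vm.swappiness = 1' | sudo tee /etc/sysctl.d/60-swappiness.conf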
             total       used       free     shared    buffers     cached
Mem:          1995       1894        100          0         62         26
-/+ buffers/cache:       1806        188
Swap:          255        255          0
The really weird thing is that the other machine, with 1024 MB, never did this. Unless Linode has totally nerfed these machines, I really don't see what else it could be other than the fact that I'm running InnoDB now where I used to run MyISAM.
Have you tried running mysqltuner?
@dasand:
I see that you have max-connections set to 500. Do you need that many?
Have you tried running mysqltuner?
I get the following info from mysqlreport on the connections. 214? Where does it come up with that figure? Anyhow, it seems 500 might be a bit high. Is that relevant? Please excuse my ignorance.
__ Connections _________________________________________________________
Max used           52 of  214      %Max: 24.30
Total           60.20k    12.0/s
On the "have you tried running mysqltuner" the answer is yes.
Nothing revolutionary there except it said I should really have 3.4 GB in buffer pool instead of 1.4 (at least I interpreted it that way).
Still, the most weird part here is that I've been running this on a Linode 1024 with MyISAM tables for eons and not a hickup.
I'm missing something here or going with InnoDb was such a looser thing to do.
Thanks again for taking the time to help me with this. I just had to reboot the 2048 in order to free up memory and have it start again.
I guess I need to go with a bigger server if this keeps up.
@adergaard:
I get the following info from mysqlreport on the connections. 214? Where does it come up with that figure? […]
I guess I need to go with a bigger server if this keeps up.
My wild guess is that even though you have 500 connections defined, only 214 were actually created due to memory constraints.
I have a very low usage environment, so I only have 32 connections defined. From mysqltuner:
[--] Total buffers: 205.0M global + 2.7M per thread (32 max threads)
[OK] Maximum possible memory usage: 291.0M (14% of installed RAM)
When I first started using Linode a few years ago (a 512 MB plan, if I recall correctly), I had problems with memory and discovered that the MySQL install defaulted to 150 connections. That, combined with Apache's defaults, required a bit of tuning.
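For anyone following along: the "maximum possible memory usage" line is roughly global buffers plus per-thread buffers times max_connections, which is why a high connection cap matters even when few connections are actually in use. You can do the same back-of-the-envelope check yourself; this is a simplification of the real accounting, but all the variables below are standard ones:
# Rough worst case = global buffers + per-thread buffers * max_connections.
# This mirrors mysqltuner's estimate; it is an approximation, not exact.
mysql -e "
  SELECT
    ROUND((@@innodb_buffer_pool_size + @@key_buffer_size
         + @@query_cache_size) / 1024 / 1024)       AS global_MB,
    ROUND((@@sort_buffer_size + @@join_buffer_size
         + @@read_buffer_size + @@read_rnd_buffer_size
         + @@thread_stack) / 1024)                  AS per_thread_KB,
    @@max_connections                               AS max_conn;"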
The output of ps -aux is
USER      PID %CPU %MEM     VSZ     RSS TTY STAT START   TIME COMMAND
mysql    3985 62.0 83.0 2541404 1696604 ?   Sl   18:39 119:18 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql/ --plugin-dir=/
root     5254  0.0  0.2   54576    5196 …
I removed the rest, since everything else is below 1.0 in memory and CPU.
As you can see, MySQL is really working hard on the machine.
Thoughts? Given the my.cnf I've posted here, can I actually tweak something, or do I need to start thinking about a 4096 or even an 8192 Linode?
Also, given your experience: is this a MyISAM/InnoDB difference, or is it that I actually had 8 CPUs working on the other machine versus 1 logical CPU with 2 virtual cores here, i.e. decidedly less power? It's not hugely important, I still like Linode a lot, but I just want to understand what might have caused such a big change from moving an old 1024 Linode to a new 2048 Linode and switching to InnoDB.
Thanks. Again.
[--] Data in InnoDB tables: 3G (Tables: 164)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 52)
[!!] Total fragmented tables: 94
-------- Security Recommendations -------------------------------------------
[OK] All database users have passwords assigned.
-------- Performance Metrics -------------------------------------------------
[--] Up for: 3h 25m 57s (979K q [79.265 qps], 140K conn, TX: 1B, RX: 100M)
[--] Reads / Writes: 93% / 7%
[--] Total buffers: 1.5G global + 1.1M per thread (214 max threads)
[!!] Maximum possible memory usage: 1.7G (89% of installed RAM)
[OK] Slow queries: 0% (264/979K)
[OK] Highest usage of available connections: 24% (52/214)
[OK] Key buffer size / total MyISAM indexes: 32.0M/99.0K
[!!] Query cache is disabled
[OK] Sorts requiring temporary tables: 8% (2K temp sorts / 26K sorts)
[!!] Temporary tables created on disk: 29% (7K on disk / 24K total)
[OK] Thread cache hit rate: 99% (52 created / 140K connections)
[OK] Table cache hit rate: 98% (374 open / 381 opened)
[OK] Open file limit used: 4% (48/1K)
[OK] Table locks acquired immediately: 100% (961K immediate / 961K locks)
[!!] InnoDB buffer pool / data size: 1.4G/3.8G
[OK] InnoDB log waits: 0
-------- Recommendations -----------------------------------------------------
General recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance
MySQL started within last 24 hours - recommendations may be inaccurate
Reduce your overall MySQL memory footprint for system stability
When making adjustments, make tmp_table_size/max_heap_table_size equal
Reduce your SELECT DISTINCT queries without LIMIT clauses
Variables to adjust:
query_cache_size (>= 8M)
tmp_table_size (> 32M)
max_heap_table_size (> 32M)
innodb_buffer_pool_size (>= 3G)
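For what it's worth, those last suggestions translate into my.cnf lines like the ones below. The concrete sizes are illustrative picks just above the tuner's thresholds, not gospel, and note that a 3 GB buffer pool plainly cannot fit in a 2048's 2 GB of RAM, so that last one implies a bigger Linode:
# CACHES AND LIMITS #
tmp-table-size = 64M
max-heap-table-size = 64M
# INNODB # (needs more RAM than a 2048 has)
innodb-buffer-pool-size = 3G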
It's only there for the starting period, in case I had to switch back after the changes were made.
In any case, my question is whether or not it "eats" memory even though it's idle, so to speak?
Let me remind you how little I know of what a DBA does.
Thanks.
Are you sure you're swapping, and that it's not high disk usage from MySQL itself? If MySQL is using 83% of the available memory and the rest are using 1.0 or less, then you'd need at least 17 processes using 1.0 to hit 100%, which seems like a lot if only MySQL is installed.
Are you sure you ran the ps command while the server was having problems?
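One quick way to settle that is to watch the si/so columns in vmstat while the load is high. Nonzero swap-in/swap-out during a spike means real swapping; zeros there alongside high bi/bo would point at MySQL's own disk I/O instead:
# Print stats every second; si/so = KB swapped in/out per second,
# bi/bo = blocks read from / written to disk.
vmstat 1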
To me, it seems like the server is balancing on a very thin edge all the time, i.e. the problem comes when I suddenly get a traffic spike.
I usually average around 80 q/s, as you can see from the mysqltuner output, but sometimes that goes up to 120-150 queries/s, and that's when it starts swapping. Consequently the load goes up, there's more swapping, and then it flatlines, with a reboot needed from the Linode Manager.
Again, the older Linode handled this beautifully. It never died. Ever.
EDIT: By the way, I've now removed the other database. Can I "free" up that memory somehow, or is that handled by itself?
If this were my situation, I would reduce max-connections to 150 or maybe even 100 and see what happens. That will reduce the memory pressure from MySQL. It is possible that, due to the constrained memory, it is actually the web server, assuming it is on the same VPS, that is causing the swapping.
What I find interesting is that both mysqltuner and mysqlreport show a value of 52 for threads/connections. This implies to me that upstream you have a limit of 52 requests being passed to MySQL at any point in time.
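If you want to experiment without bouncing the server, max_connections is a dynamic variable, so you can lower it at runtime; the change lasts only until the next restart, so mirror it in my.cnf once you settle on a value:
# Lower the connection cap at runtime (lost on restart, so also
# set max-connections in my.cnf to make it stick):
mysql -e "SET GLOBAL max_connections = 150;"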
So by "problems" I mean when it starts swapping.
If you've removed the database, restart MySQL (or reboot) and that will clear the buffer pool.
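On a typical Debian/Ubuntu Linode of that era that is just the following, assuming the service is named mysql:
# Restart the server; the InnoDB buffer pool starts empty afterwards.
sudo service mysql restart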
Since removing the extra database from the MySQL instance, mysqltuner says I have 1.9 GB of data, and that ideally innodb_buffer_pool_size should be set to or above that number (which in itself is really weird if you ask me, since that in effect would make it an in-memory DB). Long story short, I resized my Linode to a 4096 and set innodb_buffer_pool_size to 2.4 GB, i.e. 500 MB above what mysqltuner said the data size was.
The result so far: MySQL grabbed 2.1 GB for the buffer pool almost instantaneously after the reboot and is slowly eating more and more memory (my traffic hours haven't started yet, as the Americas are just now waking up). That said, I don't know if it will stop or what will happen.
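For reference, the tuner's data-size figure can be double-checked straight from information_schema; data plus indexes is what the buffer pool would ideally hold:
# Total InnoDB data + index size in GB; the pool needs roughly this
# much for the whole data set to fit in memory.
mysql -e "
  SELECT ROUND(SUM(data_length + index_length)/1024/1024/1024, 2) AS innodb_GB
  FROM information_schema.tables
  WHERE engine = 'InnoDB';"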
Free, right now, gives this:
             total       used       free     shared    buffers     cached
Mem:          4002       3450        552          0         69        458
-/+ buffers/cache:       2922       1080
Swap:          255          0        255
ps -aux gives this:
USER      PID %CPU %MEM     VSZ     RSS TTY STAT START  TIME COMMAND
mysql    5083 21.5 70.7 3714472 2898088 ?   Sl   06:59 78:18 /usr/sbin/mysql
So, 70% of the memory (and counting) is now eaten by MySQL.
As for the web server: I have the web server on another Linode altogether, so this is a dedicated MySQL machine. I know I sound like a broken record here, but this setup, with the same data(!), worked beautifully on an old-style Linode 1024. The irony…
Anyway, I'm considering moving back to MyISAM if this move to the 4096 isn't enough.
About setting the max open connections: I don't see why that would make a difference. The max isn't the same as the actual, is it? As far as I can tell, the actual number has never been higher than 100 or thereabouts, so quite in the vicinity of what you suggested.
In any case, I'm very happy that you're still trying to help out. It's much appreciated.
The server didn't experience any problems this time, i.e. the 4096 seems to do the trick, at least after 24 hours. Time will tell if it holds.
So what I ended up with was an innodb_buffer_pool_size a little bit bigger than the size of the data in the whole database, leaving 1.6 GB for the rest of MySQL and the OS to operate in, with 4 virtual CPU cores.
Traffic to the server peaked at 100 queries/sec for a couple of hours today, with a 24-hour average of 53 queries/second.
I'm not sure how well it would hold up under higher traffic than that, and your mileage will of course vary with the kinds of queries you run, index usage, and so on.
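If you want to keep an eye on whether the pool stays adequate as traffic grows, compare Innodb_buffer_pool_reads (reads that had to go to disk) with Innodb_buffer_pool_read_requests; the first should stay tiny relative to the second:
# Reads served from disk vs. total read requests; a growing first
# number relative to the second means the pool is getting too small.
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"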
One thing that is quite amazing in all of this is how little solid information is available on how to configure MySQL properly. Sure, there's the MySQL documentation, but honestly, who can really say they can make heads or tails of that? Also, it feels like it was written 10 years ago. I get the feeling that MySQL is on the skids. It may very well be the case that it's time to look elsewhere for storage options.
@adergaard:
One thing that is quite amazing in all of this is how little solid information is available on how to configure MySQL properly. Sure, there's the MySQL documentation, but honestly, who can really say they can make heads or tails of that?
I highly recommend the book "High Performance MySQL". It fills in a lot of the holes in the documentation when it comes to performance tuning.