Can I do better with my Linode?
I am running a Linode 768 with Apache 2, MySQL, and a WordPress site doing 20,000 pageviews/day.
Apache's MaxClients is set to 20; if I raise it to 25, the server starts swapping.
My problem is that Apache easily reaches the maximum number of clients, the TCP queue grows, and the site starts behaving slowly. However, increasing MaxClients increases swap usage (and I understood that the server should never swap…).
Can I do better, or have I just reached the limit?
```
Output: 6 requests currently being processed, 12 idle workers
Output: 20 requests currently being processed, 0 idle workers
Output: 12 requests currently being processed, 8 idle workers
Output: 20 requests currently being processed, 0 idle workers
Output: 8 requests currently being processed, 10 idle workers
Output: 20 requests currently being processed, 0 idle workers
Output: 20 requests currently being processed, 0 idle workers
Output: 20 requests currently being processed, 0 idle workers
Output: 20 requests currently being processed, 0 idle workers
Output: 20 requests currently being processed, 0 idle workers
Output: 20 requests currently being processed, 0 idle workers
Output: 20 requests currently being processed, 0 idle workers
Output: 20 requests currently being processed, 0 idle workers
Output: 20 requests currently being processed, 0 idle workers
Output: 17 requests currently being processed, 3 idle workers
```
```
             total       used       free     shared    buffers     cached
Mem:           750        716         34          0         20        323
-/+ buffers/cache:        371        378
Swap:          255         15        240
```
```
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients           20
    ServerLimit          20
    MaxRequestsPerChild   0
</IfModule>
```
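As a sanity check on that MaxClients ceiling, the usual back-of-the-envelope sizing is the RAM you can spare for Apache divided by the resident size of one worker. A minimal sketch, with both numbers assumed from the `free` and process listings in this thread (roughly 370 MB to spare, roughly 19 MB per apache2 child), not measured:

```shell
# Back-of-the-envelope MaxClients sizing. Both inputs are assumptions
# read off this thread's free/top output, not measured here.
ram_for_apache_mb=370   # memory left over after MySQL and the OS
rss_per_child_mb=19     # typical resident size of one apache2 child
max_clients=$(( ram_for_apache_mb / rss_per_child_mb ))
echo "suggested MaxClients: $max_clients"
```

That lands right around the MaxClients 20 ceiling observed here, which suggests the limit really is memory, not an Apache misconfiguration.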
```
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 2736 764 ? Ss 01:18 0:00 /sbin/init
root 2 0.0 0.0 0 0 ? S 01:18 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S 01:18 0:00 [ksoftirqd/0]
root 4 0.0 0.0 0 0 ? S 01:18 0:01 [kworker/0:0]
root 5 0.0 0.0 0 0 ? S 01:18 0:00 [kworker/u:0]
root 6 0.0 0.0 0 0 ? S 01:18 0:00 [migration/0]
root 7 0.0 0.0 0 0 ? S 01:18 0:00 [migration/1]
root 8 0.0 0.0 0 0 ? S 01:18 0:00 [kworker/1:0]
root 9 0.0 0.0 0 0 ? S 01:18 0:00 [ksoftirqd/1]
root 10 0.0 0.0 0 0 ? S 01:18 0:00 [migration/2]
root 11 0.0 0.0 0 0 ? S 01:18 0:00 [kworker/2:0]
root 12 0.0 0.0 0 0 ? S 01:18 0:00 [ksoftirqd/2]
root 13 0.0 0.0 0 0 ? S 01:18 0:00 [migration/3]
root 14 0.0 0.0 0 0 ? S 01:18 0:00 [kworker/3:0]
root 15 0.0 0.0 0 0 ? S 01:18 0:00 [ksoftirqd/3]
root 16 0.0 0.0 0 0 ? S< 01:18 0:00 [khelper]
root 17 0.0 0.0 0 0 ? S 01:18 0:00 [kworker/u:1]
root 21 0.0 0.0 0 0 ? S 01:18 0:00 [xenwatch]
root 22 0.0 0.0 0 0 ? S 01:18 0:00 [xenbus]
root 148 0.0 0.0 0 0 ? S 01:18 0:00 [sync_supers]
root 150 0.0 0.0 0 0 ? S 01:18 0:00 [bdi-default]
root 152 0.0 0.0 0 0 ? S< 01:18 0:00 [kblockd]
root 162 0.0 0.0 0 0 ? S< 01:18 0:00 [md]
root 246 0.0 0.0 0 0 ? S< 01:18 0:00 [rpciod]
root 248 0.0 0.0 0 0 ? S 01:18 0:00 [kworker/0:1]
root 279 0.0 0.0 0 0 ? S 01:18 0:06 [kswapd0]
root 280 0.0 0.0 0 0 ? SN 01:18 0:00 [ksmd]
root 281 0.0 0.0 0 0 ? S 01:18 0:00 [fsnotify_mark]
root 285 0.0 0.0 0 0 ? S 01:18 0:00 [ecryptfs-kthrea]
root 287 0.0 0.0 0 0 ? S< 01:18 0:00 [nfsiod]
root 290 0.0 0.0 0 0 ? S 01:18 0:00 [jfsIO]
root 291 0.0 0.0 0 0 ? S 01:18 0:00 [jfsCommit]
root 292 0.0 0.0 0 0 ? S 01:18 0:00 [jfsCommit]
root 293 0.0 0.0 0 0 ? S 01:18 0:00 [jfsCommit]
root 294 0.0 0.0 0 0 ? S 01:18 0:00 [jfsCommit]
root 295 0.0 0.0 0 0 ? S 01:18 0:00 [jfsSync]
root 296 0.0 0.0 0 0 ? S< 01:18 0:00 [xfs_mru_cache]
root 297 0.0 0.0 0 0 ? S< 01:18 0:00 [xfslogd]
root 298 0.0 0.0 0 0 ? S< 01:18 0:00 [xfsdatad]
root 299 0.0 0.0 0 0 ? S< 01:18 0:00 [xfsconvertd]
root 300 0.0 0.0 0 0 ? S< 01:18 0:00 [glock_workqueue]
root 301 0.0 0.0 0 0 ? S< 01:18 0:00 [delete_workqueu]
root 302 0.0 0.0 0 0 ? S< 01:18 0:00 [gfs_recovery]
root 303 0.0 0.0 0 0 ? S< 01:18 0:00 [crypto]
root 865 0.0 0.0 0 0 ? S 01:18 0:00 [khvcd]
root 979 0.0 0.0 0 0 ? S< 01:18 0:00 [kpsmoused]
root 980 0.0 0.0 0 0 ? S 01:18 0:01 [kworker/1:1]
root 1005 0.0 0.0 0 0 ? S 01:18 0:00 [kworker/2:1]
root 1008 0.0 0.0 0 0 ? S 01:18 0:00 [kjournald]
root 1012 0.0 0.0 0 0 ? S 01:18 0:01 [kworker/3:1]
root 1033 0.0 0.0 2368 0 ? S 01:18 0:00 upstart-udev-bridge --daemon
root 1035 0.0 0.0 2236 4 ? S
```
10 Replies
There are more efficient ways to run Apache. If you don't want to migrate to lighttpd or nginx, you can try setting Apache up to run mpm_worker instead, which is much more efficient (you no longer need to load PHP into memory to serve static requests). It can be fairly involved, though, because it also means reconfiguring PHP to run under FPM or FastCGI or something similar, rather than mod_php.
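A rough sketch of what the worker-MPM side of that migration can look like (Apache 2.2 directive names; every number below is an illustrative guess, and the PHP-to-FPM wiring is a separate, distro-specific step not shown):

```apache
# worker MPM: a few multi-threaded children instead of many
# single-threaded ones. Static files are served by cheap threads;
# PHP moves out of process to FPM/FastCGI instead of mod_php.
<IfModule mpm_worker_module>
    StartServers          2
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    MaxClients          150
    MaxRequestsPerChild   0
</IfModule>
```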
Also,
- What's your KeepAlive setting? It should be very low, like 2-5.
- Are you using a caching plugin with WordPress?
-Chris
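For reference, the keep-alive knobs live in apache2.conf. A hypothetical fragment implementing the "very low" advice (directive names are standard Apache; the values are illustrative):

```apache
# Either keep keep-alive with a very short timeout so workers
# free up quickly between requests...
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 2

# ...or turn it off outright:
# KeepAlive Off
```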
mysqltuner.pl
My usual MySQL advice is to move as much as possible (i.e. everything but the mysql.* database) to InnoDB and crank the buffer pool; I find it's easier to tune for InnoDB, and not having to deal with writes locking entire tables can make some requests finish faster. This probably won't help your immediate problems, though.
(I'm a bit of a database snob. And yes, Wordpress is happy as a clam with InnoDB.)
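A hypothetical my.cnf fragment for the "crank the buffer pool" advice on a 768 MB box (the sizes are guesses scaled to this memory budget, not values from the thread):

```ini
[mysqld]
# Make new tables InnoDB and give it a modest buffer pool;
# on 768 MB total RAM, much larger would starve Apache.
default-storage-engine  = InnoDB
innodb_buffer_pool_size = 128M
innodb_log_file_size    = 32M
```

Existing MyISAM tables can be converted one at a time with, e.g., `ALTER TABLE wp_posts ENGINE=InnoDB;` (table name illustrative).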
@caker:
MaxClients of 20 and KeepAlives OFF can serve an insane amount of traffic. Turn off KeepAlives and see how it goes.
+1. Keepalives are right up there with MaxClients on the list of tragic mpm_prefork defaults. KeepAliveTimeout 1 gives me this for a somewhat busy vBulletin 3 forum (Alexa top 100,000, baby!), a handful of Wordpresses, and the rest of our PHP flotilla:
![](http://drop.hoopycat.com/app1_apache_processes-week.png)
![](http://drop.hoopycat.com/app1_apache_accesses-week.png)
It's a 2 GB instance, so we've got mpm_prefork at MaxClients 75, but we could crank that down quite a ways and not feel a thing. Perhaps the most telling thing is that we have 1.02 GB of completely unused (no buffers, no cache, no nothin') RAM on there. I should try setting KeepAliveTimeout back to the default sometime just to see what'd happen…
(Also, we're 64 MB into swap. Linux memory management is fun.) -rt
@hybinet:
As @Guspaz said, using a bit of swap is OK. Thrashing swap is bad. Unless you're using an image gallery plugin, it should be safe to increase MaxClients a little bit.
Thank you Guspaz and hybinet. Yes, I also have a small forum and gallery3 installed. How can I distinguish between "normal" system swapping and thrashing? I have seen my swap usage go up when I increased the MaxClients directive, but I cannot tell what the reason is (is it because of WordPress? Is it because of the gallery3 subsection?)
@hybinet:
- What's your KeepAlive setting? It should be very low, like 2-5.
It was On with the default timeout setting. I turned it Off following your suggestions; tomorrow I'll let you know :)
@hybinet:
- Are you using a caching plugin with WordPress?
Sure, W3 Total Cache
@hoopycat:
My usual MySQL advice is to move as much as possible (i.e. everything but the mysql.* database) to InnoDB and crank the buffer pool; I find it's easier to tune for InnoDB, and not having to deal with writes locking entire tables can make some requests finish faster. This probably won't help your immediate problems, though.
When should I evaluate this solution? Right now it seems to me that MySQL is not taking too much memory. Alongside WordPress I have a small forum and gallery3.
@mottalrd:
How could I distinguish between system "normal" swap and trashing?
In normal situations, the kernel moves unused data to swap, and just leaves it sitting there until it is needed. So you're using swap but not much data is being transferred in and out of swap. On the other hand, when you're thrashing, you transfer a lot of data repeatedly in and out of swap. So the amount of used swap is less important than the amount of data that is being transferred in and out of swap at any given time.
Run "vmstat 1" (without quotes), let it print a few lines, and hit Ctrl+C to halt. Check the "si" and "so" columns. They represent the number of blocks per second that are swapped in and out, respectively. Lower is better, but you're probably OK as long as they're not in the thousands.
Alternatively, you could just check the I/O graph on your dashboard.
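The si/so check can also be scripted. The awk below averages those two columns; the vmstat lines it parses are fabricated for illustration, and in real use you would pipe `vmstat 1 5` straight into the awk instead:

```shell
# Average the "si" (swap-in) and "so" (swap-out) columns of vmstat
# output, skipping the two header lines. $sample is made-up data;
# in real use, replace it with:  vmstat 1 5
sample='procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0  18148 103396  28248 396048    0    0     5    12   30   40  1  0 98  1
 0  0  18148 103396  28248 396048    0    4     0    20   35   45  1  0 98  1
 1  0  18148 103200  28248 396060    8    0    16     0   40   50  2  1 96  1'
result=$(printf '%s\n' "$sample" |
  awk 'NR > 2 { si += $7; so += $8; n++ }
       END { printf "avg si=%d so=%d blocks/s", si/n, so/n }')
echo "$result"
```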
> Run "vmstat 1" (without quotes), let it print a few lines, and hit Ctrl+C to halt. Check the "si" and "so" columns. They represent the number of blocks per second that are swapped in and out, respectively. Lower is better, but you're probably OK as long as they're not in the thousands.
Amazing. Thank you
> On the other hand, when you're thrashing, you transfer a lot of data repeatedly in and out of swap. So the amount of used swap is less important than the amount of data that is being transferred in and out of swap at any given time.
Does this start happening only when I am hitting the RAM limit, or could it also happen because of some insane I/O by an application?
```
Output: 7 requests currently being processed, 9 idle workers
Output: 3 requests currently being processed, 14 idle workers
Output: 2 requests currently being processed, 9 idle workers
Output: 2 requests currently being processed, 11 idle workers
Output: 2 requests currently being processed, 9 idle workers
Output: 1 requests currently being processed, 9 idle workers
Output: 9 requests currently being processed, 6 idle workers
```
However, the server is swapping with some peaks even though I have a lot of free RAM. I suspect this may be related to some intensive I/O (maybe HTTP downloads of site contents); is that possible?
```
             total       used       free     shared    buffers     cached
Mem:           750        650        100          0         27        401
-/+ buffers/cache:        221        529
Swap:          255         17        238
```
```
top - 06:18:43 up 1 day, 4:59, 1 user, load average: 0.43, 0.36, 0.44
Tasks: 84 total, 1 running, 83 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 768512k total, 665116k used, 103396k free, 28248k buffers
Swap: 262140k total, 18148k used, 243992k free, 396048k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1952 mysql 20 0 134m 37m 2716 S 0 5.0 10:57.56 mysqld
17736 www-data 20 0 50684 20m 3620 S 0 2.7 0:02.99 apache2
17741 www-data 20 0 50200 19m 3612 S 0 2.6 0:00.59 apache2
17713 www-data 20 0 49692 19m 3748 S 0 2.6 0:10.05 apache2
17735 www-data 20 0 49688 19m 3620 S 0 2.6 0:03.48 apache2
17720 www-data 20 0 49064 18m 3704 S 0 2.5 0:02.83 apache2
17724 www-data 20 0 49972 18m 4044 S 0 2.5 0:04.57 apache2
17679 www-data 20 0 49564 18m 4200 S 0 2.5 0:13.81 apache2
17716 www-data 20 0 49552 18m 4120 S 0 2.4 0:06.37 apache2
17726 www-data 20 0 46816 16m 3748 S 0 2.2 0:05.82 apache2
17699 www-data 20 0 46808 16m 3756 S 0 2.2 0:09.06 apache2
17740 www-data 20 0 46816 16m 3568 S 0 2.2 0:01.18 apache2
11829 root 20 0 35808 4468 4304 S 0 0.6 0:09.39 apache2
```
![](http://img210.imageshack.us/img210/8249/catturabt.png)
![](http://img444.imageshack.us/img444/192/catturahi.png)
@mottalrd:
However, the server is swapping with some peaks even though I have a lot of free RAM. I suspect this may be related to some intensive I/O (maybe HTTP downloads of site contents); is that possible?
Wait until you get another peak like that, and run iotop to find out which processes are hitting the disk. (Install it if needed.) If it's MySQL, see MySQL tuning tips offered by other users above.
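Besides iotop, the kernel's cumulative swap counters give a quick system-wide number. A small sketch (Linux-only; it reads `/proc/vmstat`, so it won't work elsewhere): a large, steadily growing delta here is thrashing, while a near-zero delta means the used swap is just parked, inactive data.

```shell
# Count pages swapped in/out over a short window using the kernel's
# cumulative pswpin/pswpout counters in /proc/vmstat (Linux-only).
swap_pages() {
    awk '/^pswpin|^pswpout/ { s += $2 } END { print s + 0 }' /proc/vmstat
}
before=$(swap_pages)
sleep 1
after=$(swap_pages)
echo "swap pages moved in 1s: $((after - before))"
```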