Memory Distribution for a Linode 360
My Linode runs sshd, Postfix, openvpn, squid, nginx, php-(fast)cgi, and mysql. My site is lightly loaded, and consists primarily of image hosting using Gallery2.
My memory graphs are here:
ps auwx reports the main users of memory as (everything before this is using less than 10MB of RSS):
$ ps awux --sort=rss | tail -n 5
proxy 11221 0.0 5.0 42196 17756 ? S Jan28 1:55 (squid) -D -YC
www-data 26656 0.1 9.5 310376 33556 ? S 09:29 0:02 /usr/bin/php-cgi -q -b localhost:9000 -c /etc/php5/cgi/php.ini
www-data 26240 0.0 10.1 312660 35780 ? S 09:18 0:01 /usr/bin/php-cgi -q -b localhost:9000 -c /etc/php5/cgi/php.ini
www-data 18987 0.1 10.5 313048 37260 ? S 07:06 0:11 /usr/bin/php-cgi -q -b localhost:9000 -c /etc/php5/cgi/php.ini
mysql 25907 0.8 28.8 278908 101960 ? Sl Jan27 57:27 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --port=3306 --socket=/var/run/mysqld/mysqld.sock
MySQL seems to be tuned quite well; mysqltuner and tuning-primer.sh are both happy, and all of my queries (>99.99%) are handled within 1 second.
I have vm.swappiness set to 10.
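(For anyone following along, checking and setting swappiness looks like this — a sketch; the sysctl.conf line is the usual Debian/Ubuntu way to persist it:)

```shell
# Check the current value
cat /proc/sys/vm/swappiness

# Set it at runtime (as root)
sysctl -w vm.swappiness=10

# Persist it across reboots
echo "vm.swappiness = 10" >> /etc/sysctl.conf
```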
I'm trying to decide whether or not I should run more php instances. I don't really need it to handle the load, but some gallery operations can take a few seconds to process, and I like to be able to open many tabs at once, without worrying about a gateway timeout.
If I sum up the RSS values, I usually get something like:
$ ps auwx --sort=rss | awk 'BEGIN { TOTAL = 0 } { TOTAL += $6 } END { print TOTAL }'
353248
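One caveat with summing RSS: pages shared between processes (shared libraries, the APC shared-memory segment) get counted once per process, so the total overstates what is actually resident. A quick sketch of the same awk arithmetic on the three sample rows from the `ps` output above (RSS is the 6th field, in KB):

```shell
# Sum the RSS column (6th field, KB) the same way the one-liner
# above does, on a captured sample so the arithmetic is visible.
# Shared pages are counted once per process, so the real
# footprint is lower than this sum.
sample='proxy 11221 0.0 5.0 42196 17756
www-data 26240 0.0 10.1 312660 35780
mysql 25907 0.8 28.8 278908 101960'
echo "$sample" | awk '{ total += $6 } END { print total }'
# prints 155496
```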
The total varies between 295000 and 370000. My PHP processes are using the APC cache, with a 36MB cache allocated.
Any advice?
13 Replies
i don't know if you should increase the number of php processes, you're already using a lot of swap memory (> 100MB) and the iostats are kinda high so i'd rather try reducing the amount of memory used.
do you really need 100MB for the squid cache ?
Where did you see 100MB for squid? It appears to only be using ~20MB to me.
The swap seems to be historical; I have very few pages going into/out of swap, if I'm reading it correctly.
cache used: 94MB
3 php processes seems very little, i currently run 12 processes and i'm not serving a lot of PHP content.
how many nginx worker processes do you have up ?
I have the default 256 NGINX connections, in one process. This is probably too high, as my entire site is PHP based. The problem is that I seem to have some script somewhere that sends PHP memory (RSS, that is) up to 30-50MB (although maybe part of this is the APC cache?)
try more worker processes. i currently run about 20 nginx workers and 12 php fcgi processes (i serve a lot of non-php files so more nginx processes than php fcgi)
that should help make the site snappier.
Regarding NGINX processes / connections; right now, I have one process and 256 connections. Doesn't that mean that it can handle up to 256 simultaneous connections once they've been passed off to PHP?
it sounds like a worker can process more than one connection at a time (via async io) but if one connection blocks it everyone else has to wait.
i'd try a few more processes, maybe 2 or 4 to begin with and see how it behaves.
…not sure about the APC memory usage…
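That suggestion would look something like this in nginx.conf (a sketch; the connection count is the default mentioned earlier in the thread):

```nginx
worker_processes  2;      # try 2 or 4 and see how it behaves

events {
    worker_connections  256;   # the existing default
}
```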
e.g.
1 parent process with 11 child processes = 12 processes
3 parent processes with 3 child processes each = 12 processes
These two cases might look the same, but in fact the second setup will use a lot more RAM than the first setup. It will also probably have a lower performance. Why? Because each parent process keeps its own APC cache which it shares with its children. If you have more than one opcode cache for the same stuff, you waste RAM.
Run "ps auxf" to find out which of your php-cgi processes are children of which.
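To get the one-parent layout (a single shared APC cache), php-cgi is typically launched with PHP_FCGI_CHILDREN set, e.g. from a small wrapper script like this sketch — the binary path, port, and php.ini path are taken from the ps output above; adjust to taste:

```shell
#!/bin/sh
# One parent, 11 children: a single APC opcode cache shared
# by all 12 processes.
PHP_FCGI_CHILDREN=11
PHP_FCGI_MAX_REQUESTS=1000   # optional: recycle children to cap memory growth
export PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS
exec /usr/bin/php-cgi -q -b localhost:9000 -c /etc/php5/cgi/php.ini
```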
I just copied it into my www directory (/var/www/, in my case)
@oliver:
check the docs about worker_processes:
http://wiki.codemongers.com/NginxHttpMainModule#worker_processes
Just a note: if a connection blocks, nginx should move on to the next one, unless there is a bug. If the server is really loaded it might be worth starting extra workers so that they can each run on a separate core, but I've never seen nginx use more than a few percent of CPU.