Optimize nginx with many subdomains
I have a Linode 512 running nginx with fast-cgi and mysql. I have around 500 subdomains with each one having its own WP installation.
I notice that under heavier-than-normal load the server becomes very slow and starts throwing 504 Gateway Time-out errors. However, even under the heaviest loads I'm not using more than 200MB of the allocated RAM, and the processor load average stays below 5.
The logs show '*196362 upstream timed out (110: Connection timed out) while reading response header from upstream'
I have tried increasing nginx worker_processes to 2 and worker_connections to 2048, but it has made absolutely no difference.
What would you guys recommend?
1) Should I try using PHP-FPM?
2) Some kind of combination of nginx/apache?
3) Software load balancer?
4) Some MySQL optimization?
Any kind of advice/reading material would be highly appreciated.
Thanks
7 Replies
/usr/bin/php-fastcgi:
/usr/bin/spawn-fcgi -a 127.0.0.1 -p 9000 -C 6 -u www-data -f /usr/bin/php5-cgi
Am I correct in understanding that this spawns 6 children?
How many children would you recommend for the described setup?
Thanks
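For what it's worth, yes: the -C flag tells spawn-fcgi to prefork that many PHP children (it sets PHP_FCGI_CHILDREN). Here's a commented sketch of the same wrapper; the child count of 8 is just an illustration, not a recommendation for your box:

```sh
#!/bin/sh
# /usr/bin/php-fastcgi -- wrapper that launches the PHP FastCGI pool.
# -a/-p : address and port that nginx's fastcgi_pass points at
# -C    : number of PHP children to prefork (sets PHP_FCGI_CHILDREN)
# -u    : user to run the children as
# -f    : the PHP CGI binary to spawn
/usr/bin/spawn-fcgi -a 127.0.0.1 -p 9000 -C 8 -u www-data -f /usr/bin/php5-cgi
```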
As far as optimization goes I recommend the following:
set nginx worker_processes to 4 (the number of CPUs on the system) with worker_connections at 1024 each;
replace fast-cgi with php-fpm (it will kill your children when they reach a specified number of requests and respawn fresh ones);
install APC for PHP;
use mysqltuner.pl or mysql-tuning-primer.sh to optimize your MySQL server.
Start with this and scale up if needed. The sky is the limit…
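To make the first two points concrete, here is a rough sketch of the relevant config. The timeout values, paths, and pool sizes are illustrative assumptions, not tested recommendations for this particular Linode:

```nginx
# /etc/nginx/nginx.conf (excerpt)
worker_processes  4;            # one per CPU

events {
    worker_connections  1024;
}

http {
    # give slow WP/MySQL requests longer before nginx returns 504
    fastcgi_connect_timeout 60s;
    fastcgi_send_timeout    120s;
    fastcgi_read_timeout    120s;
}
```

```ini
; /etc/php5/fpm/pool.d/www.conf (excerpt)
pm = dynamic
pm.max_children = 8          ; roughly 2x the 4 virtual CPUs
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
pm.max_requests = 500        ; respawn a child after 500 requests (catches leaks)
```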
@brianmercer:
If each of 500 sites has its own WP installation, then APC will have to cache 500 copies of the same files. You might consider the multisite features of WP 3.0 or a mu version of 2.x. It's more complicated to set up, but then you'd be working from one codebase and APC will only cache one copy. Plus you'd only have to do one upgrade when security patches come out.
php-fpm is also worthwhile.
If APC does caching based on a file hash, it shouldn't duplicate files in memory like that…
I should mention that, from a resource allocation perspective, consider that there are only 4 virtual CPUs available to your linode. For this reason, something between 4 and 8 fcgi processes is usually what makes the most sense unless you have scripts that hold open connections (AJAXy stuff, for example).
@Guspaz:
If APC does caching based on a file hash, it shouldn't duplicate files in memory like that…
I should mention that, from a resource allocation perspective, consider that there are only 4 virtual CPUs available to your linode. For this reason, something between 4 and 8 fcgi processes is usually what makes the most sense unless you have scripts that hold open connections (AJAXy stuff, for example).
If you check out apc.php and browse the entries, it does keep a different copy for each file path. There is a setting "apc.file_md5" which is off by default, but I'm not sure if it'd solve that issue. I haven't tried that setting.
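For anyone who wants to experiment, the APC settings live in an ini file. A sketch with illustrative values only; with 500 separate installs being cached, the shared memory size you'd actually need could be much larger:

```ini
; /etc/php5/conf.d/apc.ini (sketch; values are examples, not recommendations)
extension = apc.so
apc.shm_size = 128M     ; shared memory for the opcode cache
apc.stat = 1            ; check file mtimes so edits are picked up
;apc.file_md5 = 1       ; off by default; stores an MD5 hash of each cached file
```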
I agree 6 or 8 php children might be enough, but you'd want more than 4 children if you have the memory for it. Sometimes a child might be waiting for a response from a backend db server or waiting for a feed response from another site or waiting for imagemagick to process an image and you'll want a few extra php children to be using available CPU resources while that's going on. Depending on your site, of course.
Newer php-fpm has a status page; you can have collectd/munin/cacti poll it and chart PHP child usage, and then you'll see whether some children are sitting unused.
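Enabling that status page takes two small config changes. The path and the FastCGI address here are assumptions; adjust them to match your setup:

```ini
; php-fpm pool config: turn on the status page
pm.status_path = /status
```

```nginx
# nginx: expose the status page to localhost/monitoring hosts only
location = /status {
    allow 127.0.0.1;
    deny all;
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
}
```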