Tuning a Linode 4096 Running LAMP Hosting WordPress
The MySQL is hosted on a separate dedicated server.
Currently, the Linode is running out of memory with default Apache settings. Looking for any insight from the more experienced admins out here on MaxClients, child processes, etc. that I should be tweaking.
Thanks in advance.
40 Replies
Also dropped MaxRequestsPerChild to 5000 (down from 10000).
I don't want to speak too soon, but it's already looking better.
Usually if you enable KeepAlive you use more RAM and less CPU.
If you turn it on I'd suggest:
MaxKeepAliveRequests 50
KeepAliveTimeout somewhere between 1 and 5 (the default is 15)
For MaxClients, pick something that doesn't crash the VPS; for this you should also check how much memory you're giving to PHP in php.ini.
So for example, if you have set 100 MB in php.ini, in the worst case 10 HTTP processes could eat 1 GB of RAM…
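Put together, that suggestion looks something like this in the Apache config (the MaxClients value here is purely illustrative; size it against your own RAM and your php.ini memory_limit):

```apache
KeepAlive On
MaxKeepAliveRequests 50
# Anywhere from 1 to 5 seconds; the default of 15 keeps idle connections (and RAM) tied up
KeepAliveTimeout 3
# Worst case, MaxClients x PHP memory_limit must still fit in physical RAM
MaxClients 40
```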
What's the best way to count HTTP processes? ps aux | grep http ?
pidof httpd | wc -w
Which returns 62 processes
Currently php.ini is set to 64 MB - so that right there would eat up all my RAM.
I'm going to move to 32 MB in php.ini.
And in which data center are you hosting the MySQL setup? If you aren't running memcached or some sort of caching system, you'll be adding a lot of latency to page loads, because the MySQL setup is hosted on a dedicated server that Linode doesn't provide!
@ruchirablog:
Dude, you are using 225 GB of traffic a day - that's about 6.75 TB a month! Are you aware of Linode's bandwidth limitations and costs?
And in which data center are you hosting the MySQL setup? If you aren't running memcached or some sort of caching system, you'll be adding a lot of latency to page loads, because the MySQL setup is hosted on a dedicated server that Linode doesn't provide!
It's mostly incoming too, which is the opposite of what you'd expect (unless the graphs are misconfigured)… Assuming the graphs are misconfigured, since incoming is free, that's still 192GB per day, or ~5.8 TB per month, at an expected bandwidth overage cost of $416 per month.
Normally, I'd recommend upgrading to the next-level Linode to get the bandwidth rather than paying overage, since it's effectively free, but in this case the OP is on a 4096, and that's the largest Linode that features linearly scaling bandwidth; the price of bandwidth included with a Linode skyrockets after that (4096 is $0.10/GB, 8192 is $0.16/GB, 12288 is $0.24/GB, and so on, up to the 20480 at $0.40/GB). So there's not really any cheaper option here. I'm also not sure that even 5-6 TB/month is enough to negotiate a bulk bandwidth discount with Linode, although it might still be worth looking into.
Yes, the bandwidth estimates from the client's old host were grossly under-reported.
I'm actively looking to get off linode for this particular setup.
There are also other options, perhaps using CloudFlare
I still use Linode for a lot of my other clients and my other services.
Since making some tweaks we're experiencing a lot less crashing. Maybe once a day if we're lucky.
We'll see where this goes.
Remember that MaxClients is the maximum number of simultaneous connections that Apache is allowed to handle. But you've only got four processor cores; there's a point of diminishing returns in trying to handle too many requests at the same time.
So, drop the maxclients value until you're not using up all your RAM (and remember that you don't want to max out the RAM, you want to leave a healthy chunk unused so that it's available for disk caching), and your OOM crashes will stop entirely.
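One rough way to size that (a sketch - it assumes the Apache processes are named httpd, as in the OP's `pidof httpd` output; on Debian/Ubuntu substitute apache2) is to average the resident memory of the current children and divide your spare RAM by that:

```shell
# Average resident size (RSS) of the running Apache children, in MB.
# MaxClients should be roughly (RAM you can spare for Apache) / (this average).
ps -C httpd -o rss= | awk '{sum+=$1; n++} END {if (n) printf "%d procs, avg %.1f MB each\n", n, sum/n/1024}'
```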
In terms of CloudFlare (and disclaimer here, I've never used it because my volume isn't high enough to justify it), they only cache static content (images, JS, CSS), and since images are what you're worried about, it seems ideal. I believe the free version caches them, but if not, $20/month for Pro is a lot cheaper than the hundreds in bandwidth costs at Linode.
Another thing you can do is, if you're not doing it already, enable http compression. This typically has a small CPU hit and produces decent bandwidth savings (since all of your HTML/js/css/etc files are suddenly 80% smaller). It's very easy to do, just enable mod_deflate.
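On Debian/Ubuntu that's `a2enmod deflate` plus a reload; a minimal sketch of the filter config (the MIME types listed are just the usual text suspects, adjust to taste):

```apache
<IfModule mod_deflate.c>
    # Compress the text-ish content types; images are already compressed
    AddOutputFilterByType DEFLATE text/html text/plain text/css text/xml
    AddOutputFilterByType DEFLATE application/javascript application/x-javascript
</IfModule>
```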
EDIT: CloudFlare actually did a blog post specifically on optimising wordpress:
One of their suggestions is to run your images through smush.it, which is a Yahoo product that essentially just runs your images through a bunch of lossless image optimization tools like pngcrush and jpegtran, and spits out the result. It can often make a decent difference on JPEGs, primarily by stripping out extra stuff a website doesn't need (all the metadata, thumbnails, etc), although it can also recompress the non-lossy parts of the JPEG compression process to help a bit too.
I like smush.it because I can just throw a bunch of images at it, which is faster than running them through various tools on the commandline myself
@mattm:
Here you can see my graphs before the system seized up for running out of memory.
https://skitch.com/mattmm/fsi1f/linode-dashboard-linode124975
Sorry to be off topic, but I'm looking to start monitoring my box too - what are you using to generate these graphs, and does it use a web interface?
Thanks!
There's also Graphite (see Etsy's "Church of Graphs" post).
… but if you just want the graphs from the original poster, log into the Linode Manager and click on your Linode, then scroll down.
[1] I think SCADA is an appropriate term to describe "the things between {api,manager}.linode.com and the hypervisor". Just because it controls the flow of clouds instead of molasses doesn't mean it's not industrial.
I'm thinking about using awstats. Wow, I hadn't heard of any of those - thanks for the suggestions.
I was thinking smush.it as well - just worried about load on the server as I SMUSH them.
mod_deflate might make some sense as well. I'll give it a go along with your suggestions for MaxClients.
I'm assuming if I max it all out, I could hit a threshold where clients can't load the site at all because all of Apache is busy. No way to track that, I'm assuming, since they won't actually hit the Apache server?
72 requests currently being processed, 4 idle workers
I use it on LEMP
@mattm:
I'm assuming if I max it all out, I could hit a threshold where clients can't load the site at all because all of Apache is busy. No way to track that, I'm assuming, since they won't actually hit the Apache server?
I run a cron job once a day to search the Apache error log for MaxClients.
sudo grep MaxClients /var/log/apache2/error.log
I hardly ever get anything, but occasionally I get 1 or 2 entries. So, not a big deal. If it were a lot of entries, though, I'd be worried.
[Sun Sep 04 21:47:33 2011] [error] server reached MaxClients setting, consider raising the MaxClients setting
[Mon Sep 05 23:46:00 2011] [error] server reached MaxClients setting, consider raising the MaxClients setting
[Tue Sep 06 07:24:34 2011] [error] server reached MaxClients setting, consider raising the MaxClients setting
[Tue Sep 06 07:58:09 2011] [error] server reached MaxClients setting, consider raising the MaxClients setting
[Tue Sep 06 09:32:25 2011] [error] server reached MaxClients setting, consider raising the MaxClients setting
[Wed Sep 07 05:27:50 2011] [error] server reached MaxClients setting, consider raising the MaxClients setting
[Wed Sep 07 07:12:51 2011] [error] server reached MaxClients setting, consider raising the MaxClients setting
[Wed Sep 07 08:53:30 2011] [error] server reached MaxClients setting, consider raising the MaxClients setting
[Thu Sep 08 01:37:39 2011] [error] server reached MaxClients setting, consider raising the MaxClients setting
[Thu Sep 08 13:07:52 2011] [error] server reached MaxClients setting, consider raising the MaxClients setting
[Thu Sep 08 16:47:57 2011] [error] server reached MaxClients setting, consider raising the MaxClients setting
[Fri Sep 09 09:13:55 2011] [error] server reached MaxClients setting, consider raising the MaxClients setting
[Fri Sep 09 10:38:25 2011] [error] server reached MaxClients setting, consider raising the MaxClients setting
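If you want a quick per-day tally instead of eyeballing the raw lines, something like this works against that log format (log path assumed Debian-style, as in the grep above):

```shell
# Count MaxClients warnings per day; fields 2 and 3 of the [timestamp] are month and day
grep MaxClients /var/log/apache2/error.log | awk '{print $2, $3}' | sort | uniq -c
```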
One thing you can also consider doing is switching Apache from mpm_prefork to mpm_worker, which is a lot more efficient, but more difficult to set up. It requires setting up PHP as a FastCGI (usually using FPM, I believe) since PHP doesn't like multithreading, and there are some other considerations to take into account, but it saves a ton of RAM because you pick the number of PHP processes, and you don't need a copy of PHP loaded into RAM just to handle a static request.
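For flavour, a sketch of what that setup looks like (all paths and numbers here are illustrative, not from this thread, and the actual .php handler wiring varies by distro):

```apache
# worker MPM: a few processes with many threads each, instead of one process per request
<IfModule worker.c>
    StartServers        2
    MaxClients          150
    ThreadsPerChild     25
</IfModule>
# mod_fastcgi: point Apache at a fixed, external pool of PHP processes
# (e.g. started by PHP-FPM on port 9000); static files never touch PHP
FastCgiExternalServer /var/www/php-fcgi -host 127.0.0.1:9000
```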
For much help in tweaking Apache beyond this, you'd really have to turn to the Apache wizards around here. I'm a lighttpd user, and many people around here use nginx, both of which are single-process single-threaded servers; you don't have to manage any of this stuff to the same extent because they're much simpler, but I'm a bit biased because I've been using lighttpd for a very long time.
In terms of load concerns regarding smush.it, I'm not sure I follow; it's not an active service that you integrate, it's just an image processing tool. You upload your images, it optimizes them, you download the result. Smaller images won't really help your server load, just your bandwidth bill, and probably not dramatically (but every bit helps, right? A bit from smush.it here, a bit from mod_deflate there, a bit from javascript minifiers there, it all adds up)
As for smush.it, I meant the server load as it optimizes the images (CPU crunching the images, etc.), but if I'm reading correctly, Yahoo's servers do this.
I'm switching to CloudFlare + W3 Total Cache tomorrow morning to see how it handles.
I'm also considering nginx as an alternative
Currently, at an off peak time, the http count is:
root@li [~]# pidof httpd | wc -w
74
I'm at MaxClients = 100 right now.
I do have keep alive off as well.
@Guspaz:
It requires setting up PHP as a fastcgi (usually using FPM, I believe)
Just remember to use mod_fastcgi, not mod_fcgid. FPM is not necessary at all, though it'd probably help. I'll tell you in a year or so after the next Debian upgrade; 6.0.x doesn't have FPM. ;)
@mattm:
Thoughts on having suphp enabled on a server with only 1 domain? Won't be shared hosting…
I don't think there's a point, just set Apache to run as whatever user; www-data for instance. Actually, since suphp executes scripts as the owner's user it is potentially less secure than a discrete www-data user. On a shared server of course it's more important to isolate users from each other.
@mattm:
Do you think it's more resource intensive? Or noticeable if I were to disable it?
Yes, suphp is more resource intensive. Presumably your script would run identically, only as www user.
Timeout 120
TraceEnable On
ServerSignature Off
ServerTokens Full
FileETag All
StartServers 5
<IfModule prefork.c>
MinSpareServers 5
MaxSpareServers 10
</IfModule>
ServerLimit 256
MaxClients 100
MaxRequestsPerChild 5000
KeepAlive Off
KeepAliveTimeout 2
MaxKeepAliveRequests 100
I've enabled W3TC Plugin and Cloudflare:
I just added W3TC as of this post, so the attached image shows the first drop in load/bandwidth saved, but it's normalizing right now.
@rsk:
@Guspaz:It requires setting up PHP as a fastcgi
Just remember to use mod_fastcgi, not mod_fcgid.
Could you elaborate on this?
I am currently not running PHP on my MPM worker instance, but my current understanding was that the fcgid package is better maintained and is the superior option.
Anyway. "Standard" php-fastcgi can work in two modes.
In one, you spawn simple "PHP worker" processes, each handling one request at a time.
That mode is fine with both mod_fastcgi and mod_fcgid. You use the Apache module's "process management" features, and it's the mod_* that spawns and kills PHP subprocesses as it needs them - quite like the MinSpareServers/MaxSpareServers/MaxClients directives work for Apache's own workers.
The other mode is obtained by setting PHP_FCGI_CHILDREN=<n>: the single process you spawn becomes a dispatcher that forks <n> PHP children itself.
That mode is necessary to have APC work, because APC uses shared memory, and needs all PHP processes to be forked from the dispatcher to have access to it.
Using APC with the "simple PHP workers" would cause each of them to have its own cache. Quite pointless.
Now, mod_fcgid doesn't do pipelining.
If you use fcgid to spawn a PHP dispatcher tree, it'll see only the single process it started itself (the dispatcher), and it'll feed it one request at a time, waiting for it to complete before sending the next one.
So, as a result, you'll have all the PHP children except one idling, and you'll be processing one PHP script at a time, period.
mod_fastcgi does pipelining, and thus lets you run your php dispatcher->children tree, making use of all the parallel workers, and letting them share the APC cache between them.
Seems like an easy decision to me.
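A sketch of the dispatcher-mode wrapper described above (values are illustrative; with mod_fastcgi you'd point a FastCgiServer directive at a script like this):

```shell
#!/bin/sh
# php-cgi reads these environment variables: one dispatcher forks 4 children
# that share a single APC cache, each recycled after 1000 requests
export PHP_FCGI_CHILDREN=4
export PHP_FCGI_MAX_REQUESTS=1000
exec /usr/bin/php-cgi
```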
The average 'text' page load is 32 KB. Do the hosting math on that. The rich experience is the kicker.
If you even need a 1024 after that, I'd be surprised. My similar setup is getting 2k hits per second on a 1024.
Outside of that, I would need to see more than your traffic metrics. What other metrics do you monitor - hits/sec, CPU, memory, disk I/O? I know Linode offers some of this, but I don't see any post related to those graphs.
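If I'm reading "do the hosting math" right, a quick sketch using the 225 GB/day figure from earlier in the thread:

```shell
# 225 GB/day at 32 KB per text page -> page loads per second
awk 'BEGIN { printf "%.1f page loads/sec\n", 225*1024*1024/32/86400 }'
```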
@Ericson578:
@mattm:Here you can see my graphs before the system seized up for running out of memory.
https://skitch.com/mattmm/fsi1f/linode-dashboard-linode124975 Sorry to be off topic, but I'm looking to start monitoring my box too - what are you using to generate these graphs, and does it use a web interface?
Thanks!
I recommend Cacti, which is built on top of RRDtool.
I'm still learning how to interpret the info.