Moving from MT & site craps out using Load Impact

Hi all, I have a site on Media Temple GS that crashes on busy nights.

I am in the process of moving it to a Linode, but the performance is so much worse & it dies at 40 users when I run a basic Load Impact test.

I am using the optimisation guide here but it's not helping, any tips?


I've ordered one of these…

![](http://www.ihcworld.com/products/images/microscope/Inverted-Microscope-XDS-PW403.jpg)

Hopefully that will help us discover any useful detail that might be buried in your question.

no harm in a bit of sarcasm, I enjoy a bit myself…

Linode 512, Ubuntu 10.04 LTS with LAMP installed.

Running WP & BuddyPress.

The site gets 20k page views on a busy night, 5k average on week days.

I do run a lot of plugins but with all disabled the load test still craps out.

Currently hosting on: http://xfactor-updates.com

Linode IP for testing is http://109.74.198.57/

let me know if any other details would be helpful.

any tips appreciated before I switch over.

php info: http://109.74.198.57/php.php

apc info: http://109.74.198.57/apc.php

That referenced page is for a machine with a GB of RAM, and the suggested MaxClients of 250 is going to cause real problems for probably all but the largest Linodes. To be honest, I think the settings are probably too high even for that machine. For example, if I'm reading it right they limit PHP to 64MB of memory, which means that even without the Apache process overhead, at 250 simultaneous clients they risk needing like 16GB of memory worst case. Of course, in practice most PHP requests likely use a fraction of that memory which is how they get away with it at all.

Once you overcommit memory heavily and start thrashing on a VPS, performance is going to tank. The odds are good that during your stress test, you're forcing your Linode to start so many Apache processes in parallel that you're swapping heavily and everything essentially comes to a stop while waiting for all the I/O to complete.

There are a large number of Apache tuning threads here in the forum (searching for MaxClients might be a good start) that you might try reviewing.

Certainly if you're on a Linode 512, just for the heck of it, drop MaxClients down to 15-20 or even lower to start with and see how it does. Watch peak memory usage during the stress tests.

Peak transaction throughput may suffer at that low value, but you shouldn't keel over completely at any point and if you find yourself with spare working memory under load, you can start raising the value, and start playing with requests per child and so on. PHP caching is definitely also worth pursuing. But MaxClients is the single big knob that should at least let you get into the ballpark in terms of resources.
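
For reference, assuming the pre-fork MPM, the relevant section of apache2.conf would end up looking something like this as a starting point (a sketch with deliberately conservative values to tune from, not a definitive recommendation for your stack):

# conservative starting point for a 512MB VPS - raise values only while free memory allows
StartServers 2
MinSpareServers 2
MaxSpareServers 5
# the big knob: worst-case memory is roughly MaxClients x average apache process size
MaxClients 15
# recycle children periodically in case anything in the stack grows over time
MaxRequestsPerChild 500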

Last time I tried this myself (Linode 512 with Apache fronting a Silverstripe CMS site over mysql), I settled on about 25 for MaxClients for testing with absolutely no other PHP or mysqld tuning. My apache processes were averaging about 16-18MB in resident size, and I was getting 20 requests/s. The system floated with around 50-75MB free during the test. The processing was entirely CPU bound, so Apache's requests per child configuration made little difference, but I could keep churning 20 requests/s as long as needed.

Now there's a variety of things I could do to try to increase the processing rate, but increasing MaxClients would not be one of them. Even just going above 30 pretty much guaranteed I'd thrash and performance would absolutely tank. But, for example, I tried with 50, and ended up going from almost 100% cpu bound, to almost 100% I/O wait bound and no free memory. My transaction rate during tests dropped to 3/s (an 85% drop) and 72% of 200 test transactions failed to complete. I even generated a kernel panic when the machine OOM'd without finding a killable process (I had only configured 256MB of swap). All from changing MaxClients from 25 to 50.

It's useful to note that while MaxClients keeps memory usage in check, even a low setting can deliver high request rates as long as the page processing is fast, even when memory could in theory permit a higher value. For example, static content through the same system (no PHP used by the page, resident apache process size around 4-6MB) ran at about 6000 requests/sec. I had a lot more free memory during that test, but even without leveraging all of it I got decent rates.

Hmm, this got pretty long - I was originally just going to say "drop MaxClients" :-)

– David

There's a handful of discussions on the forum about optimization, plus several Linode Library articles that discuss configuring a LAMP stack for optimal traffic.

The page you linked for optimization may be good advice for some people - but it's counterproductive on a limited-memory VPS (in particular, the MaxClients and MaxRequestsPerChild values are VERY high for a 512MB VPS).

db3l, vonskippy,

perfect thanks, that's the kind of advice I need.

Will reduce the MaxClients and report back.

I have no issues upgrading to a higher memory Linode, just want to get optimal settings working first before I switch over.

@xfactor-updates:

db3l, vonskippy,

perfect thanks, that's the kind of advice I need.

Will reduce the MaxClients and report back.

I have no issues upgrading to a higher memory Linode, just want to get optimal settings working first before I switch over.

Definitely do some tuning first. I run a phpBB on my 512. I know WP is heavier, but my peak last month was 50k pageviews in a day and my linode wasn't sweating in the least. I had some sweating back in January when I got about 40k in a 3-4 hour period, but that was also prior to the bump to 512 (it used to be 360).

Rule of thumb for LAMP servers:

MaxClients and/or ServerLimit should be no more than RAM (MB) / 20
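
For example, using that rule of thumb a Linode 512 works out to roughly 512 / 20 ≈ 25, and a Linode 1024 to roughly 50.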

xfactor,

Are you using a cache plugin for WP?

I have tried lots of different Apache, PHP & MySQL configs now & the load test still craps out.

Can anyone point me in the direction of optimal config files for a WordPress set up on a 512 linode?

Does upgrading to a 1024 linode make much of a difference?

I have no problems upgrading but while I'm testing I'll try any OS on 512 to see what results in the best performance.

@xfactor-updates:

Does upgrading to a 1024 linode make much of a difference?

It's only likely to make a difference if the reason for the failure is running out of memory, and the needed configuration would fit into 1GB but not 512MB. Otherwise you'll just have the same issue slightly later on the load curve with the Linode 1024. So to answer that question you need to know the memory working set your configuration needs.

Are you sure you aren't running out of memory with your testing? Can you provide some data on the configurations you've tried and their results? What does "crap out" mean with respect to your testing?

I have a difficult time believing that a low enough MaxClients can't prevent a lock-up on a Linode 512 no matter what the application or load, though perhaps at a horrible transaction rate. But maybe by "crap out" (or your original "dies" comment) you just mean it isn't meeting your expectations for transaction rate?

Knowing the actual configuration modifications you've made, and how the test is failing might help. Perhaps it's actually some bug in the application stack that is hanging things up (in which case the Linode should still be responsive but your app itself fails to work) which might be a completely different kettle of fish.

Although I suppose you could take the other tack of just creating a very large Linode as a test and see if it performs differently. You can always cancel and any remaining fee will be prorated back to your account as a credit.

– David

db3l

thanks for the fast reply, by crap out, I mean I can't access the site via HTTP & SSH responses are extremely slow.

If I run a clean install of WP & import my posts by SQL, without activating any plugins I am having the same issue.

What OS and LAMP config files would you recommend for a WordPress blog with 5k page views a day? I will work from there.

Open another SSH connection to your server, and run top. Leave it running, and do whatever it takes to make your server "crap out" again. While it's crapped out, take a screenshot of top and post it here. That'll be the easiest way for knowledgeable people to figure out what's happening.

If you don't have a caching plugin installed on your WordPress blog, try WP Super Cache or W3 Total Cache. Try them both, one at a time, but not both of them at the same time. Take some time to select all the recommended settings, and do your load test again. See if either plugin makes things better.

You shouldn't have to upgrade your Linode to handle 20K page views per day. As soon as you find the culprit and fix it, you'll be able to get by with a Linode 512 and save $$.

@xfactor-updates:

db3l

thanks for the fast reply, by crap out, I mean I can't access the site via HTTP & SSH responses are extremely slow.

If I run a clean install of WP & import my posts by SQL, without activating any plugins I am having the same issue.

With what exact configuration for Apache?

I also second hybinet's suggestion - using top is a quick way to watch your memory usage during the test. If you find yourself exhausting free memory, the working set of your current configuration is too high.

– David

Enable some sort of caching plugin, drop MaxClients down to 15, and turn off KeepAlives. (I realize these are all things people have already said; I'm just trying to distill the most important points out for you)

Assuming most of your traffic is from readers who aren't logged in, that will get you a long way. Most of the caching plugins out there just serve a static file to unauthenticated users. Apache can serve a lot of static files very quickly.

If you find you still have a lot of free RAM (I mean 'free' in the sense that it's only being used for disk i/o caching http://www.linuxatemyram.com/ ), then feel free to bump MaxClients up a bit. Start low and work your way up.

Remember though, Linux likes to cache disk reads, and is pretty good about improving i/o performance if you leave enough memory lying around for it to use, so don't get too aggressive in reducing your free memory.
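
For the Apache side of those suggestions, the relevant directives are just something like the following (a sketch; whether to disable KeepAlive entirely or just shorten its timeout is a matter of taste):

# apache2.conf
KeepAlive Off
# or, if you'd rather keep keepalives, shorten the timeout instead:
# KeepAliveTimeout 2
MaxClients 15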

OK, have reduced MaxClients down to 15 and turned off KeepAlives.

Linode unresponsive on a 10 user test.

here is a screenshot of top at the time:

![](" />

Yep, the reason you're unresponsive is that you've run out of memory, used all your swap and are thrashing (68% wait, kswapd is your top CPU process). I wouldn't be entirely surprised if you had a kernel panic at some point under this load if the OOM had nothing to kill.

So the good news is that it's clear what is happening (and it's what has been suspected in this thread), just still not why. The question remains what is tying up so much memory under load, as the processes in the top display only account for a small portion. We're one step closer, but we still haven't really identified the primary culprit for the resource use. Once we get to that "ah ha" moment, it should be easier to then work forward to plan a reasonable working memory set.

The apache processes shown aren't enough, but there must also be some missing (if you have MaxClients of 15) - probably due to the default sort order. But even if all 15 were using the high value shown of 29MB I don't think you'd have so little free memory, so someone else is using up a big chunk. Maybe the database server. Or there's an anomalous apache case we can't currently see.

If I were you, I'd probably drop MaxClients to 1, sort the top display by memory usage, then try to hang the machine again. You might actually prevent a hang at a setting of 1 (since it should free up a few hundred MB of memory) but hopefully will still see the peak memory user in any case. If everything works with a setting of 1, that's useful data too, but then you can start increasing it until you do get a hang.

Having a taller window to get more process entries in the top display couldn't hurt, but the key is likely to be the first few top memory users.

Oh, and I'd probably just also look at current process memory usage even when idle. If there's some process we haven't discussed yet sitting around tying up a large chunk of memory even in idle conditions, that would be useful to know.

– David

PS: I think this data also continues to argue for not just upgrading the machine plan yet, since you're already clearly using at least 768MB (physical+swap), and at the moment there's every reason to expect that your configuration would simply grow to encompass any larger memory on a larger plan. You may end up with a larger plan as a better fit, but you still need to identify where the usage is coming from in order to determine that.

thanks again for all the help with this, here is a screenshot sorted by memory with MaxClients at 15; will reduce to 1 and post again.

![](" />

Switching MaxClients to 1 doesn't kill the server but it still gets extremely slow.

![](" />

@xfactor-updates:

Switching MaxClients to 1 doesn't kill the server but it still gets extremely slow.

The benchmark testing gets slow, or the server gets slow? The former I'd expect due to the fact that MaxClients of 1 essentially serializes all requests through the single process. Given these stats (which don't show much in the way of CPU or I/O overhead) I'd probably expect the Linode itself (say via ssh) to be fairly responsive. You seem to have eliminated the memory stats from the top output, but I'm assuming you had some memory free.

An important fact to highlight is that at least you have a configuration (poor performing though it is) that doesn't ever completely keel over. That's real progress in terms of troubleshooting, and at least provides a stable starting point to work up from.

The fact that your single apache2 process is using 81MB of resident memory is a big deal. If that happens to more than a few of the possible processes under your older MaxClients of 15 you could easily explain the problems you were getting into. Unfortunately, the prior post doesn't actually seem sorted in memory order (still CPU) so can't say for sure - but even there you had a much higher average resident usage (39-40MB) than the prior summary.

So, my takeaway from this is that your stack is actually using quite a bit of memory for the apache processes on average (say 30-40MB), with some extreme peaks (80+MB). That may or may not be something you can optimize, but it certainly cries out for starting with a very low MaxClients (just divide those resident sizes into the available memory). I'm not really a PHP guy, but I seem to recall comments somewhere about some of the cache solutions needing a lot of memory - since that's your tightest resource at the moment, you might also experiment with disabling any caching for the heck of it.

Sans optimization, a next step would be to slowly raise MaxClients, testing with each change, and watching until you got tight on free memory (or see a spike in CPU or I/O, but I suspect memory will be the first resource you exhaust). Given these resident sizes I'm thinking you won't get much beyond 5-10. That can give you an inkling of best performance without any further tuning. Keep the process list sorted by memory so you can observe peaks in apache process sizes.

You might also consider making sure that MaxRequestsPerChild is low (but not 0), in case there's a memory leak in the application stack that lets the longer-lived apache processes grow. Such an issue could also explain why limiting things to a single process resulted in an even larger size. Setting it to 1 will hurt performance, but ensure the smallest footprint in such cases since the process exits after each request forcing a resource cleanup. Note that a setting of 1 will probably also make it harder to catch all the processes in top - dropping the refresh interval to 1s can help but you'll still just be seeing snapshots at intervals that are large compared to the process creation/destruction interval.

To put these two parameters in perspective, consider a load test pummeling your server with thousands of requests. Apache is going to let MaxClients copies of itself be running simultaneously, each handling a request. Thus, your memory footprint will be MaxClients times the process size, which in turn depends on what that process does, such as your PHP code. Apache leaves a process around for MaxRequestsPerChild requests, handling multiple requests over its lifetime. So if your processing causes the apache process to grow by some amount per request, your peak usage will be something like MaxClients * (initial apache size + (MaxRequestsPerChild * per-request growth)).
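
To put some made-up numbers on that (purely illustrative, not measurements from your box): if a fresh apache child starts at 15MB resident and grows by 1MB per request, then MaxClients 10 with MaxRequestsPerChild 50 gives a worst case of roughly 10 * (15 + 50 * 1) = 650MB, which already exceeds a Linode 512 before counting MySQL and everything else.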

Your goal here is to get these parameters high enough for maximum throughput, yet low enough to not exhaust memory in the worst case. Even if you do have some leak/growth, letting at least a few requests get handled by a single child can help a lot with the fixed overhead of starting up the process and PHP interpreter, while protecting against unbounded growth over time. So for example, if you have a choice between MaxClients of 10 and MaxRequestsPerChild of 1, versus MaxClients of only 5 but MaxRequestsPerChild of 5, the latter might actually perform better due to a slower rate of process creation/destruction.

That should let you settle in on a rough working set for the current Linode. Let's say that you can only get MaxClients to 5, and due to a leak with higher MaxRequestsPerChild you're stuck with that at 1 to help bound individual apache process sizes. Now you'll know the rough requests/s you can handle on a Linode 512 without ever dying, and judge if that's fast enough. Remember also that even low request/s numbers can yield very large daily page visit counts. An average of 1/s is still 86,400 per day though clearly you want instantaneous peak req/s rates to be higher to support that on average. But your original target of 5K page views a day (let's say over an 8 hour period, and each page needs 10 individual requests, so 50,000 http requests in 8 hours) could be met with about 1.75 req/s on average. And that's probably a pretty conservative estimate since the static parts of the page can be serviced much more rapidly (and with less memory) by apache so won't be anywhere near as slow as your PHP-backed content.

If you judge the rate insufficient, then yes, at that point increasing the Linode plan will let you continue to raise MaxClients (slowly) depending on how much more memory the plan has, and gain some parallelism - at least up until the point where CPU or non-swapping I/O overhead begins to dominate.

Or of course, at that point digging more deeply into your application stack to find out if there are bugs, bottlenecks or things that can be improved there becomes an option too. But at least you'll have a stable platform to attempt tuning on.

Geez, this got long again … sorry about that.

-- David

Thank you so much & don't be sorry about a long reply.

Gives me a lot to look at.

Can't wait to move the site to a linode, support here is way too good.

Will report back tomorrow with some new tests based on your recs.

FYI, if you're running top, you can hit capital M to sort processes by memory usage.

Thanks again for all the help with this.

Just testing again with MaxClients set at 2.

After about 10 minutes of running a 10 user load test, this is the result sorted by memory.

![](http://109.74.198.57/wp-content/top/Capture-bymem.PNG)

If I offload the mail to gmail, do I need clamav running?

and a minute later sorted by CPU as I noticed I'm hitting a high CPU usage at times.

![](" />~~

You are hitting 80 megs per apache. The math is pretty easy – if you had MaxClients = 10, you would be using 800 megs. So you can probably sustain MaxClients of 5 or so (=400 megs).

I have not used Wordpress, but these processes seem huge. For my codeigniter sites, the processes typically max out at 30 megs, and they include a lot of caching etc. I also use nginx as a proxy, so that might help by offloading a lot of the work.

Are there plugins you can turn off in Wordpress? Even if not, I would guess MaxClients of 5 should handle all the traffic you ever need it to.

OK, get it now, thank you.

any recommendations for the other settings?

StartServers 2
MaxClients 2
MinSpareThreads 25
MaxSpareThreads 75
ThreadLimit 64
ThreadsPerChild 25
MaxRequestsPerChild 0

I do have some plugins I will be removing & I'll hardcode a lot of the links to reduce database queries, just want to get these settings right first.

So you're running Apache, and Webmin, and Python, and Perl, and MySQL, AND Postgres, and BIND, and an SMTP app, AND a POP3 app, and an antivirus…

Besides optimizing Apache (which is pretty much a given), you might want to trim back some of the non-essentials.

@xfactor-updates:

Thanks again for all the help with this.

Just testing again with MaxClients set at 2.

After about 10 minutes of running a 10 user load test, this is the result sorted by memory.

This seems about as high as I would go without removing other tasks or finding out how to shrink your average apache size. You have a reasonable amount of memory for caching purposes, but not a ton. So before increasing MaxClients any further, you need to find a way to save memory elsewhere. Or if your request rate is ok with this configuration, you could declare a short term victory, pending further optimization.

> If I offload the mail to gmail, do I need clamav running?

If you don't have inbound files flowing through your machine (whether mail or otherwise) that you want to scan, absolutely, I'd get rid of it. This should let you add at least 1 to MaxClients.
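
On Ubuntu that's roughly the following, though the exact package names depend on how it was installed:

sudo service clamav-daemon stop
sudo aptitude purge clamav-daemon clamav-freshclam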

– David

PS: BTW, your screen shots don't seem to be working from wherever they are hosted at the moment. If you still have them yourself and have any way to perhaps post them in this thread as text, it might help ensure the context is preserved for future readers.

@xfactor-updates:

OK, get it now, thank you.

any recommendations for the other settings?
Hmm, I have to admit I had been assuming that you were using the pre-fork MPM (which I thought was more common with PHP) and not the worker MPM. If you're using the worker model (threaded), then the multiple request threads per child might account some for the larger average apache process size. It probably doesn't make that much difference if n requests are spread across n apache processes or n threads within a fewer number of processes, but you could also experiment with using the pre-fork model to see if you get better behavior and/or control over your process size.

In either case, if you still have very large apache processes, you could also try with a low (but not 0) setting for MaxRequestsPerChild. A value of 0 here lets the same apache process run forever, so if there's a leak of any sort in the application stack, it could contribute to the unusually large apache process sizes.

You're basically looking at an iterative approach at this point - stick with your working configuration as a base, and then tweak in various ways to see how it affects your processing rate, memory usage, and general performance. Only fiddle with one or two parameters at a time or else you won't be able to tell what change is affecting behavior. There isn't necessarily a single "right" set of configuration parameters, though you ought to be able to determine what works best for your application stack and load.

I'll second what others have said too - watch your heavy resource (particularly memory) users other than apache too, and decide if you really need them, or if the resource they are using could be freed up for use by apache and your web processing stack.

– David

Thanks again guys.

I'm setting up a new disk image to check memory usage without all the unneeded processes that got pulled in when I installed Webmin.

Forgot this would kill the images, will re-add them shortly.

@db3l:

Hmm, I have to admit I had been assuming that you were using the pre-fork MPM (which I thought was more common with PHP) and not the worker MPM.

Sorry, I am using pre-fork; I copied & pasted the wrong bit from the config file.

ok, seems to be an issue with some of the WordPress plugins I am using.

Disabling all but essential plugins has reduced 80 megs per apache to 15 megs per apache.

Will re-enable one by one to find the culprit.

Anyone know the ideal or expected MB per apache on a WP install?

You might need to work on tuning APC.

I can't see your apc.php today, but I can see from your phpinfo.php that your APC memory is only 30MB, the Ubuntu default.

Is your apc.php pretty ugly with full cache, lots of fragmentation and high cache clear count?

If so, you should increase it to about apc.shm_size=80 in /etc/php5/conf.d/apc.ini and also increase the system shared memory limit to about 96MB on Ubuntu using an /etc/sysctl.conf setting of kernel.shmmax = 100663296

Keep increasing the size of the APC cache until you get no cache clears and no fragmentation.
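
Putting that together: in /etc/php5/conf.d/apc.ini set

apc.shm_size=80

and in /etc/sysctl.conf set

kernel.shmmax = 100663296

then apply with

sudo sysctl -p
sudo service apache2 restart

The sizes are starting points; keep an eye on apc.php and adjust.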

followed all the advice here & things are looking so much better, thank you all.

here is the result of the same test now after 10 minutes load test:

![](http://109.74.198.57/wp-content/top/Capturenew.PNG)

My prefork MPM is now:

StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 2
MaxRequestsPerChild 200

Increased APC memory

php info: http://109.74.198.57/php.php

apc info: http://109.74.198.57/apc.php

Any other tips or suggested MySQL configurations?

I'm pretty sure that having MaxClients < StartServers is a waste of RAM on eternally-idle processes…

… but then, I use worker and I don't know for sure about prefork.

Change to Nginx.

wow, must say that using Nginx as a reverse proxy has helped massively & was extremely easy to set up.

followed this guide

@xfactor-updates:

wow, must say that using Nginx as a reverse proxy has helped massively & was extremely easy to set up.

followed this guide

It's even easier to just ditch Apache entirely and use Nginx and PHP via FastCGI.

Or apache-worker with fastcgi. Really, once you move PHP out to FCGI semipermanent processes, the choice of frontend webserver is a matter of preference. Apache may be more practical because of all the "assuming apache's rewrites" .htaccess files out there.

Finally switched DNS to my Linode & while it hasn't fully propagated yet, I notice I am dipping into swap memory. Is there anything in the screenshot below I should be worried about?

Running MaxClients at 5, should I cut back to 1 or 2 while I tweak other settings?

![](" />

16 KB isn't much of a dip, in the grand scheme of things.

Nah, 16K barely qualifies even as "dipping"; that's small enough to essentially just be statistical noise. I wouldn't even worry if that grew to be a few MB, unless you start to see the I/O wait % rise. It's really when your working set forces constant swapping that things fall over the cliff and you'll recognize that when it happens, having now been through it before :-)

As for dropping MaxClients, mostly a judgment call. Doing so will free up a little more memory for the kernel to use for filesystem caching which can help with performance as long as the fewer simultaneous clients don't bottleneck things first.

You may find over time that mysqld will grow in memory usage (some of which can also be tuned as needed) as your database grows.

But you look pretty reasonable at this point so if I were you I might let things settle in a little further and gather more data about what steady state looks like. Just spot check your resources in the near term, or better yet set up something like munin to monitor it to let you review it over time.
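
If you go the munin route, the Ubuntu packages are munin (the grapher/web pages) and munin-node (the agent that collects the stats); for a single box you install both:

sudo aptitude install munin munin-node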

– David

something is very wrong,

if you have php 5.3.3 turn off apache, and use php-fpm to manage your php connections, post a TOP image then.

@fiat:

something is very wrong,

if you have php 5.3.3 turn off apache, and use php-fpm to manage your php connections, post a TOP image then.

why do you say something is very wrong?

you should be aiming to reduce the memory consumption of apache; it looks as if it has actually increased.

using php-fpm or fast-cgi will reduce total memory consumption as only one instance will run, rather than an instance per apache process; at least, that is my understanding.

I had a similar problem before switching to lighttpd/fast-cgi.

With apache + mod_php I was averaging ~384MB used; now ~170MB used (active).

@fiat:

as only one instance will run, rather than an instance per apache process, at least, that is my understanding.

If you had only one instance of PHP, you could handle only one PHP request at a time, serializing your server's performance down to nothing.

Facts:

1. With mod_prefork+mod_php there's a separate large Apache process WITH an integral PHP parser launched for every single http connection, no matter if it's for a php script or a simple image file. That's a huge hog, and you plainly have to cut MaxClients down to a number that'll accommodate all these processes in memory.

2. With fastcgi, you "detach" PHP from Apache. There's a "manager" process that spawns $PHP_FCGI_CHILDREN sub-handlers, so it can handle that many parallel scripts (see the spawn sketch at the end of this post). Your apache (or any other webserver) talks to it, but only if it actually needs to execute a php script - "plain" files are served by the webserver directly.

3. If you want to use fastcgi and PHP in Apache, you need to use mod_fastcgi - mod_fcgid doesn't pipeline requests, and assumes "one subprocess = one script at a time". The whole PHP tree IS one subprocess to Apache, as you spawn only the "manager". So, mod_fcgid thinks you can execute only one PHP parser at a time.

4. If you're moving to fastcgi, there's nothing (except other non-threadsafe apache modules you may have, that is) to hold you onto prefork. You can switch to mpm-worker, set a nice number of threads, and enjoy your fast, memory-efficient Apache setup.

I have an mpm-worker setup that has 50-250 threads active (split 25 per process; probably could go for 50 per process, but don't see any need), and 25 php slaves. That means I can handle up to 250 parallel connections, 25 of which can be executing a php script, and rest pulling html files and/or images - otherwise, request is queued until a thread and/or a parser is free. It all fits in about 200MB of RAM, and that's including the 64MB shared-memory APC cache.

This thing serves about 200k hits per day without a hitch, and with about 18% CPU load in the rush hours.
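
For reference, "detaching" PHP as in point 2 is commonly done with spawn-fcgi; a rough sketch, where the address, port, child count and binary path are all just examples to adapt:

sudo spawn-fcgi -a 127.0.0.1 -p 9000 -C 4 -u www-data -g www-data -f /usr/bin/php5-cgi

The webserver is then pointed at 127.0.0.1:9000 for anything ending in .php.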

Once you put nginx in front of apache, apache pre-fork isn't such a horrible choice anymore.

1. You configure nginx to serve static files directly, so apache doesn't have to use a php-loaded thread to serve static files.

2. You disable keepalives on apache so php-loaded threads don't have to wait around for the keepalive time and can die immediately.

The memory and performance difference between nginx+php-fpm and nginx+apache-prefork isn't much.

Now, I use nginx+php-fpm and recommend it highly. But using nginx in front of apache-prefork does solve most of apache-prefork's disadvantages.
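
A minimal sketch of that nginx-in-front-of-apache arrangement, where the server name, document root and backend port are assumptions (Apache would be moved to listen on 127.0.0.1:8080):

server {
    listen 80;
    server_name example.com;
    root /var/www/wordpress;
    # serve static files directly so Apache/PHP never sees them
    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires 7d;
    }
    # everything else is proxied back to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}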

I would second (or third… or fourth) the suggestion to try nginx. It's much lighter than Apache. You can use it as a proxy, but I would probably just use it as a primary httpd. WordPress isn't too difficult to set up with nginx, either.

Thanks for the advice, trying Nginx on its own with php5-cgi & it's consuming more memory than Nginx as a proxy to Apache, see screenshot.

any other tuning advice?

![](" />

It's using almost identical memory. Both apache and php-cgi are using 13MB per process in addition to the 29-31MB in the shared cache.

If you want to go without apache as a backend, you really should go with php-fpm instead of php-cgi.

php-fpm is another server API that builds on FastCGI and adds a number of important features and improvements for process management. It was made part of the official php package in 5.3.3 and is in the Ubuntu Maverick version. php-fpm has nice logs and stats and is pretty much tailored for use behind nginx.

From your php version of 5.3.2-1ubuntu4.5 you're on Ubuntu Lucid, you can use my repo for php-fpm.

add-apt-repository ppa:brianmercer/php
aptitude update
aptitude install php5-fpm

Or you could upgrade to Maverick.

You asked about mysql tuning. Get mysqltuner.pl and tuning-primer.sh and start getting familiar with mysql directives.
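
Both are single scripts; once you've downloaded them from their respective project pages, running them against the local mysql is just:

perl mysqltuner.pl
sh tuning-primer.sh

They'll read or prompt for your mysql credentials and suggest my.cnf changes based on the server's runtime statistics.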

> From your php version of 5.3.2-1ubuntu4.5 you're on Ubuntu Lucid, you can use my repo for php-fpm.

add-apt-repository ppa:brianmercer/php
aptitude update
aptitude install php5-fpm

Tried using this method. Everything installed OK. Now how do I enable php-fpm? My php info is still saying:

Server API : CGI/FastCGI

and when I look in my processes it says php-cgi.

You can start(/stop/restart) php-fpm with

sudo service php5-fpm start

That is the recommended way to start/stop daemons and is compatible with upstart and old scripts.

Killing php-cgi depends on how you started it. Maybe "sudo /etc/init.d/spawn-fcgi stop" or if necessary "sudo killall php-cgi" or "sudo killall -9 php-cgi"

You can also save RAM by using fewer PHP processes, if you've got lots of spares. I've run very memory-constrained boxes with as few as two, since that's sufficient for light (to possibly medium) load if you don't have any long-running scripts. I wouldn't suggest going that low on a production box if you've got RAM to spare, but 4 to 6 goes a long way if there aren't any long-running scripts.
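
The worker count lives in the php-fpm pool configuration (the file location varies by package; look under /etc/php5/fpm/). A minimal static pool for a small box looks roughly like:

; keep a fixed, small number of PHP workers
pm = static
pm.max_children = 4

If your load is bursty you can use pm = dynamic instead, with pm.max_children, pm.start_servers, pm.min_spare_servers and pm.max_spare_servers.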
