nginx + php-fpm installation on Linux (CentOS 5.3)
You must install nginx or lighty (lighttpd) web servers.
For nginx to work with Drupal (or another CMS) you need to install php-fpm; it is better than spawn-fcgi.
Tested on a completely fresh image:
Distribution: Linux CentOS 5.3
===== preparing the environment ======
You'd better run this first, just to be sure the build prerequisites are in place:
yum install gcc
yum install make
yum install libxml2-devel
yum install libevent
yum install libxml2
yum install diffutils
yum install wget
yum install bzip2
yum install patch
yum install autoconf
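As a quick sanity check after the installs, a small helper (the name `check_prereqs` is hypothetical, not part of any package) can confirm the build tools actually landed on the PATH:

```shell
# check_prereqs: print "missing: <tool>" for any command not found on PATH.
check_prereqs() {
    for tool in "$@"; do
        command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
    done
}

# The yum packages above should provide all of these:
check_prereqs gcc make wget bzip2 patch autoconf
```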
mkdir /download   # or any directory you prefer
cd /download
======= download + compile + install libxml2 and libevent =======
wget <download URL for libevent-1.4.12-stable.tar.gz>
tar -xzf libevent-1.4.12-stable.tar.gz
cd libevent-1.4.12-stable
./configure --prefix=/usr
make
make install
cd /download
wget <download URL for libxml2-2.7.2.tar.gz>
tar -xzf libxml2-2.7.2.tar.gz
cd libxml2-2.7.2
./configure --prefix=/usr
make
make install
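Once `make install` finishes, it is worth confirming the library version on the PATH is at least the one just built (on a real system, `xml2-config --version` reports the installed libxml2). A hedged helper for the comparison, assuming plain dotted numeric versions:

```shell
# version_ge A B: succeed if dotted version A is >= version B.
# e.g.  version_ge "$(xml2-config --version)" 2.7.2
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -t. -k1,1n -k2,2n -k3,3n | tail -n1)" = "$1" ]
}

version_ge 2.7.2 2.6.32 && echo "libxml2 is new enough"
```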
======= download php + php-fpm packages =======
cd /download
wget <download URL for php-5.2.11.tar.bz2>
wget <download URL for php-fpm-0.6~5.2.11.tar.gz>
====== build the patch ======
tar -xzf php-fpm-0.6~5.2.11.tar.gz
php-fpm-0.6-5.2.11/generate-fpm-patch
===== patching php with fpm =====
bzip2 -cd php-5.2.11.tar.bz2 | tar xf -
patch -d php-5.2.11 -p1 < fpm.patch   # apply the patch produced by generate-fpm-patch (filename may differ)
cd php-5.2.11
./buildconf --force
./configure --enable-fastcgi --with-fpm --with-libevent
make all install
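The patch-and-build steps above can be wrapped in one function so that a failed `./configure` doesn't silently fall through to `make`. This is only a sketch; the function name and source directory are assumptions:

```shell
# build_php: run the build sequence in a subshell that aborts on the
# first failing command.
# Usage: build_php /download/php-5.2.11
build_php() (
    set -e                  # stop at the first error
    cd "$1"
    ./buildconf --force
    ./configure --enable-fastcgi --with-fpm --with-libevent
    make all install
)
```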
===== after installation =====
/etc/init.d/php-fpm stop
/etc/init.d/php-fpm start
/etc/init.d/php-fpm reload
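To actually serve Drupal through this setup, nginx has to hand `.php` requests to the php-fpm listener (127.0.0.1:9000 is php-fpm's usual default; the document root below is an assumption, so adjust it to your site):

```nginx
# Pass PHP requests to the php-fpm backend
location ~ \.php$ {
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME  /var/www/drupal$fastcgi_script_name;
    include        fastcgi_params;
}
```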
===== php-fpm documentation ======
13 Replies
@eranb22:
Since a light server is a must on a VPS because of a limited RAM,
You must install nginx or lighty (lighttpd) web servers.
Thanks for the guide. I may get brave someday and try it out. However, I do have to respectfully disagree with this. Sure, nginx or lighty will probably make things faster and lower-impact, and allow you to run a larger site on a low-memory VPS. But there's no "must" about it. Apache is perfectly serviceable on a Linode 360 and runs just fine. I routinely have RAM free and a speedy website for my medium-traffic sites, and considering how popular apache is, I'm hardly the exception.
Just be careful with absolutes.
What apache version do you use?
How many people per day?
How did you configure the server?
Do you use a proxy?
Thanks
@eranb22:
Thanks for the reply. Well, I'm not an expert; I saw a lot of posts saying that apache is a problematic server because of RAM.
I suspect that in a lot of these cases, it's less apache itself than the fact that many configurations wrap in large, resource-hungry, embedded modules (like mod_php), which when combined with distribution default apache configurations often yield a large number of simultaneous client processes, which then starts swapping and kills the machine, since disk I/O is often the most heavily-contended VPS resource. It doesn't help that the issue probably doesn't show up in light testing, but only when the client load gets heavy and the overly lenient configuration limits start to hurt.
The single best thing you can do in the VPS environment with apache is ensure that you limit the number of simultaneous client processes to stay within your available resources (avoiding swapping if possible). Best throughput for a system is rarely achieved by trying to do everything in parallel - letting requests queue up to an appropriately sized configuration is best. But given such a configuration, apache can run quite well even on a Linode 360.
Of course, if you don't need specific apache features, something like nginx or lighttpd is certainly not a bad idea to maximize resources available on your Linode for other processing.
Or, what I'm doing in some cases is fronting apache with nginx, proxying over requests that need support for something apache has but nginx doesn't. Such a configuration can let you tune down the apache configuration even more tightly since it no longer needs to be tied up serving static content. The blog.a2o.si link you reference helps highlight how that plays to each server's strength, as it shows how nginx tested better at static content, while conversely apache did better with the php processing.
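That kind of split can be sketched in nginx configuration like this (the backend port 8080 and the paths are assumptions; apache would need a matching `Listen 8080`):

```nginx
server {
    listen 80;
    root /var/www/site;

    # nginx serves static files itself, cheaply
    location ~* \.(css|js|png|jpe?g|gif|ico)$ {
        expires 7d;
    }

    # everything else is proxied to apache on a local port
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```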
– David
OK, with dynamic content apache is faster, but it eats a lot of RAM.
@eranb22:
But what about the RAM issue?
OK, with dynamic content apache is faster, but it eats a lot of RAM.
I still think it depends. Is apache's footprint somewhat larger than nginx's for example? Sure. Does that automatically have to be a problem or preclude its use? No.
Now, if you find yourself with an apache config using the mpm_prefork worker with MaxClients set to 150, can that quickly thrash a Linode 360 under load? Absolutely. But if apache is giving you useful functionality, the answer may just be to adjust the configuration rather than automatically discard apache as an option.
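For instance, a tightened mpm_prefork section for a small VPS might look like the fragment below; every number here is an illustration, to be sized from your own measured per-process memory rather than copied verbatim:

```apache
<IfModule mpm_prefork_module>
    StartServers          2
    MinSpareServers       2
    MaxSpareServers       4
    MaxClients           12
    MaxRequestsPerChild 500
</IfModule>
```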
For example, on my nginx+apache Linode, here's a current snapshot of memory usage:
~$ ps -C apache2 -C nginx -o %mem,vsz,sz,rss,args
%MEM VSZ SZ RSS COMMAND
0.1 10472 2618 416 /usr/sbin/apache2 -k start
0.1 10244 2561 416 /usr/sbin/apache2 -k start
0.3 68020 17005 1352 /usr/sbin/apache2 -k start
0.3 68020 17005 1284 /usr/sbin/apache2 -k start
0.1 4544 1136 684 nginx: master process /usr/local/sbin/nginx
0.3 4820 1205 1336 nginx: worker process
0.2 4688 1172 1072 nginx: worker process
Apache's larger virtual size is in large part a much larger collection of shared libraries (32 vs. 9 for nginx), which are themselves shared between the apache processes, and much of which I will never even invoke given my configuration. Its current resident footprint isn't that much worse in my case.
Now, this is just a snapshot and there are tons of variables, so I make no claims that this is representative of anything beyond my own system. But it's certainly an existing example of a modest apache configuration.
Here's the key though - this is a fairly stock nginx configuration (2 workers), and it should see modest growth under load. But if I were using a more default apache configuration, with MaxClients 150 for example, it might end up with 10-12 processes (mpm_worker) or even 100+ (mpm_prefork) under load. So it's not the base overhead but the result of multiplying that overhead by the configuration.
Those default configurations are fine for dedicated servers with GBs of memory, but nowhere near appropriate for resource constrained VPSes. But you may not notice during initial testing when there just aren't enough simultaneous requests to push past the default initial process count.
In my case, I know that the ratio of dynamic content I'm serving through apache is trivial compared to static content, so my apache configuration is quite small, and the working set shouldn't get much above this - it certainly won't burn more than an extra process or two at most.
Now, I don't currently have something like mod_php loaded in my apache, so anyone else's apache footprint might be different, and likely somewhat larger. But then again, serving php via nginx is going to be larger too given the need for the external fcgi php process.
This isn't to say that nginx doesn't have some clear advantages over apache in constrained environments. It's my preferred server on all of my Linodes. But when comparing apples to apples I think that apache does get a little bit of a bum rap more due to its average default configuration, and I'm certainly willing to use it if I want a feature it has.
– David
Two Linode 360s I work on:
1. Ubuntu 8.04, Apache 2.2.8, and two PunBB forums. MaxClients set to 50. Gets ~50,000 hits per day, around 15,000 of which are for dynamic pages.
2. CentOS 5.3, Apache 2.2.3, and MediaWiki. MaxClients set to 25. I don't have good statistics, but it gets less than 10,000 hits per day total.
Edit: both use mpm_prefork
Level of security?
Which version is better and why?
apache 1.3, apache 2.0, or apache 2.2?
Thanks, it is important for me to figure that out once and for all.
Go with 2.2. It's been out for over 4 years now and development on 1.3 and 2.0 has been essentially dead for 2 years.
Or nginx is too young?
@BarkerJr:
Don't forget that most of your requests are for static content. Apache processes serving static content will use very little resources. My 360 is currently serving 50 clients. Load average 0.00, CPU 0.1%, RAM 60% used. This is an argument against setting your server limits too low. I set mine to 256.
I'm not sure an example of client load that doesn't happen to use up all available clients is a reason to avoid lowering the setting to as appropriate a value as possible. I agree you don't need to go lower than your system can handle, but choosing an appropriate size to protect against worst case is appropriate, and missing low is likely to have a smaller negative impact than missing high, especially for new administrators.
While the total resource usage for processing the request will differ, and clearly dynamic content processing may eat up further memory beyond the starting process size, the core/minimum resident memory footprint of all worker processes, regardless of request, are the same. That's why the option of fronting apache with nginx for static content is useful when your average apache worker process is heavy-weight, and/or you have to use something like mpm_prefork (say for php). Why burn such an expensive worker process just for static content.
For example, one apache system of mine has worker processes (including mod_php) of about 3MB at creation. That's 3MB whether that worker is going to run a php script or just return a static page.
If the 256 is MaxClients, I suspect your configuration could be a time bomb waiting to go off if the load gets high enough, depending on the worker you are using (there's more head room with mpm_worker than mpm_prefork, for example). 256 clients with mpm_worker might still only be a handful of processes (depending on ThreadsPerChild), while mpm_prefork could create 256 processes, which I seriously doubt a 360 could come close to handling, even without any modules.
Let's use your 60% memory usage, with 50 clients. Let's assume mpm_prefork, so memory usage should be linear in simultaneous clients (mpm_worker is more of a step function, as it uses up worker threads). Let's be generous and say only half of that is apache and half other stuff, so 30% of memory for 50 clients. Hitting 256 clients would need 180+% of your memory (150% for apache plus the 30% from other stuff). You'd be swapping like crazy and likely unusable.
What's important to realize is that in your current setup, dropping MaxClients to 50 would have no negative impact (it would still service your 50 clients simultaneously), but would protect against the worst case. What the right value for "50" is depends of course.
That's a large part of why people get bitten by this - you only see the impact of having an overly large MaxClients when you are under load and having problems keeping up in the first place. Until then, a far smaller number of processes will be around, just as if MaxClients were lower in the first place. Which is also why there's no penalty to setting it lower up front; it simply protects you in a worst-case scenario.
As long as your machine's ram can handle the total number of apache processes that may (worst case) be created due to the configuration, there shouldn't be a problem. But you should be estimating worst case, not typical, since anything beyond that is just going to thrash your machine anyway.
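That worst-case estimate is simple division: the RAM you can afford to give apache over the per-worker resident size. A hedged shell sketch with placeholder numbers (measure your own with `ps -C apache2 -o rss=`):

```shell
apache_budget_kb=204800   # ~200 MB you are willing to give apache (placeholder)
per_process_kb=15360      # ~15 MB average worker RSS (placeholder; measure it)

# Worst-case MaxClients that stays inside the budget
max_clients=$((apache_budget_kb / per_process_kb))
echo "suggested MaxClients: $max_clients"
```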
Of course, if you know that you'll never have more than 50 simultaneous clients through some limiter external to your box, then MaxClients won't matter either, since it'll never get used. But again, why not have a configuration that is safe should something go wrong and the request load spike?
– David