Just another MaxClients (prefork MPM) thread for Linode 512.

As title.

I have a Linode 512 under light load; it's used to serve some data to cell phones, nothing heavy.

Sometimes the load may increase considerably.

In normal conditions, this is the free -m output:

             total       used       free     shared    buffers     cached
Mem:           424        335         89          0         33        166
-/+ buffers/cache:        134        290
Swap:          255         10        245

The VPS is running CentOS 6 with the latest paravirt kernel from Linode.

Services running:

  • LAMP + phpMyAdmin

  • Postfix, Dovecot, Squirrelmail

  • Cacti for server monitoring (SNMP)

  • fail2ban

How should I set these parameters?

prefork MPM (prefork.c):

StartServers 3
MinSpareServers 3
MaxSpareServers 6
ServerLimit 30
MaxClients 30
MaxRequestsPerChild 2000

worker MPM (worker.c):

StartServers 3
MaxClients 150
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 0

36 Replies

Oh, not again!:twisted:

Can't tell exactly, because we don't know what causes the load to "increase considerably". It could be RAM, it could be a rogue script that eats up CPU cycles, or it could be a disk-heavy operation.

But MaxClients 100 definitely seems too high for a server with PHP on it.

Try this:

ServerLimit 15

MaxClients 15

That should leave you with plenty of RAM for other things. If you still get slowdowns after that, RAM might not be the culprit.

The "worker.c" part doesn't apply, since you're using the prefork MPM.

I edited my actual settings into the first post; the settings are:

StartServers 3
MinSpareServers 3
MaxSpareServers 6
ServerLimit 30
MaxClients 30
MaxRequestsPerChild 2000

The considerable increase in load comes from customers logging in to ask for an update when a new update is ready.

What about MaxRequestsPerChild ?

@sblantipodi:

MaxRequestsPerChild ?

MaxRequestsPerChild usually doesn't matter unless there's a memory leak in PHP itself. PHP used to have annoying memory leaks in the past. But nowadays, you can usually set MaxRequestsPerChild to any reasonably high value (1000+) or even disable it (0) without any ill effect.

@sblantipodi:

The considerable increase in load comes from customers logging in to ask for an update when a new update is ready.

Can you describe the "update" in some detail? How is it generated? PHP script accessing a database? Does it involve any image processing? What kind of symptoms does it cause when you have high load? Does the server become slower or completely inaccessible?

You said you had Cacti running on your server. What does the RAM, CPU, disk I/O, and load average look like when the server load increases? You can upload images to sites like imgur and post links here. A list of processes (the "top" command) would also help.

@hybinet:

Can you describe the "update" in some detail? How is it generated? (…) What does the RAM, CPU, disk I/O, and load average look like when the server load increases? (…)

We sell mobile software. When an updated version of the software is released, our Linode sends thousands of emails informing customers about the news.

The server completes the mail-sending task in about 15 minutes.

During this heavy workload the server stays accessible without problems, with only a slight slowdown.

Generally, once customers receive the email, they contact our server to download the updated version (just a 1 MB download), and then a PHP/Java script runs to check the license.

If I set MaxClients to 20, then when I restart Apache after an upgrade, Apache informs me that it received more than 20 requests, but the server remains stable.

Honestly, I don't know what a server crash is; since I've been on Linode I've never experienced one. Knock on wood as english people says.

I can tell that the requests are lightweight, since I've handled 30 clients at the same time without any server issues, just a little slowdown but nothing to worry about.

This linode rocks.

I am trying to understand why MaxClients should be set as low as you suggest on such a powerful toy as a Linode 512.

I know some people run more "resource intensive" scripts, but I think it's better to work on the scripts instead of lowering that parameter too much.

The point is that I cannot understand why Linux users generally suggest lowering that parameter so much.

Some years ago people worked magic with 128 MB of RAM or less; why can't we manage more than 15 clients with 512 MB of RAM?

Obviously every case is different and I can't speak for everyone.

What are your keepalive settings? If the timeout is too high, that can allow a user to hold a connection longer than they should. Some here recommend turning it off completely; I tend more towards setting the timeout to 1 or 2 seconds. The default is something like 15 seconds, which is too high.

@glg:

What are your keepalive settings? If the timeout is too high, that can allow a user to hold a connection longer than they should. (…)

I have 20 seconds. I need it so that all mobile phones can work well with my server; unfortunately, not every country has a good mobile network, America included.

@sblantipodi:

I have 20 seconds. I need it so that all mobile phones can work well with my server (…)

Keepalive is not a connection timeout, it's a timeout for how long a client can send additional requests on the same connection. You'll be better off turning that down or even off.

@glg:

Keepalive is not a connection timeout, it's a timeout for how long a client can send additional requests on the same connection. You'll be better off turning that down or even off.

Thanks for the suggestion; may I ask the reason behind it?

I would like to understand the thinking behind this tip.

Thanks.

@sblantipodi:

I am trying to understand why MaxClients should be set as low as you suggest on such a powerful toy as a Linode 512.

I know some people run more "resource intensive" scripts, but I think it's better to work on the scripts instead of lowering that parameter too much.

The way Apache and PHP are typically deployed together is rather unusual. Instead of having a separate set of PHP interpreters to handle requests that need it, the PHP interpreter is embedded into the web server itself (as mod_php). This makes installation quite a bit easier, but there are two very big downsides.

First, PHP does not handle multithreading very well. This means that Apache needs to have a separate process for each request, instead of just being able to instantiate a thread. This is heavy, and means that the number of simultaneous requests must be set lower than you would with other setups.

Secondly, because the nature of the request is not known until after it is accepted, every process must be prepared for anything. This means, at a minimum, a PHP interpreter, along with any libraries that get loaded over its lifetime. This makes things quite heavy, especially when frameworks or heavy applications are involved. If you have, say, Drupal and WordPress, you get twice the whammy, since it doesn't unload everything between requests.

The "stereotypical" Apache+PHP problem is running out of memory because the default MaxClients is 150. Traffic gets heavier than usual for a moment, the server starts swapping, requests take longer to process, and Apache reacts to this by spawning more processes. MaxClients is a safety valve, and setting it very low will immediately stop the bleeding. You can increase it, of course, as your situation allows.

> The point is that I cannot understand why Linux users generally suggest lowering that parameter so much. Some years ago people worked magic with 128 MB of RAM or less; why can't we manage more than 15 clients with 512 MB of RAM?

The applications we run have become larger over time, and since the "default" is to integrate PHP into Apache, this has had a direct effect on the amount of RAM required per simultaneous connection. We also have more objects on each page load – I just counted 24 on one of the sites $EMPLOYER has, ranging from jQuery to video thumbnails to stylesheets to ads. So, we have more RAM, but we've found new, innovative ways to use it.

Now, a really good question for the history department: why did we go to mod_php in the first place? In The Beginning, when computers were physically large, relatively rare, and slow, we did dynamic content by configuring the web server to spawn a process and run a script. At the end of the request, the script would terminate and, ta-da, everything it printed would be returned to the user. This was fine from the web server's standpoint, but… well, it's slow, even on today's equipment. I timed it, and it took 6.8 seconds to handle a relatively simple view of the above-mentioned site on my workstation. Sure, it only took 0.9 seconds the second time (hooray for caching), but it only takes 350 ms to do this same request against the production web server, and at least 42 ms of that is network delay.

So, the trend was to stuff interpreters into the web server. This was a pretty clever idea, since it doesn't involve any operational changes: there's no additional daemons to run, and the web server can still do what it always did, except instead of spawning /usr/bin/php when it sees a .php file, it can just pass it off to its built-in PHP interpreter. Downside is that it now has a built-in PHP interpreter, which it has to carry around like a millstone when handling any request, no matter how trivial.

Today, of course, the way to handle boatloads of traffic is to take a little bit from both approaches. With something like FastCGI, the web server does not have a built-in PHP interpreter; instead, when it encounters a .php file, it proxies the request to another server, which does have a built-in PHP interpreter. In your situation, you wouldn't have a bulky PHP interpreter sitting around idle while someone's smartphone downloads a 1 MB file over SlothWireless's ⅓G network, or while a browser keeps an idle network connection open in case the user requests another page (this is what a keepalive is, basically).

Somewhat like zombo.com, you can do anything with 512 MB of RAM, anything at all. The only limit is the resources required per request.

It makes sense, but I need that idle time because setting up a connection from a cell phone takes more time than transferring 200 KB over GPRS.

Opening a new connection on every request isn't good in this case.

If you stay with Apache+mpm-prefork+mod_php for handling all HTTP requests, you will need to balance the performance benefits of persistent connections vs. the ability to handle more requests per second. There's no right answer.

OK, I think I'm good with 30 max clients.

I've never had a problem this way; I'm just looking for better tweaking.

Things will probably also improve when I boot the latest 3.0 kernel, since I'm currently on 2.6.39.1, which has some memory problems with 64-bit.

With the extra RAM, you might be able to bump it up to 31 or maybe even 32. :-)

@sblantipodi:

@glg:

Keepalive is not a connection timeout, it's a timeout for how long a client can send additional requests on the same connection. You'll be better off turning that down or even off.

Thanks for the suggestion; may I ask the reason behind it? I would like to understand the thinking behind this tip.

Lowering or disabling KeepAlive in Apache is very important when you also have a low MaxClients setting. Here's why:

When you have a high MaxClients setting, your server tries to process a lot of clients at the same time. This causes a load spike, because too many things are happening at the same time. As a result, all of the clients experience a serious slowdown. Imagine a chaotic market where everyone tries to buy the same thing at the same time. The stampede would crush the seller, and only a few people would get what they wanted. Not good!

On the other hand, when you have a low MaxClients setting, your server tries to process a few clients at a time, and tells other clients to wait in line like good ol' Japanese gents until it's their turn. There is no load spike on the server, so each client gets served very quickly, and the line also moves very quickly. In fact, you are able to serve even more clients per unit time this way, because the whole process is so orderly and the server is humming along at a more fuel-efficient RPM.

But the success of the "please wait in line" approach depends on how quickly you can serve each client. What happens when a customer at a grocery store holds up the line by fumbling with five different credit cards all of which went over the limit? The entire line behind him must wait longer. This is exactly what happens with KeepAlive. A client who opens a persistent connection holds up the line for 20-30 seconds just in case he might need to send another request. Now, if everyone does this, the system becomes extremely inefficient. Therefore, when you have a low MaxClients setting, you must also have a low KeepAlive setting.

But don't despair, there's still hope.

If your application really needs long-lasting connections, consider putting nginx in front of Apache as a reverse proxy. nginx is a lightweight web server that was specifically designed to handle tens of thousands of connections using only a tiny amount of server resources. Give nginx a generous KeepAlive setting, let it handle all the client connections, and disable KeepAlive on the Apache side. The connection between nginx and Apache is local, so the lack of KeepAlive doesn't matter there. In fact, this is exactly how many of your "magicians" manage to pump out an insane amount of hits on very small servers.
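A minimal sketch of that arrangement, assuming Apache is moved to listen on 127.0.0.1:8080 (the port, server name, and headers below are illustrative assumptions, not settings from this thread):

server {
    listen 80;
    server_name example.com;                # placeholder

    keepalive_timeout 20;                   # generous keepalive for slow mobile clients

    location / {
        proxy_pass http://127.0.0.1:8080;   # local hop to Apache; no keepalive needed here
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}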

The problem is that if I lower the keepalive, nobody will actually get any use out of it, since most mobile phones need more than 2 or 3 seconds to process a single request, let alone issue a second one.

I don't think this suggestion works in this case.

KeepaliveTimeout just limits the length of time a connection may be held open after a request is finished. It has no effect on a client making a single request. The limit to how long an active request may take is different, and is usually on the order of minutes.

This might be useful:

http://httpd.apache.org/docs/2.0/mod/core.html#keepalivetimeout

If someone is done making the request, you want Apache to free up those resources as quickly as possible.
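For reference, the directives under discussion live in httpd.conf and look like this; the 2-second timeout is just an example of the kind of low value being recommended, not a number the thread agrees on:

KeepAlive On               # allow clients to reuse a connection for further requests
MaxKeepAliveRequests 100   # cap requests per connection
KeepAliveTimeout 2         # how long an idle connection may hold a prefork child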

I noticed that if I set the keepalive parameter to 15/20, a cell phone can make a second request in an instant, without needing to create a second socket.

If I set this parameter to zero, it needs to open another socket to make a second request.

How do you explain this?

@sblantipodi:

I noticed that if I set the keepalive parameter to 15/20, a cell phone can make a second request in an instant, without needing to create a second socket. (…) How do you explain this?

Do you mean connection instead of socket?

> The number of seconds Apache will wait for a subsequent request before closing the connection. Once a request has been received, the timeout value specified by the Timeout directive applies.

If you mean connection, yes, that's how keepalive works: it keeps that connection persistent. However, once that phone is done, your server then has to wait 15-20 seconds before it can free up the connection and the resources that phone has been using. You either need to reduce your KeepAliveTimeout and/or MaxClients, switch to a different web server (nginx or lighttpd) or a different Apache/PHP configuration, or get a bigger server.

> Setting KeepAliveTimeout to a high value may cause performance problems in heavily loaded servers. The higher the timeout, the more server processes will be kept occupied waiting on connections with idle clients.

Aren't you the guy who's saying you run 64-bit on a 512, because "it's cool" and you "don't need the RAM"?

It really does sound like you do need the RAM. :)

@waldo:

If you mean connection, yes, that's how keepalive works: it keeps that connection persistent. However, once that phone is done, your server then has to wait 15-20 seconds before it can free up the connection and the resources that phone has been using. (…)

This shouldn't be the case if the phone is really done, since it will close the connection and immediately free it up. What the server-side timeout does is guard against clients that connect, keep the session persistent, but then never actually make any further requests in that time frame. It also protects against persistent sessions that do not close properly (client turned off, packets lost, etc.).

But for clients that are actually going to make more than one request, permitting them to re-use the existing connection is much more efficient, so I wouldn't set the timeout too low and block that.

So it's not that every connection requires the full keepalive timeout before it can be reused; the timeout just sets an upper limit. Of course it's still a trade-off, since some fraction of connections may incur that "wasted" time, so you need to balance it against your resources. Or, as already suggested elsewhere, use a separate front-end daemon like nginx, which has lower per-connection overhead, to help manage the uncertainty there.

-- David

I'm good for now; I'll see how things go in the future.

Thanks for the answers :)

@hybinet:

What happens when a customer at a grocery store holds up the line by fumbling with five different credit cards all of which went over the limit?

Your market analogy is excellent! Well put.

@glg:

Your market analogy is excellent! Well put.

Thanks, I think I've been hanging out in r/ELI5 a little too much lately.

@glg:

@hybinet:

What happens when a customer at a grocery store holds up the line by fumbling with five different credit cards all of which went over the limit?

Your market analogy is excellent! Well put.

I can't see the problem.

If a customer has 5 credit cards over the limit, it will tie up one "client", but there are 29 other cash desks.

@sblantipodi:

I can't see the problem. If a customer has 5 credit cards over the limit, it will tie up one "client", but there are 29 other cash desks.

His example was only one line. Yes, you have 30 lines (MaxClients), but also potentially hundreds waiting in line. If you have 20-25 of those lines fumbling with their credit cards or worse "can I write a check?" (Keepalive timeout too high), then suddenly only a handful of your lines are moving.

@glg:

His example was only one line. Yes, you have 30 lines (MaxClients), but also potentially hundreds waiting in line. (…)

be real… :)

@sblantipodi:

be real… :)

Every analogy breaks down at some point…

In real life, it's unlikely that every customer in every cashier will try to use an expired credit card or offer to write a check. But in computing, if you allow something to happen, it will happen sooner or later. Especially if the reason you're allowing it in the first place is to accommodate clients who actually need it badly.

How long does it take for a mobile client to open a connection, make the first request, receive the first response, process it, make the second request, receive the second response, and finally close the connection? Let's be generous and say 20 seconds. If so, each and every client is holding up a line for 20 seconds, regardless of how long it actually takes for the server to process their requests. Every single client walks up to the cashier, puts down a bunch of stuff, realizes that it forgot the milk, and tells the cashier to wait while they get milk! Unfortunately, Apache with mpm_prefork isn't smart enough to let another client through while the first client is getting milk. That's what nginx is for.

If your setup works fine with MaxClients 30, it's only because there are never more than 30 clients trying to connect in any 20-second interval. If you sell enough apps to get 31 clients in a 20-second interval, the 31st client will have to wait 20 seconds before it can even make the first request, because all of the 30 lines are being held up by milk-forgetters. Sooner or later, you'll end up with a client that needs to wait 40 seconds. But not many clients will wait 40 seconds. They'll just timeout, making it look like your site is down.

This doesn't need to be fixed right away, but it's worth remembering if you expect more clients in the future. Switching to nginx often gives you an incredible speed boost, simply because nginx manages client connections much more efficiently than Apache's old-fashioned mpm_prefork. nginx is very smart. If a client so much as fumbles with one credit card, nginx will process a couple of other clients in the meantime.

KeepAlive is safe to use with nginx, but not with Apache.

Edit: remember the milk reference.

@hybinet:

If your setup works fine with MaxClients 30, it's only because there are never more than 30 clients trying to connect in any 20-second interval. (…) KeepAlive is safe to use with nginx, but not with Apache.

I really like the answer, thanks for it.

I don't understand why a huge, good old piece of software like Apache has no support for a smarter "manager" like the small nginx has.

The solution is: "switch to nginx".

The answer is: "I don't have time for nginx until I need it".

But thanks, now I know where to look if I ever need it.

For now I have never seen Apache complain about going over 30 clients; if Apache ever complains about it, I will switch to nginx.

For now it doesn't make sense to lower the keepalive, because doing so would create problems for many users just on the thought that one day ONE user might have to wait 40 seconds.

For now it works, and as the wise say, don't fix it if it isn't broken :D

Thanks to all guys ;)

@sblantipodi:

I don't understand why a huge, good old piece of software like Apache has no support for a smarter "manager" like the small nginx has.

In fact, recent versions of Apache support several much better process managers, such as mpm_worker and mpm_event. The problem is PHP, because mod_php forces you to use the inefficient and outdated mpm_prefork. PHP was developed in the heyday of mpm_prefork and never got beyond it. This causes Apache to behave like a 10-year-old piece of junk. In fact, Apache without PHP can be as fast as any other modern web server. Lots of people use Apache with Django, Rails, or Tomcat with excellent results.

There are ways to deploy PHP with mpm_worker, but this involves FastCGI (FPM). For historical reasons, Apache has two competing FastCGI modules (mod_fastcgi and mod_fcgid), neither of which gets it quite right, and both of which are a pain in the ass to configure. Newer web servers such as nginx and lighttpd, by contrast, come with much better FastCGI support by default. As a result, people who need FastCGI flock to nginx, and PHP deployments tend to polarize with Apache+mpm_prefork+mod_php on the one side (for low loads) and nginx+FPM on the other (for high loads).
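For reference (not posted in the thread), the nginx+FPM pairing usually comes down to a handler block along these lines; the socket path depends on how php-fpm is configured and is an assumption here:

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;   # wherever php-fpm is listening
}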

@hybinet:

There are ways to deploy PHP with mpm_worker, but this involves FastCGI (FPM). For historical reasons, Apache has two competing FastCGI modules (mod_fastcgi and mod_fcgid), neither of which gets it quite right, and both of which are a pain in the ass to configure.

There are plenty of us running worker/fastcgi on apache. I know you're an nginx groupie, but you don't need to overstate your case.

@glg:

There are plenty of us running worker/fastcgi on apache. I know you're an nginx groupie, but you don't need to overstate your case.

9697 ?        Ss     0:13 /usr/sbin/apache2 -k start
27562 ?        S      0:00  \_ /usr/sbin/apache2 -k start
27563 ?        S      0:05  \_ /usr/sbin/fcgi-pm -k start
27870 ?        Ss     0:00  |   \_ /usr/bin/php-cgi
18314 ?        S      0:02  |       \_ /usr/bin/php-cgi
 <snip 22 more>
18671 ?        S      0:00  |       \_ /usr/bin/php-cgi
27810 ?        Sl     0:43  \_ /usr/sbin/apache2 -k start
27811 ?        Sl     0:42  \_ /usr/sbin/apache2 -k start
 1482 ?        Sl     0:36  \_ /usr/sbin/apache2 -k start
13:36:18 up 384 days,  7:22,  3 users,  load average: 0.00, 0.00, 0.00
-/+ buffers/cache:        198        300

Average of 25 pagehits/sec, peak at ~200 pagehits/sec. Of those, average 15 php requests/sec (custom code, definitely not as optimized as I'd like it), with peak at the limit of 24. Most of them ajax backend calls. 64 MB APC cache. mysql on the same host.

CPU on average - 8%, on peak 20% (out of 400%).

mpm_worker + mod_fastcgi - it works, nginxers!

@glg:

There are plenty of us running worker/fastcgi on apache. I know you're an nginx groupie, but you don't need to overstate your case.

Sure, you can get it to work if you know what you're doing. But last time I checked, most tutorials for setting up worker with fastcgi involved writing wrapper scripts, adding a bunch of pseudo-XML to Apache configuration files, etc. etc. One mistake and you ended up with 5 times as many children as you intended, because each Apache process spawned its own. It was also not easy to restart Apache and fastcgi separately when you wanted to tweak php.ini, because Apache insisted on managing everything. In contrast, with nginx and lighttpd, you just threw in spawn-fcgi, copied a standard init script, and you were done.

But that was before the days of FPM, so maybe I missed out on a few improvements. Sorry if I misrepresented the current state of worker/fastcgi. (FPM also made life even easier on the nginx side. In Ubuntu 10.10 or later, it's just one command: apt-get install nginx php5-fpm)

Still, you're right, I'm an nginx fanboi. I just love the configuration syntax 8)

@hybinet:

Sure, you can get it to work if you know what you're doing.

I got it working fine on the first try converting from prefork/mod_php. No wrappers, no xml crap.

Precisely four lines of basic config (one pretty long, I admit, six parameters), two of which are necessary also for mod_php, plus another six of typical allow/deny stuff securing it up. I'm not using FPM.

There are no wrapper scripts. Not sure what you mean by "xml crap", unless you mean the very same pseudo-HTMLish syntax of all Apache conf files. Yes, if you try to "make it work like with nginx" with externally spawned PHP, you're going to need some insane hacks. But it's possible, at least with mod_fastcgi. It can connect to an external socket.

But mod_fastcgi spawns PHP itself. Reloading php.ini is easy - you just apache2ctl reload, and mod_fastcgi restarts its slave PHP tree. Almost like with mod_php.

"One mistake and you ended up with 5 times as many children as you intended, because each Apache process spawned its own" is total bull.

With mpm-worker, you get

a) An Apache master process

b) One or more worker host processes, running worker threads.

Adding mod_fastcgi to the mix adds

c) An fcgi-pm (process manager), which spawns all your FCGI handlers.

In the case of PHP, it spawns the PHP master listener, which in turn spawns $PHP_FCGI_CHILDREN workers.

You can see that in the snippet of ps axf that I posted above.

There's literally no way to get too many of those, unless you mix up the PHP_FCGI_CHILDREN setting with mod_fcgid, which assumes "one subprocess = one handler" and spawns a new PHP master for each parallel request up to the limit. You can use fcgid, but then don't use the children setting. That will limit your ability to use APC, but an external memcache could be used.
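This is not the poster's actual configuration, but a generic sketch of the shape an mpm_worker + mod_fastcgi setup often takes; the paths, handler name, and child count below are assumptions (php-cgi is assumed to be reachable under the cgi-bin directory, e.g. via a symlink):

<IfModule mod_fastcgi.c>
    # mod_fastcgi's process manager (fcgi-pm) spawns the PHP master,
    # which forks PHP_FCGI_CHILDREN workers
    FastCgiServer /usr/lib/cgi-bin/php-cgi -idle-timeout 60 -initial-env PHP_FCGI_CHILDREN=24

    AddHandler php-fcgi .php
    Action php-fcgi /fcgi-bin/php-cgi
    Alias /fcgi-bin/ /usr/lib/cgi-bin/

    <Directory /usr/lib/cgi-bin>
        Options +ExecCGI
        Order allow,deny
        Allow from all
    </Directory>
</IfModule>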

@sblantipodi:

Knock on wood as english people says

lmao - priceless :) Thank you hehehehe
