High APC Fragmentation, Apache Segmentation Faults
So, I'm having some huge issues. I've been trying to tune my server's APC and Apache settings since the 1 GB memory upgrade and I'm not having any luck. My server runs about 15 WordPress sites, all running W3 Total Cache. All the sites are very low traffic (the busiest gets about 200 visitors a day; the rest are more like 40-50).
I keep getting [notice] child pid xxxxx exit signal Segmentation fault (11) errors, and my APC fragmentation is super high (right now at 44%) with a 99% hit rate. I changed the TTL to 0 and fragmentation skyrocketed to 90%, so I put it back to 7200. I also occasionally get "server reached MaxClients setting, consider raising the MaxClients setting" errors, so I tried raising that as well.
ANY help/recommendations/advice to help me tune this properly would be great. I was doing fine on my 512 MB plan until I started messing around, and I don't know enough to get this right.
Thanks so much guys!
----
Here are all my relevant settings.
Running:
Linode 1024
Debian 6, 32-bit
Apache/2.2.16
PHP 5.3.3-7+squeeze15
My current APC settings:
extension = apc.so
apc.enabled=1
apc.shm_segments=1
apc.shm_size=768M
apc.num_files_hint=4096
apc.user_entries_hint=4096
apc.ttl=7200
apc.use_request_time=1
apc.user_ttl=7200
apc.gc_ttl=0
apc.cache_by_default=1
apc.filters = ";"
;apc.mmap_file_mask=/apc.shm.XXXXXX
apc.file_update_protection=2
apc.enable_cli=0
apc.max_file_size=1M
apc.stat=1
apc.stat_ctime=0
apc.canonicalize=0
apc.write_lock=1
apc.report_autofilter=0
apc.slam_defense=0
apc.optimization = 0
             total       used       free     shared    buffers     cached
Mem:          1000        953         47          0         44        613
-/+ buffers/cache:        294        705
Swap:          255        140        115
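A quick sanity check on the free -m numbers above: the "-/+ buffers/cache" line is the one that matters, since buffers and page cache can be reclaimed by applications when needed. A minimal sketch of the arithmetic, using the figures from the Mem row above:

```shell
# Memory genuinely available to applications = free + buffers + cached,
# taken from the Mem: row above (all values in MB).
echo $((47 + 44 + 613))
```

That lands within a megabyte of the 705 reported on the "-/+ buffers/cache" line (free -m rounds each column independently).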
LockFile ${APACHE_LOCK_DIR}/accept.lock
PidFile ${APACHE_PID_FILE}
LimitInternalRecursion 20
TimeOut 10
KeepAlive on
MaxKeepAliveRequests 100
KeepAliveTimeout 3
# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves
MinSpareServers 3
MaxSpareServers 6
ServerLimit 40
MaxClients 40
MaxRequestsPerChild 3000
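As a rough rule of thumb for sizing MaxClients on a box this size: divide the RAM you can spare for Apache by the average per-child footprint. A minimal sketch, where both figures are placeholder assumptions (measure your own with free -m and ps):

```shell
# Placeholder figures: ~400 MB left for Apache after MySQL and kernel
# overhead, ~50 MB average resident size per prefork child. Both are
# assumptions; substitute measurements from your own box.
avail_mb=400
per_child_mb=50
echo $((avail_mb / per_child_mb))   # suggested MaxClients ceiling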
CacheDirLength 2
AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/xml
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/x-javascript
DeflateCompressionLevel 9
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
DeflateFilterNote Input instream
DeflateFilterNote Output outstream
DeflateFilterNote Ratio ratio
LogFormat '"%r" %{outstream}n/%{instream}n (%{ratio}n%%)' deflate
----
Loaded Modules:
core_module (static)
log_config_module (static)
logio_module (static)
mpm_prefork_module (static)
http_module (static)
so_module (static)
alias_module (shared)
auth_basic_module (shared)
authn_file_module (shared)
authz_default_module (shared)
authz_groupfile_module (shared)
authz_host_module (shared)
authz_user_module (shared)
autoindex_module (shared)
cache_module (shared)
cgi_module (shared)
cloudflare_module (shared)
deflate_module (shared)
dir_module (shared)
disk_cache_module (shared)
env_module (shared)
expires_module (shared)
headers_module (shared)
mime_module (shared)
negotiation_module (shared)
php5_module (shared)
reqtimeout_module (shared)
rewrite_module (shared)
setenvif_module (shared)
ssl_module (shared)
status_module (shared)
17 Replies
Longview might give you information about what causes the segfaults. Are you sure it's not going OOM?
And yes, I have MySQL in there.
I have Longview installed, but I'm really having a hell of a time sorting out how to read it properly. I do know my Apache workers are using large chunks of RAM per process (between 40 and 150+ MB, depending), so going out of memory is very possible.
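One way to pin down the per-process figure: average the resident set size across all Apache children. A minimal sketch, assuming the processes are named apache2 as on Debian:

```shell
# `ps -o rss=` prints each apache2 process's resident memory in KB with
# no header line; awk averages and converts to MB. The process name
# "apache2" is the Debian default.
ps -C apache2 -o rss= \
  | awk '{ sum += $1; n++ } END { if (n) printf "%.0f MB average across %d processes\n", sum / n / 1024, n }'
```

Run it a few times under load; prefork children grow over time, which is also why MaxRequestsPerChild matters.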
You could also switch to nginx to get rid of Apache's huge memory usage, or to mpm_worker, which I recall is also better, though I'm not sure on that. However, I believe there are some security issues with running multiple websites and PHP under mpm_worker.
As for nginx, I used it for a year on a different server and got frustrated always hunting for workarounds to get certain site features working. Plus I use Webmin, which at the time didn't support nginx very well.
It's certainly worth considering, but in the meantime I'd really like to work with what I've got.
The mpm_worker idea is worth looking into as well, but I know nothing about it.
A solution for this could be PHP-FPM, where you can create separate PHP pools. Though I'm not sure if this will fix your memory issues.
I don't know how Webmin works with nginx, but I run WordPress on nginx and it works fine.
my free -m reading was this just now:
             total       used       free     shared    buffers     cached
Mem:          1000        973         26          0         43        652
-/+ buffers/cache:        277        723
Swap:          255         10        245
Also, depending on your needs, WordPress MU might be a better fit. There will only be a single copy of the PHP files, which will cut WAY back on APC memory usage.
Also, you may want to consider updating to the latest PHP using the Dotdeb repos and/or switching to nginx/PHP-FPM (again, you can get the latest versions via Dotdeb).
More info about Dotdeb here:
WP-MU only helps reduce APC usage on the global/opcode cache side, but it's a worthwhile thought.
You should also look at setting apc.stat=0, provided these are the only PHP sites on the box, you have W3 Total Cache set up with APC on all sites, and you're running the latest version of W3TC (it clears the APC cache when you update plugins, which is needed with apc.stat=0).
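If you do flip it, keep in mind that with apc.stat=0 APC never re-checks file modification times, so Apache needs a restart after any manual code change. A minimal sketch, assuming the usual Debian squeeze location for the APC ini file (that path is an assumption; adjust to wherever yours lives):

```shell
# /etc/php5/conf.d/apc.ini is the typical Debian php-apc config path;
# this path is an assumption for your setup. Flip the setting, then
# restart Apache so mod_php picks it up.
sed -i 's/^apc.stat=1/apc.stat=0/' /etc/php5/conf.d/apc.ini
/etc/init.d/apache2 restart
```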
I'll update with results.
This sucks because I kinda want to have it running.
Not sure if it helps, but this is a post from 6 months ago. If you're still running an old version of mod_cloudflare, a newer one should fix it. See here:
Please let us know
No more segfaults, mod_cloudflare is working, and I haven't had a server error, but my APC fragmentation is at 100% now, which can't be good.