Some tuning hints, please: NGINX, PHP-FastCGI, MySQL
NGINX –
/usr/local/nginx/conf/nginx.conf
user www-data www-data;
worker_processes 4;
events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] '
    #                '"$request" $status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    sendfile on;
    tcp_nopush on;
    tcp_nodelay off;
    keepalive_timeout 5;

    gzip on;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    include /usr/local/nginx/sites-enabled/*;
}
In /usr/local/nginx/sites-enabled/, I have several files that correspond to my main domain and subdomains. None of those included files contain settings that should affect performance.
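For completeness, each of those files is just an ordinary server block along these lines (the domain, paths, and FastCGI port here are made-up placeholders, not my actual config):

server {
    listen 80;
    server_name example.com;
    root /var/www/example.com;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
    }
}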
PHP (using PHP-FastCGI)
/etc/default/php-fastcgi
## Snipped irrelevant info
PHP_FCGI_CHILDREN=5
PHP_FCGI_MAX_REQUESTS=4000
The php.ini has default values for the most part, except that I raised memory_limit from 16M to 64M.
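For reference, that's just this one line changed (the php.ini path varies by distro, so where exactly it lives is up to your setup):

; default is 16M; raised so heavier scripts don't die mid-request
memory_limit = 64M

Worth noting for the memory math later: with PHP_FCGI_CHILDREN=5, the worst case is 5 x 64M = ~320M for PHP alone, on top of XCache's 96M shared cache.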
I also have XCache set up with the following parameters:
[xcache-common]
zend_extension = /usr/lib/php5/20060613+lfs/xcache.so
[xcache.admin]
xcache.admin.enable_auth = On
xcache.admin.user = "snipped"
xcache.admin.pass = "snipped"
[xcache]
xcache.shm_scheme = "mmap"
xcache.size = 96M
xcache.count = 4
xcache.slots = 8K
xcache.ttl = 0
xcache.gc_interval = 0
xcache.var_size = 4M
xcache.var_count = 4
xcache.var_slots = 8K
xcache.var_ttl = 0
xcache.var_maxttl = 0
xcache.var_gc_interval = 300
xcache.test = Off
xcache.readonly_protection = Off
xcache.mmap_path = "/dev/zero"
xcache.coredump_directory = ""
xcache.cacher = On
xcache.stat = On
xcache.optimizer = Off
[xcache.coverager]
xcache.coverager = Off
xcache.coveragedump_directory = ""
XCache has no OOMs, and seems to be working fine.
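A quick sanity check that the extension really is loaded for the FastCGI binary (php-cgi as the binary name is an assumption; adjust for your setup, since the CLI and CGI binaries can read different ini files):

php-cgi -m | grep -i xcache    # should print XCache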
Now for MySQL:
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
skip-external-locking
bind-address = 127.0.0.1
key_buffer = 600M
max_allowed_packet = 16M
thread_stack = 128K
thread_cache_size = 8
myisam-recover = BACKUP
max_connections = 40
table_cache = 2000
#thread_concurrency = 10
query_cache_limit = 1M
query_cache_size = 256M
log = /var/log/mysql/mysql.log
#log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 5
#log-queries-not-using-indexes
expire_logs_days = 10
max_binlog_size = 100M
skip-federated
skip-name-resolve
join_buffer_size=512K
max_heap_table_size=180M
tmp_table_size=180M
open_files_limit=3000
query_cache_min_res_unit=1K
innodb_buffer_pool_size=40M
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[mysql]
#no-auto-rehash # faster start of mysql but no tab completion
[isamchk]
key_buffer = 16M
!includedir /etc/mysql/conf.d/
My MyISAM index space comes to a total of 575MB at the moment, so 600MB should be fine for the key_buffer size.
My InnoDB tables come to about 38MB, hence the 40MB innodb_buffer_pool_size.
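Rather than eyeballing those sizes, they can be pulled straight out of information_schema (present since MySQL 5.0):

SELECT ENGINE,
       ROUND(SUM(DATA_LENGTH)/1024/1024)  AS data_mb,
       ROUND(SUM(INDEX_LENGTH)/1024/1024) AS index_mb
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql')
GROUP BY ENGINE;

For MyISAM the key buffer only needs to cover index_mb; for InnoDB the buffer pool ideally covers data_mb + index_mb.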
The scripts I've been using to help me tweak tell me I should raise max_heap_table_size and tmp_table_size, but I don't think I have the RAM for that. When I was on a Linode 720, those scripts reported both values as fine, but now that I've upped to a 1440, it seems they're NOT fine. XD Go figure, right? XP
Anyways, I'm apparently having a big problem with my IO rate: my average hovers around 3-4K, and the graph spikes upwards of 20K. (Yikes indeed.) Some of my members are complaining about slow page loads and 504 Timeout errors. I'm guessing MySQL is probably my culprit here, but I figured I'd ask you guys for advice.
The scripts I've been using to help me (mysqltuner.pl and tuning-primer.sh) both report that I'm hovering at about 1.0 to 1.2GB of maximum MySQL memory usage, and from what I've read, I shouldn't go beyond 80-90% of my available memory for that.
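For what it's worth, the back-of-the-envelope those scripts do is global buffers plus max_connections times the per-thread buffers. Plugging in my config (the per-thread buffers I haven't set are assumed to be at their MySQL 5.0-era defaults):

global:     key_buffer 600M + query_cache_size 256M + innodb_buffer_pool_size 40M = ~896M
per thread: sort_buffer ~2M + read_buffer ~128K + read_rnd_buffer ~256K
            + join_buffer 512K + thread_stack 128K                                = ~3M
total:      896M + (40 connections x ~3M)                                         = ~1016M

which lines up with the 1.0-1.2GB they report.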
Now, however MySQL was configured back on my 720, it was working fine, though occasionally slow; when I moved to the 1440, I decided to take advantage of the doubled memory and started changing stuff. Because of that, I'm pretty sure it's not that I need more memory, just that I need to tweak it right. XP
During the periods of slow response, top does report "high" load averages ("high" being 2 to 6, occasionally higher); my average load is usually around 1, even lower during off-peak hours.
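To tell whether that load is CPU or disk-wait, vmstat next to top during a slow spell helps (vmstat ships with procps, so nothing extra to install):

vmstat 5 6    # six samples, five seconds apart

A high wa column with low us/sy means the box is waiting on disk, which would match the IO graph.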
I tried to see whether any obvious queries were slowing MySQL down by enabling the slow query log, but that didn't help much; most of it was queries like
SELECT someid FROM sometable WHERE someotherid=X LIMIT 1
Those queries would often take 30-50 seconds! Obviously, a straightforward query like that on an optimized table, using an index in the WHERE clause, would normally not take that long, even considering my post table has almost 4 million records.
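For a query that simple, the usual way to confirm the index is actually being used is EXPLAIN (same anonymized names as above; X stands in for a real id):

EXPLAIN SELECT someid FROM sometable WHERE someotherid=X LIMIT 1;
SHOW INDEX FROM sometable;

If key comes back NULL or the rows estimate is huge, the index isn't being used. If the plan looks fine, the time is probably spent waiting rather than reading: MyISAM takes table-level locks, so one slow write on a busy 4-million-row table queues every SELECT behind it.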
For the tl;dr of it:
What could MySQL be hung up on, if it is indeed MySQL?
Why does it want higher max_heap_table_size and tmp_table_size than it did before, and would raising those values hurt my performance in other areas?
If MySQL's maximum memory allocation is configured to more than 80% of the system RAM, would it start killing the server?
Is this indeed where my site's slow page loads and IO rate spikes are coming from?
Sorry for the novel-length post; I just wanted to provide as much info as possible.
2 Replies
> Be cautious about sizing the query cache excessively large, which increases the overhead required to maintain the cache, possibly beyond the benefit of enabling it. Sizes in tens of megabytes are usually beneficial. Sizes in the hundreds of megabytes might not be.
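Whether a 256M cache is earning its keep shows up in the standard status counters:

SHOW GLOBAL STATUS LIKE 'Qcache%';

Lots of Qcache_free_memory alongside modest Qcache_hits means it's oversized; frequent Qcache_lowmem_prunes would suggest the opposite.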
More generally, setting things higher is not always better.
In any event, you should check and see whether your machine actually is running out of memory or not.
@Emufarmers:
> In any event, you should check and see whether your machine actually is running out of memory or not.
Ditto. If you're on a 1440 plan and MySQL alone is using 1.0-1.2G, you haven't got much RAM left for anything else. Not all RAM is there for you to use up; Linux needs to have a fair amount of free RAM to use as buffers and cache.
80% of RAM for MySQL is for machines that run nothing but MySQL. On a web+database server, 50-60% would be the upper limit. Especially as XCache needs quite a bit of RAM, too.
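On a 1440 that works out to roughly 720-860MB for MySQL (1440MB x 0.5-0.6), versus the 1.0-1.2GB your tuning scripts say it can currently claim.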
How much swap are you using? Try free -m
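The line to read in free -m is "-/+ buffers/cache"; the free column there is what's genuinely available, since the raw free figure on Linux is always low. For swap specifically:

free -m       # look at the free column on the -/+ buffers/cache line
swapon -s     # per-device swap usage

If swap use grows during the slow periods, that by itself would explain the IO spikes and the 504s.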