Disk I/O Rate
I see that the default alert is set at 300, but judging from the graph I'm averaging closer to 500. Here's a snapshot of free, which is pretty typical of what I've been seeing:
                     total       used       free     shared    buffers     cached
Mem:                   360        354          5          0          8        168
-/+ buffers/cache:                177        182
Swap:                  255          2        253
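(For anyone reading the free output above: the "-/+ buffers/cache" line is the one that matters, since buffers and cached memory are reclaimable. A quick sanity check of the arithmetic, using the figures above — free rounds to whole MB, so the sum can be off by one:)

```python
# Values from the `free` snapshot above, in MB
total, used, free_mem, buffers, cached = 360, 354, 5, 8, 168

# Memory the kernel can hand back to applications on demand
reclaimable_free = free_mem + buffers + cached
print(reclaimable_free)  # 181 -- free reports 182 due to MB rounding
```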
And vmstat:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0   2528   6152   9052 170572    0    0     5    20   11   50  2  0 98  0
I'm on a Linode 360 running lighttpd, PHP, MySQL, and postfix. Lighttpd is hosting WordPress and Vanilla Forums. I average around 3,000 hits a day, but I have a large photo gallery, so people are constantly downloading relatively large images.
Should I be concerned about my disk I/O rate being so consistently over 300 with this setup?
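(If you want to sanity-check the graph against vmstat: the I/O rate roughly corresponds to bi + bo, the blocks read and written per second. A quick sketch using the sample line above — column positions assumed from the vmstat header:)

```python
# A rough sketch: pull the bi/bo columns (blocks in/out per second)
# out of a vmstat data line like the one above.
line = "0 0 2528 6152 9052 170572 0 0 5 20 11 50 2 0 98 0"
fields = line.split()
bi, bo = int(fields[8]), int(fields[9])  # the -----io---- columns
print(bi + bo)  # 25 blocks/sec in this sample
```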
9 Replies
Since more disk I/O usually means slower page generation times, and since you've got some RAM left, I'd strongly recommend installing a PHP opcode cache such as APC, eAccelerator, or XCache. Any one of those will cut your disk I/O significantly and also cut page generation times roughly in half. Oh, and don't forget wp-super-cache.
Does anyone have experience configuring wp-super-cache to work with lighttpd? I've found some guides online, but none of them were very clear. They kept mentioning mod_magnet, and I have no idea what that is.
Any pointers?
mod_magnet is a lighttpd module. If you're using Debian/Ubuntu, you can install it with
apt-get install lighttpd-mod-magnet
and then enable it in your lighttpd configuration.
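(For what it's worth, once the module is installed, wiring wp-super-cache in is typically a matter of loading mod_magnet and pointing it at a small Lua script that rewrites requests to the pre-generated cache files when they exist. A minimal sketch only — the paths and script name here are assumptions, adjust for your install:)

```
# /etc/lighttpd/lighttpd.conf -- sketch only; paths are assumptions
server.modules += ( "mod_magnet" )

# Run a Lua script before serving WordPress; the script checks
# wp-content/cache/supercache/ for a pre-generated page and rewrites
# the physical path to it if one is found.
magnet.attract-physical-path-to = ( "/etc/lighttpd/wp-supercache.lua" )
```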
@hybinet:
I wouldn't be too worried about disk I/O as long as you're not swapping hard. The number 300 seems to be a legacy of the UML days (it's ridiculously easy to go over it in Xen without doing anything out of the ordinary), so just change it to something more reasonable based on your real usage.
When I switched to Xen I got these all the time, often in the thousands, but I never swapped and it wasn't a problem. I set the notification threshold to 10,000.
@hybinet:
Since more disk I/O usually means slower page generation times, and since you've got some RAM left, I'd strongly recommend installing a PHP opcode cache such as APC, eAccelerator, or XCache. Any one of those will cut your disk I/O significantly and also cut page generation times roughly in half. Oh, and don't forget wp-super-cache.
Does that imply that something like XCache will increase the amount of RAM I use on average?
I installed XCache and noticed that I'm running with about 30MB more RAM free on average. That's a good thing, right? I was just expecting it to go down with XCache, not up.
I still haven't gotten around to wp-super-cache yet. The lighttpd rewrite rules sound too daunting at the moment.
Though I seem to have far more available memory since installing XCache, my CPU utilization has gone up noticeably. From the Linode graphs, it looks like it's gone from an average of 4% to 8%.
Is this expected? Furthermore, am I correct in assuming that a CPU utilization of anything under 20% is acceptable, given the Linode host hardware?
But of course, if your server is pumping out more pages per second thanks to XCache, that could mean a higher load on the CPU: twice as many pageviews per second means twice as many MySQL queries per second. Just a blind guess.
Still, a host node has 800% CPU so anything below 20% should be okay for you to use as you wish.
@Xan:
When I switched to Xen I got these all the time, often in the thousands, but I never swapped and it wasn't a problem. I set the notification threshold to 10,000.
I hit 100,000 on a Linode 360 at one point. I was restructuring a large, complicated MySQL database in several steps, and it took about 30 minutes to complete. The web server felt a tad slower than usual during the operation, but otherwise everything was normal.
I had configured XCache with only 16MB to work with, so it was constantly evicting things it had cached to make room for new ones.
I changed that value to 64MB, and now I'm getting no OOMs, and both the CPU utilization and the I/O rate have gone down.
So if anyone is setting up XCache, let that be a lesson to you.
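(The setting in question lives in XCache's ini file — the path varies by distro, and the values below are just a sketch of what worked for me:)

```
; /etc/php5/conf.d/xcache.ini -- path varies by distro; sizes are a sketch
xcache.size     = 64M   ; opcode cache; too small means constant eviction/OOMs
xcache.count    = 1     ; number of cache splits, usually one per CPU
xcache.var_size = 4M    ; variable data cache, if you use it
```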