Linode Manager IOPS graph vs. iostat
TL;DR:
What IOPS values are your graphs showing, and how do they match up with the tps and kB/s readouts of iostat? What would you consider a "high" value?
I'm getting about 1100 IOPS on the graph on a lightly loaded server (a steady 40 KB/s write stream, with single-digit-MB write peaks every ten seconds), and that seems odd to me.
///
I've been refreshing an old Linode - silicon knows it should have been done long ago. As part of the updates I migrated all databases to InnoDB… and promptly started receiving email alerts about crossing the standard 1000 avg-IOPS threshold.
The database is small and performs well. The single high-activity spot is a heartbeat table in which about two dozen clients each update a timestamp in their own row every second - call it thirty UPDATE commits per second. Peanuts by enterprise standards.
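For concreteness, each client's traffic looks roughly like this (a sketch - the table and column names are made up, and I'm assuming autocommit is on, so every statement is its own transaction):

    -- run once per second by each of ~25 clients against its own row;
    -- with autocommit on, each statement is a separate COMMIT
    UPDATE heartbeat SET last_seen = NOW() WHERE client_id = 17;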
Since switching that table to InnoDB, the Linode Manager IO graph has averaged about 1100 IOPS, as opposed to the old 150-ish. "iostat 1" shows a steady stream of writes (about 40 KB/s, 10-15 TPS) all the time, plus a peak of 4-5 MB/s at 200-250 TPS once every ten seconds (a log flush, I guess).
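If my guess is right and the durability settings are at their defaults, each of those ~30 commits forces a redo-log write plus an fsync, and the periodic page flushes are written twice via the doublewrite buffer, so the physical write count would naturally be a multiple of the logical commit count. The variables I'd check (standard MySQL ones, nothing exotic):

    -- = 1 (the default) means one log write + fsync per COMMIT
    SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
    -- = ON means every flushed data page is physically written twice
    SHOW VARIABLES LIKE 'innodb_doublewrite';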
Just switching that one table back to MyISAM drops iostat to near-idle - a single 50-100 KB write every couple of seconds - and returns the Manager graph to the old values.
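(For reference, the switch in that test is just the usual engine flip, with the same hypothetical table name as above:)

    ALTER TABLE heartbeat ENGINE = MyISAM;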
So - are such numbers normal, and am I simply used to an idle, non-transactional database setup here?
Or is the "iops" unit shown by Linode Manager completely different in scale from what I'm familiar with, so that e.g. 1000 is a small value and 10 000 would be a large one?
(For comparison, I have an Oracle database on physical hardware whose SAN monitoring reports a constant 450 MB/s around the clock, and calls 1800 IOPS "very high" by its standards…)