Disk Performance Is Terrible

My linode is on host24 and disk performance lately has been terrible. I first saw this when I ran a Debian "apt-get update" or "apt-get upgrade". Upon further analysis I can even see the problem by simply issuing a "man" command. ANY command that does disk I/O is slow, really slow. I can see the apt-get or man processes stuck in "D" state.

Fortunately, the one website I have on this linode appears to be cacheable and is not badly affected.

12 Replies

This may be due to a lot of swapping happening.

Take a look at

cat /proc/io_status

to see what things are like

If io_tokens is negative, then you have problems.
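If you want to check this programmatically rather than eyeballing the file, here is a minimal sketch that parses the space-separated key=value format shown in the outputs later in this thread (the field names and layout are taken from those posts; the helper names are my own):

```python
# Sketch: parse a /proc/io_status line and flag a negative io_tokens.
# Assumes the space-separated key=value format shown in this thread.

def parse_io_status(line):
    """Turn 'io_count=... io_tokens=... ...' into a dict of ints."""
    return {key: int(value)
            for key, value in (field.split("=") for field in line.split())}

def tokens_negative(line):
    """True when io_tokens has gone negative, i.e. you hit the limiter."""
    return parse_io_status(line)["io_tokens"] < 0

# Example line taken from a post further down in this thread:
sample = "io_count=26896456 io_rate=101 io_tokens=-21 token_refill=100 token_max=400000"
print(tokens_negative(sample))   # True: this Linode hit the limiter
```

On the Linode itself you would read the line from /proc/io_status instead of a hard-coded sample.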

Adam

I can't find the /proc file you mentioned. However, I do not see any significant swapping. This problem is also sporadic. For example, I just did another test and the disk I/O is moving along just fine. But, 20 minutes ago it was extremely slow.

You may need to reboot to find it; it's a new addition to the kernel…

Or:

ssh linodeUsername@host24.linode.com io_status

-Chris

Rebooted last night and I now see /proc/io_status. The io_tokens run between 175K and 200K.

So, what do the various parameters in io_status mean?

Still watching disk performance. Guessing it is related to load from other Linodes on the same host.

I'm on host 7. I've been with Linode more than a year. For the first few months, disk performance was good. I suppose as more users joined, the performance got worse. I'm guessing that other users with heavy swap are killing everybody's disk usage.

If this is the case, then I have a suggestion: put aside a separate disk just for swap usage. That way normal usage doesn't have to suffer because someone else is swapping heavily. If I'm wrong, please tell me why the disk performance is poor. Here is the output from cat /proc/io_status:

io_count=16945028 io_rate=0 io_tokens=20000 token_refill=100 token_max=20000

I just ran apt-get update and it crawled like a 386 accessing a floppy.

Linode Staff

@berryman77:

Here is the output from cat /proc/io_status.
Please check again. Those were old values.

Still though, those values don't indicate that you actually hit the limiter. I'll keep an eye on things on host7.

Thanks,

-Chris

Not sure what you mean by 'old values'. I took them when I made the post. Anyway, I took some more values, and here they are:

io_count=16953351 io_rate=0 io_tokens=399995 token_refill=512 token_max=400000

I'm not familiar with 'the limiter'. Is that something that limits my disk access?

I love my linode. There's nothing else that compares to it. This disk issue is my only gripe and I just assumed it was inherent in the setup and could not be fixed. But if it can, that's great news and I would like to get it fixed.

Randall

O.K. As usual, looks like I'm gonna eat my words. Maybe you made a change. My disk access is zipping. So fast, I would think the whole file system is in RAM. Wow!

The limiter is what keeps other people's swapping activities from killing your disk usage. So actually, it was probably your own usage that slowed things down for a minute at a time after you'd hit the limit. I'm unfortunately familiar with the limiter's behaviour, all of it my own fault. =)

This is my /proc/io_status:

io_count=26896456 io_rate=101 io_tokens=-21 token_refill=100 token_max=400000

Is this because I'm using my linode account improperly (ie., grinding on the disk too much), or is it something on the host?

(What follows is an edit of my original post…)

I have a much better handle on this now -- watching io_status really helps understand the limiter and what it does (it's pretty simple, obviously).

My refill rate seems to have been upped a little -- thanks for that, if that is what happened. It makes a huge difference.

I'm curious about how the parameters were selected, though -- it seems like the max is pretty high, and that the default refill rate is a little low. But that's just me eyeballing it, I don't have a rigorous argument to support what I'm saying.

Anyway, I understand the system now, I think I can avoid sinking myself, and everything's right with the world. :D

@astrashe:

I'm curious about how the parameters were selected, though – it seems like the max is pretty high, and that the default refill rate is a little low. But that's just me eyeballing it, I don't have a rigorous argument to support what I'm saying.

This gives a little more info on the numbers:

http://www.linode.com/forums/viewtopic.php?t=1151

As for how they were chosen, that was up to caker. I would assume he based those values off of the behaviour he was seeing with Linodes.

As for the refill rate being low, there are two sides to that. The first is that io_rate isn't a measurement per second; actually, I'm not sure what interval it covers, but it's small, whatever it is. The second is that if you're using up all your tokens and then relying on the refill rate to handle the load you're putting on the server, you're using much more I/O than is meant for a Linode. I like the limiter because it tends to point out mistakes in your server config; it can be a signal that one of your services is doing something it shouldn't be and you need to fix it. That's just me though =/
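The behaviour described through this thread is a classic token bucket: token_refill tokens are added each interval up to token_max, I/O spends tokens, and things stall once the bucket runs dry. Here is a minimal simulation under assumptions the thread doesn't confirm (one token per I/O request, and a simplified tick; the refill/max values mirror the outputs posted above):

```python
# Minimal token-bucket sketch of the limiter described in this thread.
# Assumption: each I/O request costs exactly one token; the actual
# per-request cost and tick length on the host are not documented here.

class TokenBucket:
    def __init__(self, token_refill=512, token_max=400000):
        self.tokens = token_max          # bucket starts full
        self.token_refill = token_refill
        self.token_max = token_max

    def tick(self):
        # Host adds token_refill tokens per interval, capped at token_max.
        self.tokens = min(self.tokens + self.token_refill, self.token_max)

    def request_io(self):
        # Returns True if the I/O proceeds, False if it is throttled.
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket()
# A burst one larger than token_max drains the bucket and gets throttled:
results = [bucket.request_io() for _ in range(400001)]
print(results.count(True), results.count(False))   # 400000 1
bucket.tick()
print(bucket.tokens)                               # 512
```

This also illustrates the point about the refill rate: with a full bucket you can burst up to token_max requests, but sustained load beyond token_refill per interval will eventually pin you at the refill rate.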
