shared cpu

Hello.

On a Linode 360 in Newark, I had awful results from the command "uptime" over the last seven hours. Sites hosted there were pretty slow at times. Can I be 100% sure that the processes responsible for that overconsumption of resources are mine? Is it possible that someone else on the same server is exceeding their normal usage and dragging everybody else down?

Thanks

8 Replies

I have the exact same question. I'm on dallas80 and noticed my sites and ssh were freezing. I checked /proc/loadavg and noticed that it was reporting load average: 1.14, 1.08, 0.60. The CPU graph on the Members area shows that my linode is only using ~2.61% CPU, so I would imagine it is other linodes causing the problem.

I realize a load average of 1.14 is still reasonable. I've seen much higher loads on other VPS providers and they still function reasonably well. I imagine the issue is something else that I'm not noticing. I thought it could be packet loss, but ping shows 0% packet loss.
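For what it's worth, here's a minimal sketch (Python, assuming a Linux guest; nothing in it is Linode-specific) of the check I was doing by hand: read /proc/loadavg and compare it against the number of CPUs the VM can see.

```python
#!/usr/bin/env python
# Minimal sketch: read the 1/5/15-minute load averages from /proc/loadavg
# and compare them against the number of CPUs visible to this guest.
# A load consistently above the CPU count means runnable (or IO-blocked)
# processes are queueing up somewhere.
import os

with open("/proc/loadavg") as f:
    one, five, fifteen = (float(x) for x in f.read().split()[:3])

cpus = os.sysconf("SC_NPROCESSORS_ONLN")  # CPUs the guest can see
print("load: %.2f %.2f %.2f over %d CPU(s)" % (one, five, fifteen, cpus))
if one > cpus:
    print("1-minute load exceeds CPU count -- something is queueing up")
```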

On a side note, the Host Load on the Host Summary only shows 'Low'. The only time I saw this before was when the host was creating my new disk image.

This is strange because my whole system is now back to normal. I haven't changed anything in terms of setup or code. I checked the Apache logs and the traffic was as usual. When it was bad, I rebooted the server and the high "uptime" readings came back right away. Like a bad storm that comes and goes.

> On a Linode 360 in Newark, I had awful results from the command "uptime" over the last seven hours. Sites hosted there were pretty slow at times. Can I be 100% sure that the processes responsible for that overconsumption of resources are mine? Is it possible that someone else on the same server is exceeding their normal usage and dragging everybody else down?

Are you on newark21 by any chance?

Our VM on that host was incredibly sluggish for about 10 hours last night. Nothing out of the ordinary on our end, but anything that hit the disk took 5-30 seconds to complete. Even with nothing running, we saw load averages of 5+ and lots of iowait.

I have a ticket in and the Linode guys are looking into it…

Yes, this is Newark21. Looks like there was something going on.

I have a node on Newark8 that is running fine, but that says nothing about whether Newark21 is having problems. As for Dallas80, I have a node on there too and it is running great. The only problem I had was an out-of-memory condition that was easy to fix.

The load average displayed by uptime and top is for your node only, not the host. This means that if the average is high and your node is sluggish then you are the one killing performance.

@SelfishMan:

The load average displayed by uptime and top is for your node only, not the host. This means that if the average is high and your node is sluggish then you are the one killing performance.

Thanks for the info. For some reason I couldn't find anything that directly explained the high loads reported by /proc/loadavg. Perhaps it was iowait or something, since top and the Linode Manager graphs didn't show anything above 3%. I also looked at the IO Rate graphs and didn't notice anything above 200 at the time.
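If it was iowait, one crude way to confirm it would have been to sample /proc/stat directly. A rough sketch, assuming a Linux guest:

```python
#!/usr/bin/env python
# Rough sketch (Linux guest assumed): sample the aggregate "cpu" line in
# /proc/stat twice and report how much of the interval went to iowait vs.
# idle vs. busy. High iowait with low CPU use points at a disk/IO bottleneck.
import time

def cpu_times():
    with open("/proc/stat") as f:
        # first line: "cpu  user nice system idle iowait irq softirq ..."
        return [int(x) for x in f.readline().split()[1:]]

a = cpu_times()
time.sleep(5)
b = cpu_times()
delta = [y - x for x, y in zip(a, b)]
total = float(sum(delta)) or 1.0
idle, iowait = delta[3], delta[4]
print("iowait %.1f%%  idle %.1f%%  busy %.1f%%" % (
    100 * iowait / total, 100 * idle / total,
    100 * (total - idle - iowait) / total))
```

`vmstat 5` or `iostat -x 5` would show the same thing with less typing.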

My linode on dallas80 is now running great. The problem only occurred once and hasn't appeared since.

Sorry about going off topic. Should have started a new post for my problems.

> The load average displayed by uptime and top is for your node only, not the host. This means that if the average is high and your node is sluggish then you are the one killing performance.

That is not necessarily true, as can be evidenced on a physical machine. Let's say:

* Your hard drive begins to fail
* The kernel is repeatedly timing out IO requests and retrying them
* A process (let's say __ls__) tries to read /home/you
* It blocks until the kernel reads the block
* Another process (now we pick __vi__) tries to write /tmp/lolcats.txt
* That blocks until the kernel finishes with ls and retries the write

If that lasts long enough your load is going to go up to 2. Now most people would agree that 2 is not optimal. I think most people would also say that running ls and vi at the same time is not me killing performance. Now add in all the normal IO that happens on a healthy system…
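To make that concrete, here's a small sketch (Python, assuming a Linux guest) that lists processes stuck in uninterruptible sleep; every one of those counts toward the load average even though it uses no CPU:

```python
#!/usr/bin/env python
# Sketch: walk /proc and list processes in uninterruptible sleep (state "D").
# These are almost always blocked on IO, and they inflate the load average
# without consuming any CPU time.
import os

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open("/proc/%s/stat" % pid) as f:
            data = f.read()
    except IOError:
        continue  # process exited between listdir() and open()
    # Format is "pid (comm) state ..."; comm can contain spaces, so split
    # on the closing paren instead of on whitespace.
    comm = data[data.index("(") + 1:data.rindex(")")]
    state = data[data.rindex(")") + 1:].split()[0]
    if state == "D":
        print("pid %s (%s) is in uninterruptible sleep" % (pid, comm))
```

`ps -eo pid,stat,comm` shows the same state column if you'd rather not script it.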

Linode staff were all over this and had an answer for us yesterday: someone on newark21 was thrashing/grinding their disk, and that caused IO access on the other VMs to tank. So all of our loads went up.

For posterity, I'll also mention that they are monitoring this sort of thing now to prevent it from becoming a problem in the future. Gotta love Linode, these guys must never sleep!

@hippo:

> I have the exact same question. I'm on dallas80 and noticed my sites and ssh were freezing. I checked /proc/loadavg and noticed that it was reporting load average: 1.14, 1.08, 0.60. The CPU graph on the Members area shows that my linode is only using ~2.61% CPU, so I would imagine it is other linodes causing the problem.

A common cause for this is someone hammering the disks on that host. The CPU usage won't necessarily reflect this (though it can impact the load average).
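If you want to rule out your own guest as the source of the IO, here's a rough sketch (Python; the device-name prefixes are just a guess for a typical Xen guest, adjust for your setup) that samples /proc/diskstats and reports how much this node itself read and wrote. Tiny numbers here combined with high iowait point at contention outside your VM.

```python
#!/usr/bin/env python
# Sketch (Linux guest assumed): sample /proc/diskstats twice and report how
# many sectors this guest itself read and wrote over the interval. Rough by
# design: matching by prefix may count a disk and its partitions twice.
import time

def sectors():
    reads = writes = 0
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            # fields[2] is the device name; fields[5] and fields[9] are
            # sectors read and sectors written (see Documentation/iostats.txt)
            if len(fields) >= 10 and fields[2].startswith(("xvd", "sd", "hd")):
                reads += int(fields[5])
                writes += int(fields[9])
    return reads, writes

r1, w1 = sectors()
time.sleep(5)
r2, w2 = sectors()
print("this guest: %d sectors read, %d written in 5s" % (r2 - r1, w2 - w1))
```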
