Shared CPU
On a Linode 360 in Newark, I've had awful results from the command "uptime" over the last seven hours. Sites hosted there were pretty slow at times. Can I be 100% sure that the processes responsible for that overconsumption of resources are mine? Is it possible that someone else on the same server is exceeding their normal usage and dragging everybody else down?
Thanks
I realize a load average of 1.14 is still reasonable. I've seen much higher loads on other VPS providers, and they still functioned reasonably well. I imagine the issue is something else that I'm not noticing. I thought it could be packet loss, but ping shows 0% packet loss.
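(The packet-loss check was nothing fancy, just stock ping; the hostname below is a placeholder.)

```
# 20 probes; the summary line at the end reports the packet-loss percentage
ping -c 20 mynode.example.com
```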
On a side note, the Host Summary's Host Load only shows 'Low'. The only time I saw that before was when the host was creating my new disk image.
> On a Linode 360 in Newark, I've had awful results from the command "uptime" over the last seven hours. Sites hosted there were pretty slow at times. Can I be 100% sure that the processes responsible for that overconsumption of resources are mine? Is it possible that someone else on the same server is exceeding their normal usage and dragging everybody else down?
Are you on newark21 by any chance?
Our VM on that host was incredibly sluggish for about 10 hours last night. Nothing was out of the ordinary on our end, but anything that hit the disk took 5-30 seconds to complete. Even though nothing heavy was running, we saw load averages of 5+ and lots of iowait.
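If anyone wants to put a number on that kind of latency, a crude check along these lines shows it (file name and size are arbitrary):

```
# time a small synchronous write; oflag=direct bypasses the page cache (GNU dd)
time dd if=/dev/zero of=./ddtest bs=1M count=16 oflag=direct
rm ./ddtest
```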
I have a ticket in and the Linode guys are looking into it…
The load average displayed by uptime and top is for your node only, not the host. This means that if the average is high and your node is sluggish then you are the one killing performance.
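You can see exactly what your guest reports with a quick check (the output below is just an example):

```
# the same figures uptime and top display, read straight from the guest kernel
cat /proc/loadavg
# example output: 0.50 0.40 0.30 1/80 12345
# fields: 1-, 5- and 15-minute averages, running/total tasks, last PID
```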
@SelfishMan:
The load average displayed by uptime and top is for your node only, not the host. This means that if the average is high and your node is sluggish then you are the one killing performance.
Thanks for the info. For some reason I couldn't find anything directly responsible for the high loads /proc/loadavg was reporting. Perhaps it was iowait or something, though top and the Linode Manager graphs didn't show CPU above 3%. I also looked at the IO Rate graphs and didn't notice anything above 200 at the time.
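Next time I'll watch iowait directly; a rough sketch of the check (interval and count are arbitrary):

```
# the 'wa' column (under cpu) is the percentage of time spent waiting on IO
vmstat 1 5
```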
My linode on dallas80 is now running great. The problem only occurred once and hasn't appeared since.
Sorry about going off topic. Should have started a new post for my problems.
> The load average displayed by uptime and top is for your node only, not the host. This means that if the average is high and your node is sluggish then you are the one killing performance.
That is not necessarily true, as can be evidenced on a physical machine. Let's say:
* Your hard drive begins to fail
* The kernel is repeatedly timing out IO requests and retrying them
* A process (let's say __ls__) tries to read /home/you
* It blocks until the kernel reads the block
* Another process (now we pick __vi__) tries to write to /tmp/lolcats.txt
* That blocks until the kernel finishes with ls and gets to the write
If that lasts long enough your load is going to go up to 2. Now most people would agree that 2 is not optimal. I think most people would also say that running ls and vi at the same time is not me killing performance. Now add in all the normal IO that happens on a healthy system…
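The mechanism behind this: Linux counts tasks in uninterruptible sleep (state D, typically blocked on IO) toward the load average, not just tasks burning CPU. A quick way to spot them, as a sketch:

```
# list tasks in uninterruptible sleep (state D), i.e. blocked on IO
ps -eo state,pid,comm | awk '$1 == "D"'
```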
Linode staff was all over this and had an answer for us yesterday: someone on newark21 was thrashing/grinding their disk, and it caused IO access on the other VMs to tank. So all of our loads went up.
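For what it's worth, you can see that kind of contention from inside a guest, though not who is causing it; something like this, assuming the sysstat package is installed:

```
# extended per-device stats; high await/%util while your own traffic is low
# points at contention outside your VM
iostat -x 1 5
```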
For posterity, I'll also mention that they are monitoring this sort of thing now to prevent it from becoming a problem in the future. Gotta love Linode, these guys must never sleep!
@hippo:
I have the exact same question. I'm on dallas80 and noticed my sites and ssh were freezing. I checked /proc/loadavg and noticed that it was reporting load average: 1.14, 1.08, 0.60. The CPU graph on the Members area shows that my linode is only using ~2.61% CPU, so I would imagine it is other linodes causing the problem.

A common cause for this is someone hammering the disks on that host. The CPU usage won't necessarily reflect this (though it can impact the load average).