strange load average
I have my Linode running Fedora Core, and I have a strange, high load average (~0.5; see the bottom of this post).
[unix]$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.2 2028 644 ? Ss 22:10 0:00 init [3]
root 2 0.0 0.0 0 0 ? SN 22:10 0:00 [ksoftirqd/0]
root 3 0.0 0.0 0 0 ? S< 22:10 0:00 [events/0]
root 4 0.0 0.0 0 0 ? S< 22:10 0:00 [khelper]
root 5 0.0 0.0 0 0 ? S< 22:10 0:00 [kthread]
root 49 0.0 0.0 0 0 ? S< 22:10 0:00 [kblockd/0]
root 61 0.0 0.0 0 0 ? S 22:10 0:00 [pdflush]
root 62 0.0 0.0 0 0 ? S 22:10 0:00 [pdflush]
root 63 0.0 0.0 0 0 ? S< 22:10 0:00 [kswapd0]
root 64 0.0 0.0 0 0 ? S< 22:10 0:00 [aio/0]
root 67 0.0 0.0 0 0 ? S< 22:10 0:00 [jfsIO]
root 68 0.0 0.0 0 0 ? S< 22:10 0:00 [jfsCommit]
root 69 0.0 0.0 0 0 ? S< 22:10 0:00 [jfsSync]
root 70 0.0 0.0 0 0 ? S< 22:10 0:00 [xfslogd/0]
root 71 0.0 0.0 0 0 ? S< 22:10 0:00 [xfsdatad/0]
root 616 0.0 0.0 0 0 ? S< 22:10 0:00 [kcryptd/0]
root 617 0.0 0.0 0 0 ? S< 22:10 0:00 [ksnapd]
root 713 0.0 0.0 0 0 ? S< 22:10 0:00 [kjournald]
root 768 0.0 0.2 2120 564 ? S
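(Side note for anyone comparing numbers: the reported load can be cross-checked against what is actually runnable with plain shell; nothing Linode-specific is assumed here.)

# 1/5/15-minute load averages plus the running/total task counts
cat /proc/loadavg
# same averages, human-readable
uptime
# anything on the run queue (R) or in uninterruptible sleep (D)
ps -eo stat,pid,comm | awk '$1 ~ /^[RD]/'

On an idle box all three should agree: near-zero averages, and an empty R/D list apart from ps itself.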
Can someone explain why this is the case?
n
I have exactly the same problem on my Linode. It's been that way ever since I changed to the 2.6.20-linode28 kernel.
Load average hovers around 0.4 all the time for me.
My vmstat output shows 100% idle time:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 0 6964 83548 125664 0 0 1 11 61 15 0 0 100 0
0 0 0 6964 83548 125664 0 0 0 0 110 17 0 0 100 0
0 0 0 6964 83548 125664 0 0 0 0 105 13 0 0 100 0
0 0 0 6964 83548 125664 0 0 0 0 105 13 0 0 100 0
0 0 0 7012 83548 125664 0 0 0 0 105 17 0 0 100 0
I also set up a log to see if there was a trend, but the graph is pretty much flat, as you can see:
[ASCII gnuplot of "load-20070429" (load average vs. time of day, 00:00 through 00:00): essentially flat, hovering between roughly 0.3 and 0.4 for the whole day]
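(For anyone who wants to reproduce the trace, the logging side can be as simple as the loop below. This is a sketch only; the file name, the five-minute interval, and the HH:MM timestamp format are assumptions, not necessarily what was actually used.)

# sample the 1-minute load average every 5 minutes into a two-column file
while true; do
    read load1 _ < /proc/loadavg                       # first field is the 1-minute average
    printf '%s %s\n' "$(date +%H:%M)" "$load1" >> load-20070429
    sleep 300
done

gnuplot will then accept the same "using 1:2" plot command once it is told that column 1 is a time of day (set xdata time; set timefmt "%H:%M").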
The load is wreaking havoc with my monitoring - I was actually considering opening a ticket, but then I saw your post.
–deckert
@caker:
Jeff knows about this and is looking into it.
-Chris
Thanks caker. I'd be happy to assist in testing (i.e. be a guinea pig).
Also, the load does not affect the actual performance of my Linode. I've done a couple of staggered tests between the latest 2.4 and latest 2.6 kernels, and here are the results:
OS : Linux 2.4.29-linode39-1um
C compiler : gcc version 3.3.4
libc : ld-2.3.2.so
MEMORY INDEX : 13.791
INTEGER INDEX : 9.333
FLOATING-POINT INDEX: 23.904
and
OS : Linux 2.6.20-linode28
C compiler : gcc version 3.3.4
libc : ld-2.3.2.so
MEMORY INDEX : 16.524
INTEGER INDEX : 11.235
FLOATING-POINT INDEX: 28.948
So the 2.6 kernel is a little faster, even with the additional load it reports.
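(Those look like whole-suite index numbers; as a crude cross-check of raw CPU speed, one can also time the same fixed busy-loop under each kernel and compare the elapsed time. A throwaway sketch, not the benchmark used above:)

# run once under each kernel and compare the elapsed ("real") time
time awk 'BEGIN { s = 0; for (i = 0; i < 5000000; i++) s += i * i; print s }'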
–deckert
It seems as if there is an unaccounted-for load on the machines, but this load has no effect on performance or CPU usage percentages. For some reason, I find this disconcerting :)
take care
n
@npk1977:
It seems as if there is an unaccounted-for load on the machines
The load is being calculated incorrectly. Something similar happened previously, where the kernel reported 1 + the actual load value.
EDIT:
-Chris
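(For the curious: the kernel's load average is an exponentially decaying average of the count of runnable plus uninterruptible tasks, sampled every few seconds, so a task that is always counted by mistake pushes the steady-state figure up by exactly one. A toy illustration of that arithmetic in awk; this is not the actual kernel code and not the linode28 fix.)

awk 'BEGIN {
    e = exp(-5.0/60)             # decay per 5-second sample for the 1-minute average
    load = 0
    for (t = 0; t < 500; t++) {  # roughly 40 minutes of samples, enough to converge
        runnable = 0             # an otherwise idle box...
        phantom  = 1             # ...plus one task wrongly counted as active
        load = load * e + (runnable + phantom) * (1 - e)
    }
    printf "steady-state 1-minute load: %.2f\n", load   # prints 1.00
}'

Set phantom to 0 and the same loop settles back at the true load; an over-count that is present only part of the time would show up as a fractional offset like the ~0.3-0.4 seen in this thread.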
@pclissold:
That link points to a 'KVM paravirtualization for Linux' thread started by Ingo Molnar on lkml.org - or did I miss something?
Whoops. Fixed.
-Chris
@caker:
Jeff found the problem and has provided a fix for it. I'll be building a new kernel in the next few days with the correction.
Nicely done, caker! Thanks for pushing this through to Jeff for us.
–deckert