RFT: Kernels 2.6.27.4-linode14 and 2.6.27.4-x86_64-linode3
I'd appreciate some wider testing of these kernels – available now under your configuration profile's kernel drop-down.
The sourceballs are also located here:
Thanks!
-Chris
34 Replies
Installed and rebooted.
So far no problems. I will let you know how it does after a few days.
```
# file 2.6.27.4-linode14.tar.bz2
2.6.27.4-linode14.tar.bz2: gzip compressed data, from Unix, last modified: Thu Nov 6 14:14:17 2008
```
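Despite the .tar.bz2 name, file identifies the sourceball as gzip data. Assuming it really is a gzipped tar, as that output suggests, it should unpack anyway:

```
# GNU tar auto-detects the compression format on extraction,
# so the misleading .bz2 extension doesn't matter:
tar xf 2.6.27.4-linode14.tar.bz2

# or be explicit about gzip, matching what file(1) reported:
tar xzf 2.6.27.4-linode14.tar.bz2
```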
@cburgess:
Installed and rebooted.
So far no problems. I will let you know how it does after a few days.
Ditto.
```
dfelicia@catch-22 ~ $ uname -a
Linux catch-22 2.6.27.4-linode14 #1 SMP Thu Nov 6 09:22:58 EST 2008 i686 Intel(R) Xeon(R) CPU L5420 @ 2.50GHz GenuineIntel GNU/Linux
dfelicia@catch-22 ~ $ uptime
20:58:47 up 2 days, 11:52, 1 user, load average: 0.12, 0.08, 0.01
```
So far so good. In fact, though it could quite possibly be due to some change or improvement by my Internet provider, I swear access to my node is faster. When I ssh in, I'm logged in instantly. My homepage also loads instantly (http://www.donsbox.com/).
Any change from 2.8.18 -> 2.6.27 that could account for my now seemingly lower latency?
@dfelicia:
Any change from 2.8.18 -> 2.6.27 that could account for my now seemingly lower latency?
That's several years of kernel development… it could've been anything :)
@dfelicia:
Any change from 2.8.18 -> 2.6.27 that could account for my now seemingly lower latency?
Perhaps your use of a kernel from the future was affecting the space-time continuum negatively, thus increasing your latency.
I'm running 32-bit Gentoo with Drupal.
I'll let you know if I see any adverse effects.
> Any change from 2.8.18 -> 2.6.27
OK, I made a typo. You guys are brutal.
BTW, like marcus, I'm running 32-bit Gentoo.
@zengei:
I'm only seeing 2 CPU cores now instead of 4 as with 2.6.26.x.
Reboot and you'll see 4 cores again. This was a temporary misconfiguration on our end.
-Chris
@caker:
@zengei:
I'm only seeing 2 CPU cores now instead of 4 as with 2.6.26.x.
Reboot and you'll see 4 cores again. This was a temporary misconfiguration on our end.
-Chris
Great, thanks.
The Proliferation of Linux
So far everything seems to work. I think things might be a bit more zippy now, but that's a purely subjective observation.
The server (a Linode 360) runs:
* WordPress MU
* Varnish 2
* Lighttpd (and PHP through FastCGI)
@k33l0r:
So far everything seems to work. I think things might be a bit more zippy now, but that's a purely subjective observation.
Here are some results from Apache benchmark (quite impressive, I like to think):
```
hex-vps ~ # ab -n 10000 -c 50 http://proliferationoflinux.org/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking proliferationoflinux.org (be patient)

Server Software:        lighttpd/1.4.20
Server Hostname:        proliferationoflinux.org
Server Port:            80

Document Path:          /
Document Length:        29109 bytes

Concurrency Level:      50
Time taken for tests:   1.392 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      295349990 bytes
HTML transferred:       291090000 bytes
Requests per second:    7184.97 [#/sec] (mean)
Time per request:       6.959 [ms] (mean)
Time per request:       0.139 [ms] (mean, across all concurrent requests)
Transfer rate:          207234.49 [Kbytes/sec] received
```
2) Munin's display of memory cache looked most peculiar for my site.
3) Three times in a couple of days I had unusual 'disk io' alerts for no reason I could find.
I went back to the standard "latest 2.6" as before. Munin graphs now look normal. I'll wait and see about disk io alerts.
James
@zunzun:
I went back to the standard "latest 2.6" as before. Munin graphs now look normal.
Munin's memory cache graph now looks as it used to, fairly smooth, not anywhere nearly so chopped up and fragmented as with the newer kernel.
Link to munin image of cache memory showing this:
Link to munin image of CPU showing bad display of idle cpu:
James
http://www.donsbox.com/munin/
What version of munin are you using? I'm on 1.34.
@dfelicia:
What version of munin are you using? I'm on 1.34.
I don't normally use this kind of arcane technical jargon, but my graphs are definitely poopy-doopy.
James
Not sure why (tell me how to find out and I'll cooperate), but this bug applies even to a default Gentoo installation.
```
 * Setting system clock using the hardware clock [UTC] ...
 * Cannot access the Hardware Clock via any known method.
   Use the --debug option to see the details of our search for an access method.
 * Failed to set clock You will need to set the clock yourself
 [ !! ]
 * Configuring kernel parameters ... [ ok ]
 * Updating environment ... [ ok ]
 * Cleaning /var/lock, /var/run ... [ ok ]
 * Wiping /tmp directory ... [ ok ]
 * Device initiated services: udev-postmount
 * Setting hostname to fell ... [ ok ]
 * Loading key mappings ... [ ok ]
 * Setting terminal encoding to UTF-8 ... [ ok ]
 * Setting user font ... [ ok ]
 * Starting lo
 *   Bringing up lo
 *     127.0.0.1/8
 [ ok ]
 * Adding routes
 *   127.0.0.0/8 ... [ ok ]
 * Initializing random number generator ... [ ok ]
INIT: Entering runlevel: 3
 * Starting metalog ... [ ok ]
 * Starting eth0
 *   Bringing up eth0
 *     66.246.76.xxx
 [ ok ]
 * Adding routes
 *   default via 66.246.76.1 ... [ ok ]
 * Mounting network filesystems ... [ ok ]
 * Starting local ... [ ok ]
```
After that, it just hangs forever.
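The boot log itself hints at the next diagnostic step. A sketch of the command in question (an assumption based on the error text, since the poster asked how to find out more):

```
# hwclock's --debug flag prints each hardware-clock access method
# it tries and why each one is rejected (util-linux).
hwclock --debug
```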
@drake127:
I am running Gentoo x64 and this new kernel is unable to boot properly.
It's booting all the way; you're just not getting a getty login prompt on the correct console device node.
Do you have "Xenify" set to Yes in your configuration profile?
Do you get a getty login prompt on the 2.6.18-x86_64 kernel?
Can you paste the getty lines from your /etc/inittab?
For reference, 2.6.18 kernels want /dev/tty1 for console, whereas the pv_ops kernels (those > 2.6.18) want /dev/hvc0. Xenify is supposed to take care of making this modification for you, even when switching between kernels.
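As a rough sketch (these exact lines are an assumption; runlevels and agetty flags vary by distro, so check them against your own /etc/inittab), the getty entries would differ like so:

```
# 2.6.18 kernels: console getty on /dev/tty1
c1:12345:respawn:/sbin/agetty 38400 tty1 linux

# pv_ops kernels (> 2.6.18): console getty on /dev/hvc0 instead
hvc0:12345:respawn:/sbin/agetty 38400 hvc0 linux
```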
-Chris
@caker:
For reference, 2.6.18 kernels want /dev/tty1 for console, whereas the pv_ops kernels (those > 2.6.18) want /dev/hvc0. Xenify is supposed to take care of making this modification for you, even when switching between kernels. Thank you very much. Didn't know that one.
So no, I didn't have xenification turned on because it wasn't need in 2.6.18_x64.
@zunzun:
I don't normally use this kind of arcane technical jargon, but my graphs are definitely poopy-doopy.
I'm now inclined to believe this is munin-specific, and that memory cache is fine.
James
However: the OOM killer has blown up. The system is totally unresponsive and the console is repeating this over and over forever:
```
Mem-info:
Normal per-cpu:
CPU    0: Hot: hi: 186, btch: 31 usd: 12   Cold: hi: 62, btch: 15 usd: 51
Active:84829 inactive:41008 dirty:0 writeback:0 unstable:0
 free:655 slab:3389 mapped:3 pagetables:1201 bounce:0
Normal free:2620kB min:2960kB low:3700kB high:4440kB active:339316kB inactive:164032kB present:548640kB pages_scanned:228230182 all_unreclaimable? yes
lowmem_reserve[]: 0 0
Normal: 1*4kB 1*8kB 17*16kB 11*32kB 1*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 2620kB
Swap cache: add 1158766, delete 1158766, find 2060666/2215923, race 5+107
Free swap  = 0kB
Total swap = 263160kB
Free swap:            0kB
 138240 pages of RAM
 0 pages of HIGHMEM
 3950 reserved pages
 573 pages shared
 0 pages swap cached
printk: 27627 messages suppressed.
cron invoked oom-killer: gfp_mask=0xa01d2, order=0, oomkilladj=0
1e297cb0:  [<080693f2>] dump_stack+0x22/0x30
1e297cc8:  [<080b3cd9>] out_of_memory+0x109/0x140
1e297cf4:  [<080b51c5>] __alloc_pages+0x355/0x380
1e297d48:  [<080b7688>] __do_page_cache_readahead+0x128/0x1a0
1e297d84:  [<080b77f1>] do_page_cache_readahead+0x51/0x70
1e297da4:  [<080b1615>] filemap_fault+0x1f5/0x310
1e297de8:  [<080bed75>] __do_fault+0x55/0x3d0
1e297e2c:  [<080bf13e>] do_linear_fault+0x4e/0x50
1e297e50:  [<080bf39c>] handle_mm_fault+0xbc/0x2a0
1e297e84:  [<0806a52c>] handle_page_fault+0x13c/0x230
1e297eb8:  [<0806a887>] segv+0x177/0x2f0
1e297f6c:  [<0807e046>] handle_segv+0x56/0x60
1e297f90:  [<0807e856>] userspace+0x216/0x260
1e297fe4:  [<0806b434>] fork_handler+0x74/0x90
1e297ffc:  [<4000b080>] 0x4000b080
```
Performance-wise I didn't see much difference. Both RAM and CPU usage are still the same.
On the new kernel, `free -m -t` and `htop` both report 349MB of available memory; on the stable kernel, both report 360MB (as they should).
It was nice, though, seeing only 35MB/349MB in use, versus the usual 60MB/360MB on the stable kernel.
Unfortunately, I can't tell if that was just a misreport or an actual saving…
On Linux x.y.com 2.6.27.4-x86_64-linode3, running

```
/sbin/iptables -t mangle -A MANGLE_OUTPUT -p tcp --dport 80 -j TOS --set-tos Maximize-Throughput
```

the output is

```
iptables: Unknown error 18446744073709551615
```

I use this ruleset (containing this rule) on other hosts, and haven't had this problem before.
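For what it's worth, 18446744073709551615 is 2^64 - 1, i.e. a -1 printed as an unsigned 64-bit integer, so the real errno is getting lost somewhere along the way. One hedged way to narrow it down (0x08 is just the numeric value of Maximize-Throughput, in case symbolic-name parsing is the part that breaks on this kernel):

```
# Same rule with a numeric TOS value; if this also fails, the problem
# is in the TOS target itself rather than in name lookup.
/sbin/iptables -t mangle -A MANGLE_OUTPUT -p tcp --dport 80 -j TOS --set-tos 0x08
```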
Thanks,