BUG: Bad page state in process kworker
I'm running Debian Bullseye using the Linode supplied kernel 5.10.13-x86_64-linode141. Nothing unusual here. Everything was fine until a recent migration to the new infrastructure; since then, on 2 separate Linodes, I've started seeing this in my kern.log:
Mar 21 08:52:38 top kernel: [ 2916.700336] BUG: Bad page state in process kworker/0:4 pfn:096de
Mar 21 08:52:38 top kernel: [ 2916.708001] page:0000000076eae2ef refcount:-1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x96de
Mar 21 08:52:38 top kernel: [ 2916.709718] flags: 0x1ffff0000000000()
Mar 21 08:52:38 top kernel: [ 2916.710422] raw: 01ffff0000000000 dead000000000100 dead000000000122 0000000000000000
Mar 21 08:52:38 top kernel: [ 2916.711868] raw: 0000000000000000 0000000000000011 ffffffffffffffff 0000000000000000
Mar 21 08:52:38 top kernel: [ 2916.713278] page dumped because: nonzero _refcount
Mar 21 08:52:38 top kernel: [ 2916.714166] Modules linked in:
Mar 21 08:52:38 top kernel: [ 2916.714744] CPU: 0 PID: 488 Comm: kworker/0:4 Not tainted 5.10.13-x86_64-linode141 #1
Mar 21 08:52:38 top kernel: [ 2916.716131] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014
Mar 21 08:52:38 top kernel: [ 2916.718160] Workqueue: mm_percpu_wq drain_local_pages_wq
Mar 21 08:52:38 top kernel: [ 2916.719118] Call Trace:
Mar 21 08:52:38 top kernel: [ 2916.719626] dump_stack+0x6d/0x88
Mar 21 08:52:38 top kernel: [ 2916.720244] bad_page.cold.119+0x63/0x93
Mar 21 08:52:38 top kernel: [ 2916.720974] free_pcppages_bulk+0x18e/0x6a0
Mar 21 08:52:38 top kernel: [ 2916.721747] drain_pages_zone+0x41/0x50
Mar 21 08:52:38 top kernel: [ 2916.722434] drain_pages+0x3c/0x50
Mar 21 08:52:38 top kernel: [ 2916.723073] drain_local_pages_wq+0xe/0x10
Mar 21 08:52:38 top kernel: [ 2916.723828] process_one_work+0x1fb/0x390
Mar 21 08:52:38 top kernel: [ 2916.724566] ? process_one_work+0x390/0x390
Mar 21 08:52:38 top kernel: [ 2916.725322] worker_thread+0x221/0x3e0
Mar 21 08:52:38 top kernel: [ 2916.726014] ? process_one_work+0x390/0x390
Mar 21 08:52:38 top kernel: [ 2916.726788] kthread+0x116/0x130
Mar 21 08:52:38 top kernel: [ 2916.727376] ? kthread_park+0x80/0x80
Mar 21 08:52:38 top kernel: [ 2916.728055] ret_from_fork+0x22/0x30
Mar 21 08:52:38 top kernel: [ 2916.728709] Disabling lock debugging due to kernel taint
Mar 21 08:52:38 top kernel: [ 2916.729672] BUG: Bad page state in process kworker/0:4 pfn:0827c
Mar 21 08:52:38 top kernel: [ 2916.730768] page:00000000089561ca refcount:-1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x827c
Mar 21 08:52:38 top kernel: [ 2916.732430] flags: 0x1ffff0000000000()
Mar 21 08:52:38 top kernel: [ 2916.733113] raw: 01ffff0000000000 dead000000000100 dead000000000122 0000000000000000
Mar 21 08:52:38 top kernel: [ 2916.734498] raw: 0000000000000000 0000000000000011 ffffffffffffffff 0000000000000000
Mar 21 08:52:38 top kernel: [ 2916.735870] page dumped because: nonzero _refcount
Mar 21 08:52:38 top kernel: [ 2916.736731] Modules linked in:
Mar 21 08:52:38 top kernel: [ 2916.737288] CPU: 0 PID: 488 Comm: kworker/0:4 Tainted: G B 5.10.13-x86_64-linode141 #1
Mar 21 08:52:38 top kernel: [ 2916.738934] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014
Mar 21 08:52:38 top kernel: [ 2916.740937] Workqueue: mm_percpu_wq drain_local_pages_wq
Mar 21 08:52:38 top kernel: [ 2916.741893] Call Trace:
Mar 21 08:52:38 top kernel: [ 2916.742362] dump_stack+0x6d/0x88
Mar 21 08:52:38 top kernel: [ 2916.743002] bad_page.cold.119+0x63/0x93
Mar 21 08:52:38 top kernel: [ 2916.743717] free_pcppages_bulk+0x18e/0x6a0
Mar 21 08:52:38 top kernel: [ 2916.744478] drain_pages_zone+0x41/0x50
Mar 21 08:52:38 top kernel: [ 2916.745186] drain_pages+0x3c/0x50
Mar 21 08:52:38 top kernel: [ 2916.745806] drain_local_pages_wq+0xe/0x10
Mar 21 08:52:38 top kernel: [ 2916.746538] process_one_work+0x1fb/0x390
Mar 21 08:52:38 top kernel: [ 2916.747964] ? process_one_work+0x390/0x390
Mar 21 08:52:38 top kernel: [ 2916.749414] worker_thread+0x221/0x3e0
Mar 21 08:52:38 top kernel: [ 2916.750792] ? process_one_work+0x390/0x390
Mar 21 08:52:38 top kernel: [ 2916.752193] kthread+0x116/0x130
Mar 21 08:52:38 top kernel: [ 2916.753413] ? kthread_park+0x80/0x80
Mar 21 08:52:38 top kernel: [ 2916.754697] ret_from_fork+0x22/0x30
This is not completely benign. These servers are now freezing up from time to time, roughly once a week or so.
I have tried rebooting into single-user mode and running e2fsck. The file system is clean.
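In case it helps anyone triage the same thing, here is a rough helper I used to see whether the same page frames (the pfn: values) keep recurring in the log, or whether new frames are flagged each time. `count_bad_pages` is just a throwaway name of mine; it assumes GNU grep and the exact message format shown in the dump above:

```shell
# Sketch: count distinct page frames (pfn values) flagged with
# "Bad page state" in a kernel log. count_bad_pages is a made-up
# helper name; adjust the log path for your distribution.
count_bad_pages() {
    grep -o 'BUG: Bad page state in process [^ ]* pfn:[0-9a-f]*' "$1" \
        | awk -F'pfn:' '{ print $2 }' \
        | sort -u | wc -l | tr -d ' '
}

# Example: count_bad_pages /var/log/kern.log
```

If the same pfn values recur across boots, that would point more toward the host than toward anything inside the guest.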
Is anyone else seeing these errors after a recent migration? What could be the cause, and how can it be fixed?
7 Replies
I suggest you report this to Debian… "Bullseye" is not released yet. That being said, here's some info about a problem similar to yours:
Note the comment about refcount:-1. I think that's what's relevant here (however, just my wild-a** guess…). I'm not a kernel guy and I don't play one on TV…
-- sw
I already posted this on the Debian mailing list, and the conclusion there was that this is a Linode problem, since the message comes from the kernel.
@mgrant --
You write:
I'm running Debian Bullseye using the Linode supplied kernel 5.10.13-x86_64-linode141.
Emphasis mine.
Maybe you could try a different kernel with the same minor revision (10) but a lower patch revision (something below 13). I don't know if such a thing exists, or even what it would be if it did… another wild-a** guess on my part.
It may give the Linode folks some clue as to what regression, if any, might have occurred.
-- sw
Within the last couple of days, I have started getting the same log entry on the same kernel, but on CentOS and without a recent migration.
Hello,
We are also experiencing random kernel panics on our Linodes; it seems to be related to the newest host updates/migrations. We are running CentOS 7 on our servers.
Odd that I've just found this post, as we've been seeing similar things on one of our Linodes over the past few weeks. Nothing has changed on the server other than the usual updates, and after reaching out to Linode Support, they advised me to run some diagnostics, capture htop dumps, etc., to find out what is causing the issue.
We run Ubuntu 16.04.
I've sent Linode Support a link to this thread.