Fedora 32 does not boot using updated kernel
Using the native Linode GRUB 2 boot option and the Fedora 32 distribution (created via vagrant-linode).
When I update the kernel:
# dnf update kernel-core
I expect the next reboot to use the updated kernel. But it does not.
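A quick way to see the mismatch after a reboot (standard rpm/uname, nothing specific to my setup):
rpm -q --last kernel-core | head -n 1   # newest installed kernel-core
uname -r                                # kernel actually running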
My /etc/default/grub is:
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="console=ttyS0,19200n8 net.ifnames=0 rhgb "
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true
GRUB_TERMINAL=serial
GRUB_DISABLE_OS_PROBER=true
GRUB_SERIAL_COMMAND="serial --speed=19200 --unit=0 --word=8 --parity=no --stop=1"
GRUB_DISABLE_LINUX_UUID=true
GRUB_GFXPAYLOAD_LINUX=text
GRUB_UPDATE_DEFAULT_KERNEL=true
My reading of this BLS issue seemed to suggest that BLS was working, but might there be a problem with the Fedora 32 image?
If I set GRUB_ENABLE_BLSCFG=false in /etc/default/grub and then run the following, it works:
# grub2-mkconfig -o /boot/grub2/grub.cfg
# dnf reinstall kernel-core
Is BLS broken on Grub2 + Fedora 32?
I had exactly the same issue, and the same change resolved it for me.
I didn’t have time to look at it in detail, as it was a production Linode and I was frantically trying to get it to boot the latest installed kernel.
From what I could work out, with BLS enabled the kernels were listed in reverse order, so the latest was always at the bottom, and GRUB always booted the first entry, ignoring the value of GRUB_DEFAULT or the default saved by grubby.
Disabling BLS got everything working “correctly” (at least from my POV, not necessarily from the developers!)
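If anyone wants to check the same thing, comparing what grubby thinks the default is against the running kernel makes the mismatch obvious (standard Fedora tools, nothing specific to my setup):
grubby --default-kernel   # the kernel GRUB should boot by default
grubby --default-index    # its position in the GRUB menu
uname -r                  # the kernel that actually booted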
I need to deploy a new Linode, so I figured I'd look in more detail at this before it goes into service.
It seems like there is a GRUB "saved_entry" pointing to the initial kernel version (5.7.11 in my case) which always overrides any other configuration.
The steps below allow BLS to be used (in my initial testing; time will tell as newer kernels are released.)
- Deploy a Linode built on F32
- Edit the /etc/default/grub file and add the following line:
GRUB_SAVEDEFAULT=false
This tells GRUB not to automatically save the selected option as the default.
- Run the following command to remove the saved_entry link to the original kernel:
grub2-editenv /boot/grub2/grubenv unset saved_entry
- Update the GRUB config file:
grub2-mkconfig -o /boot/grub2/grub.cfg
- Upgrade and reboot:
dnf upgrade
Following the reboot, the latest kernel should be automatically selected and used (in my case, 5.7.14.)
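Put together (assuming the stock Fedora layout with the GRUB config at /boot/grub2/grub.cfg; edit /etc/default/grub by hand if you prefer - the echo just appends the line), the whole sequence looks roughly like this:
echo 'GRUB_SAVEDEFAULT=false' >> /etc/default/grub    # stop GRUB saving the booted entry as the default
grub2-editenv /boot/grub2/grubenv unset saved_entry   # drop the pin to the original image kernel
grub2-mkconfig -o /boot/grub2/grub.cfg                # regenerate the GRUB config
dnf upgrade                                           # pull in the newer kernel, then reboot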
@andysh and @bradrubenstein - You're exactly correct. This has to do with the order in which Fedora 32 displays kernels. Our engineers deployed a fix that should allow newly created Fedora 32 Linodes to boot into the correct kernel. For existing Linodes created before the fix went out, you can edit your /etc/default/grub file to set the saved kernel to whichever one you booted into last, and then run grub2-mkconfig -o /boot/grub2/grub.cfg. You might need to manually select the kernel on your next reboot, but after that it should be automatic. You don't need to disable BLS if you go this route.
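One way to set the saved entry is with grub2-set-default, which writes saved_entry into grubenv (the entry name is the matching file in /boot/loader/entries without the .conf suffix; the placeholder below is not a real name):
ls /boot/loader/entries/                        # find the entry for the kernel you last booted
grub2-set-default '<entry-name-without-.conf>'  # placeholder - substitute the real entry name
grub2-mkconfig -o /boot/grub2/grub.cfg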
Thanks for this, but I'm still seeing the same behaviour on a fresh Linode built earlier today. The Linode is initially built with 5.7.11; after a dnf upgrade, 5.7.14 is installed, but the Linode always boots into 5.7.11.
In fact, even after rebuilding and following the instructions I posted above, the steps that had just worked for me no longer work! It seems very "hit and miss."
The only consistent way I've got this to work is to run grub2-set-default 2, which forces GRUB to boot the entry at index 2 (the 5.7.14 kernel), but I suspect the same problem will recur when a new kernel update is released.
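To see which index corresponds to which kernel, grubby can list the entries (standard Fedora tooling):
grubby --info=ALL | grep -E '^(index|kernel)'   # prints index/kernel pairs for each boot entry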
Just an update on this.
I’d built an image with my steps above already applied and used it to build my two Linodes. After updating from the kernel in the Linode image (5.7.11), they were happily booting into the 5.7.14 kernel using BLS and without intervention.
Yesterday, kernel 5.7.15 was released for F32.
I ran a normal “dnf upgrade” on one of my Linodes, rebooted and GRUB automatically selected the new kernel (5.7.15) and booted it, again without intervention.
So I’m fairly confident in my fix/workaround.
@jyoo was there any further movement on this?
I've just deployed a new Linode on F33 and it has exactly the same issue - new kernel update installed, but not selected by default.
I've tried my fix detailed above, but this hasn't worked either.
@andysh Hi!
I was looking at this issue a few days ago and was able to reproduce it on F33, but when looking at it again today I curiously was no longer able to reproduce it (i.e. a kernel update applied to a freshly-deployed F33 instance resulted in the new kernel being automatically selected in the GRUB menu, as expected). Even after rebooting 5-6 times, the new kernel was selected each time. I did notice that among the updates there appears to be an update to GRUB2 itself, so it's possible there may have been a bug in the previous version which is now fixed.
Are you still experiencing this issue on a fully-updated F33 instance? If so, one thing you could try is to pass the name of the newest kernel entry to grub2-set-default instead of a number (since the menu order is not guaranteed to be consistent between reboots). Normally Fedora will do this for you when applying a kernel update, but if you've changed it manually yourself then it might stop doing that. You can find a list of valid names by running ls /boot/loader/entries.
For example, selecting the latest kernel (as of this writing) would look like this:
grub2-set-default a612f43b70f841bba6af7deb53a7714e-5.10.23-200.fc33.x86_64
(the hex prefix may vary on your installation).
You can run cat /boot/grub2/grubenv to confirm the change has taken effect.
Hi @lblaboon
Thanks for taking a look.
Unfortunately I'm still seeing this with a new F33 Linode (Nanode 1GB) in London.
After setting up the new Linode, I ran dnf upgrade and saw the kernel upgraded from 5.10.20 to 5.10.23, and GRUB 2 upgraded to 2.04-33.
I then rebooted through the Linode Manager, and the kernel that is booted is still 5.10.20-200.
The strange thing is that Fedora has updated /boot/grub2/grubenv - see below. I've done nothing other than the dnf upgrade and reboot, and note that it says the kernel should be 5.10.23-200:
[root@li666-87 ~]# cat /boot/grub2/grubenv
# GRUB Environment Block
saved_entry=34e566a8f40e42e187beb9f41b488575-5.10.23-200.fc33.x86_64
boot_success=0
########################################################
Yet it isn't:
[root@li666-87 ~]# uname -a
Linux li666-87 5.10.20-200.fc33.x86_64 #1 SMP Thu Mar 4 13:18:27 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
[root@li666-87 ~]# ls /boot/loader/entries/
34e566a8f40e42e187beb9f41b488575-0-rescue.conf
34e566a8f40e42e187beb9f41b488575-5.10.23-200.fc33.x86_64.conf
80d8462157374e998348f8a25b06b18c-0-rescue.conf
80d8462157374e998348f8a25b06b18c-5.10.20-200.fc33.x86_64.conf
See screenshot of what GRUB2 looks like on boot - note the newer kernel is at the end of the list.
Hi again @lblaboon!
Really strange - I've done exactly the same steps, but on a Linode in Newark, and it works fine - kernel 5.10.23-200 is selected and boots correctly (see screenshot).
Does the host machine provide its own version of GRUB when "GRUB 2" is set as the kernel in the configuration profile? Could there be a difference in these versions?
When "GRUB 2" is selected your Linode boots from a copy of GRUB that is stored on the host, but this binary is identical across all of our hosts, so I suspect something else must be at play here. In my testing from before, the Linode instance I was using was on the same physical host both when it worked and when it didn't work. I'm going to continue digging to see if I can find anything else.
Thanks @lblaboon
It does seem to be quite inconsistent.
I tried on another Linode (in London) which updated to 5.10.22 (must be a mirror not quite updated to .23 yet), and that kernel was selected and booted fine in GRUB.
I rebuilt the Linode and did the same again; it still updated to .22 but only booted .20 by default!
I think I’ve figured this out, thanks to a simple statement I found in some Red Hat documentation: the “hash” part of a boot loader entry’s file name is actually the machine ID. Which got me thinking - why is the newer kernel’s different from the older one, if it’s on the same machine?
Because the older kernel was installed using the machine ID of the machine where the Linode image was built. After deployment, that ID is re-generated, but the boot loader entry files keep the old one.
This causes the order of the entries in /boot/loader/entries to become “undefined”, depending on whether the new Linode’s machine ID sorts before or after the image machine’s ID alphabetically - explaining why it works sometimes and not others.
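A quick way to see the mismatch on an affected Linode is simply to compare the two:
cat /etc/machine-id        # the machine ID generated after deployment
ls /boot/loader/entries/   # note the differing hash prefixes on the entry files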
I’ve tried this simple solution: rename the original files in /boot/loader/entries to use the new machine ID. (The 80d8... prefix is static, based on the current F33 image at the time of writing; this may change when Linode next updates the image.)
mv /boot/loader/entries/80d8462157374e998348f8a25b06b18c-0-rescue.conf /boot/loader/entries/`cat /etc/machine-id`-0-rescue.conf
mv /boot/loader/entries/80d8462157374e998348f8a25b06b18c-`uname -r`.conf /boot/loader/entries/`cat /etc/machine-id`-`uname -r`.conf
grub2-mkconfig -o /boot/grub2/grub.cfg
Reboot the Linode and then run dnf upgrade to get the latest kernel. Reboot again and you should be running the new version!
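A quick check after that final reboot:
uname -r                   # should now show the newest installed kernel
cat /boot/grub2/grubenv    # saved_entry should normally point at the same version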
@lblaboon that would be amazing!
I would like to build a pair of Fedora linodes to put into production soon, so I’ll happily try out the new images when they’re available.
Hi @andysh!
Just following up to report that the aforementioned update to our F33 image is now live. The new image will now automatically set the machine ID of the existing bootloader entry to correctly match the machine ID of your Linode instance, so that manual step should no longer be necessary.
Let me know if you run into any further issues.
Hi @lblaboon
This looks good. The new image carries the current kernel for Fedora so I can’t test a kernel update just yet, but there is a 5.11.7 kernel incoming soon so I’ll check it again when that lands.
Just a question - is it intentional there is no “rescue” boot loader entry now?
Thanks again for turning this round quickly.
Hi @lblaboon, tried this on a new F33 Linode this evening. It pulled down the 5.11.7 kernel and rebooted into it just fine after a dnf upgrade.
Thanks again for getting this sorted!
@andysh My apologies for the delayed response. Glad to hear the new image is working for you!
It is intentional that there is no rescue entry anymore. Upstream Fedora images do not generate them by default, so we figured we'd match their behavior. If you'd like to re-enable them on your instance you can simply install the dracut-config-rescue package.
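For example (assuming the stock dnf/kernel-install setup; the rescue entry should be regenerated the next time a kernel is installed, or you can force it with a reinstall):
dnf install dracut-config-rescue
dnf reinstall kernel-core   # re-runs the kernel-install hooks, which should recreate the rescue entry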