Can't mount filesystem after Ubuntu 9.04 -> 9.10 upgrade
After upgrading from 9.04 to 9.10, I'm unable to boot. I switched to the paravirt kernel as instructed here: http://blog.linode.com/2009/10/30/ubuntu-9-10-karmic-koala/
This is what my Lish console says:
NET: Registered protocol family 17
NET: Registered protocol family 15
Bridge firewalling registered
Ebtables v2.0 registered
ebt_ulog: out of memory trying to call netlink_kernel_create
802.1Q VLAN Support v1.8 Ben Greear <greearb@candelatech.com>
All bugs added by David S. Miller <davem@redhat.com>
SCTP: Hash tables configured (established 8192 bind 8192)
registered taskstats version 1
unknown partition table
blkfront: xvdb: barriers enabled
xvdb: unknown partition table
XENBUS: Device with no driver: device/console/0
md: Waiting for all devices to be available before autodetect
md: If you don't use raid, use raid=noautodetect
md: Autodetecting RAID arrays.
md: Scanned 0 and added 0 devices.
md: autorun ...
md: ... autorun DONE.
kjournald starting. Commit interval 5 seconds
EXT3-fs: mounted filesystem with writeback data mode.
VFS: Mounted root (ext3 filesystem) readonly on device 202:0.
Freeing unused kernel memory: 496k freed
Write protecting the kernel read-only data: 7192k
init: ureadahead main process (991) terminated with status 5
mount: mount point /dev/pts does not exist
mountall: mount /dev/pts [1001] terminated with status 32
mountall: Filesystem could not be mounted: /dev/pts
mount: mount point /dev/shm does not exist
mountall: mount /dev/shm [1002] terminated with status 32
mountall: Filesystem could not be mounted: /dev/shm
mount: mount point /dev/pts does not exist
mountall: mount /dev/pts [1006] terminated with status 32
mountall: Filesystem could not be mounted: /dev/pts
mount: mount point /dev/shm does not exist
mountall: mount /dev/shm [1007] terminated with status 32
mountall: Filesystem could not be mounted: /dev/shm
init: mountall main process (996) terminated with status 4
Mount of root filesystem failed.
A maintenance shell will now be started.
I ran do-release-upgrade without thinking.
Now, of course, the system doesn't boot; the Linode console gave me a bunch of mountall errors, and I need to get into some sort of rescue system.
The /boot directory is empty, there is no grub or lilo to edit. I have no networking so I cannot do much from within the system.
I really messed up. I'm sorry.
Can anyone help me?
In the Linode Manager dashboard, edit the host's configuration profile, choose the latest paravirt kernel from the pull-down menu, and reboot.
-Chris
@caker:
Click on your Linode, click on the config profile, choose "Latest Paravirt" from the kernel drop-down, save, and reboot. Done.
Already tried that - see original post:
@sumowrestler:
I used the paravirt kernel as instructed here:
http://blog.linode.com/2009/10/30/ubuntu-9-10-karmic-koala/
I had the same problem. I think either the wrong kernel or the wrong initrd was installed.
1. Log in to the rescue shell.
2. mount -o remount,rw /
3. mkdir /var/run/network
ifup eth0
Networking is required for apt-get. /var/run/network was missing after the upgrade; I'm not sure why.
4. apt-get install linux-image-virtual
This provides the right kernel and runs update-initramfs. The install will fail partway through, but that's enough to get a normal boot.
5. reboot
6. apt-get -f install
To complete the kernel installation.
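The steps above, collected into one sequence. This is a sketch to run from the maintenance/rescue shell; it also includes the apt-get update that a later poster needed because some package URLs 404'd:

```shell
# From the maintenance/rescue shell after the failed boot:

# 1. Make the root filesystem writable
mount -o remount,rw /

# 2. Restore the runtime directory ifup expects, then bring up networking
mkdir -p /var/run/network
ifup eth0

# 3. Refresh package lists (some mirrors 404 otherwise) and install the
#    virtual kernel; this runs update-initramfs and may fail partway,
#    which is still enough for a normal boot
apt-get update
apt-get install linux-image-virtual

# 4. Reboot into the new kernel, then finish the interrupted install
reboot
# ...after the reboot:
apt-get -f install
```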
Cheers
This worked! In addition to your instructions, I also ran
apt-get update
before
apt-get install linux-image-virtual
because some of the packages 404'd.
My server is back up!
@sumowrestler:
rvl,
This worked! I had to run
apt-get update
before
apt-get install linux-image-virtual
because some of the packages 404'd.
My server is back up!
Wait, that fixed it? Are you running pv-grub?
> Wait, that fixed it? Are you running pv-grub?
I don't know if I'm running pv-grub. I ran the full list of commands that rvl suggested, not just the ones I mentioned in my previous post. I edited my post to clarify this.
Whenever I boot the system using the latest paravirt kernel (Latest 2.6 Paravirt (2.6.32-x86_64-linode11)), I am unable to get to the recovery console, as the console prints out some text, then goes into a loop where three or four mountall errors are printed constantly.
Here is the startup sequence (apologies for any typos; I'm copying this from a screenshot and can't copy-paste):
EXT3-fs: mounted filesystem with writeback data mode.
VFS: Mounted root (ext3 filesystem) readonly on device 202:0.
Freeing unused kernel memory: 496k freed
Write protecting the kernel read-only data: 7192k
init: ureadahead main process (987) terminated with status 5
init: ureadahead-other main process (996) terminated with status 4
init: ureadahead-other main process (997) terminated with status 4
Then I get into the mountall loop:
mount: mount point … does not exist
mountall: mount [6378] terminated with status 32
mountall: Filesystem could not be mounted: …
mountall: Skipping mounting since Plymouth is not available
(the same four lines then repeat with PIDs 6379, 6380, 6381, …; the console truncates each line, so the mount point name is cut off)
I can get into a rescue environment using Finnix; however, I don't know what to do in there, since running update-initramfs there would build an initramfs for the Finnix kernel, not the one I want to boot!
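For what it's worth, the usual way around that from a Finnix rescue boot is to chroot into the installed system first, so update-initramfs operates on the installed kernel rather than Finnix's. A sketch; the root device name (/dev/xvda) is an assumption and should be adjusted to match your setup:

```shell
# From Finnix: mount the installed system, bind the virtual
# filesystems it expects, and chroot into it
mount /dev/xvda /mnt            # /dev/xvda is an assumption; use your root device
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt /bin/bash

# Inside the chroot: rebuild the initramfs for every installed kernel
update-initramfs -u -k all

# Leave the chroot and unmount before rebooting
exit
umount /mnt/sys /mnt/proc /mnt/dev /mnt
```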
Note that 10.04 is still in beta, so there's a nonzero chance you hit some random bug at the wrong time… if worst comes to worst, revert to the image you cloned prior to attempting the upgrade and try it again in a few days.
@hoopycat:
Hmmm, what's in your /etc/fstab? It sounds like something weird is in there. This is probably going to be a different problem than the 9.04 -> 9.10 upgrades.
Note that 10.04 is still in beta, so there's a nonzero chance you hit some random bug at the wrong time… if worst comes to worst, revert to the image you cloned prior to attempting the upgrade and try it again in a few days.
Thanks for the help…
Just in case someone else runs into this problem: the looping occurred because of an error in my /etc/fstab. I added
dev /dev tmpfs rw 0 0
to /etc/fstab.
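In the same spirit, if mountall also complains that the /dev/pts and /dev/shm mount points do not exist (as in the log at the top of the thread), the conventional fstab entries look like the following. This is a sketch, not from the thread itself: on Karmic these filesystems are normally set up by udev/mountall without fstab entries, and the mount-point directories must exist before mount can use them:

```
# /etc/fstab fragment (conventional entries; normally not needed on Karmic)
devpts  /dev/pts  devpts  gid=5,mode=620  0  0
tmpfs   /dev/shm  tmpfs   defaults        0  0
```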
libudev: udev_monitor_new_from_netlink: error getting socket: Invalid argument
mountall:mountall.c:3206: Assertion failed in main: udev_monitor = udev_monitor_new_from_netlink (udev, "udev")
init: mountall main process (782) killed by ABRT signal
General error mounting filesystems.
I added the dev line to /etc/fstab, which did not fix the problem in and of itself. Switching the kernel from stable to paravirt did get me back up and running. I have not tried booting with paravirt and without the dev entry in fstab.
Edit: My bad, guys; Google to the rescue. If anyone else is as lazy as me: the OS version is simply stored in /etc/webmin/config, and apparently it is not updated during upgrades. Kernel info should be correct, though, if the kernel was upgraded.