You just completed an apt upgrade. The server reboots and stops hard at “Kernel panic – not syncing: VFS: Unable to mount root fs on unknown-block(0,0)”. By now Slack is ablaze, the console is full of messages that make no sense, and you know the problem lies somewhere between the bootloader and the initramfs. Time is of the essence.
The Problem: A routine update changed your kernel, your boot partition, or the ramdisk used to reach the real root filesystem. The server can no longer find the disk it has booted from for the last two years.
The Constraints: You cannot wipe and reinstall the server. The situation has to be resolved in place, with no data loss and without turning it into a prolonged production outage.
The Solution: Boot from a live USB, chroot into the filesystem, rebuild the initramfs image, and fix the GRUB configuration. It is a step-by-step procedure, and once you have practiced it a couple of times, the dreaded “unknown-block” message becomes just another day at work. Below I walk through how I brought a fleet of Ubuntu and Red Hat Enterprise Linux (RHEL) servers back online after upgrades left them with missing kernel components.
Quick Summary
- How to read the kernel panic messages caused by missing modules.
- How to boot from a live USB, chroot in, and regenerate the initramfs on both Debian-based and Red Hat-based distributions.
- How to handle the LUKS and LVM edge cases that many other guides skip.
Environments Tested: Every step of these instructions was performed successfully on Ubuntu 22.04 LTS and on Red Hat Enterprise Linux (RHEL) 9.2; the same logic and steps apply to CentOS Stream 9 and Debian 12.
Understanding the Unknown-Block Kernel Panic Error
When the kernel prints “VFS: Unable to mount root fs on unknown-block(0,0)”, it is reporting that it could not find any block device to mount as the root filesystem: no device it has a driver for contained anything usable as root. The (0,0) means both the major and minor device numbers were 0, i.e. the device has no identifiers at all; effectively, it is a “null” device. The two most common causes are an initramfs that fails to hand the kernel a valid block device, and a missing driver that prevents the kernel from seeing the storage controller.
Do not simply reinstall GRUB without first reading the stack trace and the lines printed immediately before the panic. The usual culprit is a missing SCSI/SATA/NVMe driver, a broken initramfs image, or a wrong UUID.
Analyzing the Boot Screen Error Logs
The following is a filtered sample of the console output for this issue, with the significant lines marked.
[ 1.234567] Loading initial ramdisk ...
[ 1.345678] md: Waiting for all devices to be available before autodetect
[ 1.456789] md: If you don't use raid arrays, ignore the above message
[ 1.567890] Begin: Running /scripts/init-premount ... done.
[ 2.123456] mdadm: No devices listed in conf file were found.
[ 2.234567] Gave up waiting for root file system device. Common problems:
- Boot args (cat /proc/cmdline)
- Check rootdelay= (did the system wait long enough?)
- Missing modules (cat /proc/modules; ls /dev)
[ 2.345678] ALERT! UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx does not exist. Dropping to a shell!
[ 2.456789] Rebooting automatically due to panic= boot option
[ 2.567890] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) <-- Panic line
[ 2.678901] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]---
The message “ALERT! UUID=... does not exist” tells me that a UUID hardcoded into the initramfs (or passed by GRUB) does not match any device attached to the system. If you do not see the UUID line but do see “Gave up waiting,” you are probably missing the driver for the storage device. The final “unknown-block(0,0)” confirms that the kernel never received a valid block device.
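Before rebooting in a loop, I gather a few facts from the `(initramfs)` BusyBox prompt. This is a minimal sketch; `/proc/cmdline` and `/dev` exist in any initramfs shell, but `blkid` may or may not be included in yours:

```shell
# Run these at the (initramfs) BusyBox prompt before changing anything.
cat /proc/cmdline                             # which root= did GRUB actually pass?
ls /dev/sd* /dev/nvme* /dev/vd* 2>/dev/null   # which block devices does the kernel see?
# If blkid made it into the initramfs, check whether any device carries that UUID:
command -v blkid >/dev/null && blkid || echo "blkid not available in this initramfs"
```

If `ls` shows no block devices at all, you have a driver problem; if devices exist but none carries the UUID from `root=`, you have a UUID mismatch.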
System Architecture Prerequisites
I try to visualize the boot sequence so I can see where the chain of events broke down. A text representation of the Linux boot process, from power-on to login prompt, appears below.
BIOS/UEFI ➜ GRUB2 (bootloader)
GRUB2 loads kernel (vmlinuz) and initramfs (initrd.img) into memory
Kernel initialises hardware ➜ mounts initramfs as temporary root
Initramfs runs scripts to detect real root (LVM, LUKS, filesystem drivers)
Initramfs mounts real root (e.g. /dev/mapper/vg-root) ➜ pivots to real root
Systemd or init takes over ➜ userland boot completes
If the initramfs does not include the driver for your disk controller, or cannot activate an LVM volume group, the hand-off fails and you see the panic.
Core Causes of Linux Boot Failures
There are three main things that I have encountered over and over when troubleshooting why Linux fails to boot.
Missing or Incomplete Initramfs Image
When you update the kernel, the initramfs is rebuilt. Occasionally the package script fails to create the file, or builds it incompletely. When that happens, the initramfs ships without the virtio_blk or nvme driver, the kernel cannot see the disk on reboot, and you get the unknown-block panic.
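You can check an image for a specific driver before trusting it. The helper name `has_module` is mine; produce the listing with `lsinitramfs <image>` on Debian/Ubuntu or `lsinitrd <image>` on RHEL and pipe it in:

```shell
# Hypothetical helper: does a module appear in an initramfs file listing?
# Matches nvme.ko as well as compressed variants like nvme.ko.zst.
has_module() {
  grep -q "/$1\.ko" -
}

# Demonstration against a fake two-line listing (a real one has thousands of lines):
printf '%s\n' \
  'usr/lib/modules/5.15.0-91-generic/kernel/drivers/nvme/host/nvme.ko' \
  'usr/lib/modules/5.15.0-91-generic/kernel/drivers/block/virtio_blk.ko' \
  | has_module nvme && echo "nvme driver present"
```

On a real system: `lsinitramfs /boot/initrd.img-$(uname -r) | has_module nvme`.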
Incorrect UUID in the GRUB Menu
This one is sneaky. Your grub.cfg may still reference the UUID of a partition you resized or moved. The kernel boots, but because the root= parameter points at a device that no longer exists, the initramfs cannot locate the root filesystem.
See the following for an example of a broken /boot/grub/grub.cfg file:
menuentry 'Ubuntu' {
linux /vmlinuz-5.15.0-91-generic root=UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890 ro quiet
initrd /initrd.img-5.15.0-91-generic
}
You must then check the UUID of the filesystems with blkid to see if they match.
sudo blkid /dev/sda2
/dev/sda2: UUID="f9e8d7c6-b5a4-3210-fedc-ba0987654321" TYPE="ext4" PARTUUID="12345678-02"
If the UUIDs do not match, the root filesystem cannot be mounted. You will not see a helpful “UUID mismatch” message; you will see the unknown-block panic. I always verify that the UUID reported by blkid matches the UUID in the root= parameter.
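Here is a small sketch of that comparison. The helper name `extract_grub_uuid` is mine, and the sample line mirrors the broken grub.cfg above:

```shell
# Pull the root=UUID= value out of a GRUB 'linux' line.
extract_grub_uuid() {
  printf '%s\n' "$1" | grep -oE 'root=UUID=[0-9a-fA-F-]+' | cut -d= -f3
}

sample='linux /vmlinuz-5.15.0-91-generic root=UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890 ro quiet'
extract_grub_uuid "$sample"   # prints a1b2c3d4-e5f6-7890-abcd-ef1234567890

# On the real system, compare against what the disk reports:
#   blkid -s UUID -o value /dev/sda2
```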
Corrupted Boot Partition
A power failure or a botched dd can corrupt the boot sector or the filesystem on /boot. The initramfs file may then be incomplete, and the kernel panics as soon as GRUB loads it. This is typically fixed by running fsck on the boot partition from a live environment (such as a bootable USB).
What I Tried That Didn’t Work
I wasted a lot of time on two failed attempts before finding the real fix. First, I tried booting an older kernel from the GRUB advanced menu; it failed because the older kernel’s initramfs was missing the same driver modules. Second, I manually passed root=/dev/sda2 instead of the UUID on the GRUB command line; the kernel kept panicking because the initramfs lacked any disk drivers and could not see any devices, so no matter what I passed to root=, it could not mount the root filesystem. Both failures reinforced the same lesson: rebuild the initramfs no matter what.
How to Fix Kernel Panic VFS Unable to Mount Root FS
Boot to a live USB, mount the broken system, chroot into it, and regenerate the initramfs. The process is the same whether you use Debian’s update-initramfs command or RHEL’s dracut command, so I will show both.
Before changing any configurations, make a backup. I always copy /boot and grub.cfg somewhere else while in the chroot environment; it is cheap insurance if something goes wrong.
cp -a /boot /boot.bak
cp /boot/grub/grub.cfg /boot/grub/grub.cfg.bak
Booting into a Live USB Environment
Grab an Ubuntu or Rocky Linux ISO, write it to a USB stick, boot from it, and select “Try Ubuntu” or the rescue mode so you get a shell, not the installer.
Mounting File Systems for Chroot Boot Recovery
To set up the chroot recovery environment, mount the root partition (and /boot, if it lives on a separate partition) under a temporary mount point, then bind-mount the virtual filesystems. The following commands cover the typical case where /boot is a separate partition:
# Identify your root and boot partitions with lsblk or fdisk -l
mount /dev/sda2 /mnt
mount /dev/sda1 /mnt/boot
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
mount --bind /run /mnt/run
chroot /mnt /bin/bash
Without the bind mounts, tools inside the chroot cannot see current devices or talk to the running kernel. Once chrooted, it is as if you are sitting at the broken system itself.
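A quick sanity check I run right after chrooting, to confirm I am inside the broken system and not still on the live USB:

```shell
# Inside the chroot: these should reflect the installed system, not the live image.
ls /boot 2>/dev/null               # kernels and initrd images should be listed
grep PRETTY_NAME /etc/os-release   # should name the installed distro
```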
Running update-initramfs in Rescue Mode
Next, recreate the initramfs (memory disk image) for all installed kernels. For example, on Debian/Ubuntu systems:
update-initramfs -u -k all
The output from the command above should look something like this:
update-initramfs: Generating /boot/initrd.img-5.15.0-91-generic
update-initramfs: Generating /boot/initrd.img-5.15.0-86-generic
The output confirms that the images have been regenerated, contain the new kernel modules, and, most importantly, include the current disk drivers. Log out of the chroot environment and unmount all previously mounted filesystems before you restart the system. In 90% of cases, this is sufficient to resolve the panic.
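Here is the teardown I use after typing `exit` inside the chroot. The function name `cleanup_chroot` is mine; `umount -R` requires a reasonably recent util-linux, hence the fallback loop:

```shell
# Define in the live-USB shell; call it after exiting the chroot.
cleanup_chroot() {
  umount -R /mnt 2>/dev/null || {
    # older util-linux lacks -R (recursive); unmount innermost first
    for m in /mnt/run /mnt/sys /mnt/proc /mnt/dev /mnt/boot /mnt; do
      umount "$m" 2>/dev/null
    done
  }
}
# usage (after `exit` from the chroot):
#   cleanup_chroot && reboot
```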
Advanced Recovery: Dracut Initramfs Rebuilds and GRUB Fixes
RHEL-based distributions use dracut instead of update-initramfs, so the commands differ slightly from the Debian/Ubuntu ones. The tools serve the same purpose, but their flags differ. You may also need to regenerate the GRUB configuration if the update corrupted it.
Rebuilding the Initramfs with Dracut on RHEL Systems
First, chroot into the RHEL system exactly as shown above. Once inside the chroot environment, run:
dracut --force --regenerate-all
dracut: Executing: /usr/bin/dracut --force --regenerate-all
dracut: dracut module 'busybox' will not be used because it's excluded
dracut: *** Including module: bash ***
...
dracut: *** Creating initramfs image file '/boot/initramfs-5.14.0-362.el9.x86_64.img' ***
The --force option replaces the existing images, while --regenerate-all rebuilds the image for every kernel in /boot. This ensures the new images contain the nvme, sd_mod, and usb_storage drivers that were missing before.
Resolving a Grub Configuration Error
After you have repaired the initramfs, confirm that GRUB points at the UUID of the correct partition and that no stale device entries remain in its configuration file. From inside the chroot, generate a fresh GRUB configuration:
grub-mkconfig -o /boot/grub/grub.cfg
If you’re working with a UEFI-based system and the bootloader itself needs reinstalling, use grub-install (named grub2-install on RHEL-family systems, which also keep their config at /boot/grub2/grub.cfg):
grub2-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB
Output from a successful run of grub-mkconfig looks like this:
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.15.0-91-generic
Found initrd image: /boot/initrd.img-5.15.0-91-generic
Found linux image: /boot/vmlinuz-5.15.0-86-generic
Found initrd image: /boot/initrd.img-5.15.0-86-generic
done
Because the configuration is generated from inside the chroot, the UUIDs are read from the actual filesystems and will match what blkid reports. That is how you both prevent and fix UUID mismatches.
Edge Case: LVM Inconsistencies and Encrypted Disks Post-Update
Updates can silently break LVM and LUKS volumes. I have seen updates change the LVM metadata format just enough that the lvm command inside the initramfs could not activate the logical volumes, and I have seen LUKS key-slot handling change underfoot. Below are the steps to reactivate the volumes.
Forcing LVM Volume Activation in Pre-boot
If you land in the busybox initramfs rescue shell during boot, the quickest way to reactivate your logical volumes is:
lvm vgchange -ay
If you’re instead chrooted in from a live USB, bind-mount /run first, then run:
vgchange -ay
exit
The command scans all physical devices for volume-group identifiers, assembles the volume groups, and activates the logical volumes. Once activated, the root LV becomes accessible at /dev/mapper/vg-root and can be mounted from the initramfs environment. If this works, make the fix permanent by rebuilding the initramfs afterwards so it picks up the current LVM configuration.
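Before mounting, I confirm nothing was left inactive. The helper name `inactive_lvs` is mine; feed it the output of `lvm lvscan` (the sample below mirrors lvscan's real output shape):

```shell
# Filter lvscan output for volumes that still need activation.
inactive_lvs() {
  grep '^ *inactive' - | sed -E "s/.*'([^']+)'.*/\1/"
}

# Demonstration with lvscan-shaped sample output:
printf '%s\n' \
  "  ACTIVE            '/dev/vg0/root' [50.00 GiB] inherit" \
  "  inactive          '/dev/vg0/swap' [8.00 GiB] inherit" \
  | inactive_lvs
```

On a real system: `lvm lvscan | inactive_lvs` from the rescue shell; an empty result means everything is active.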
Unlocking LUKS Partitions Manually via Cryptsetup
When the root filesystem lives on an encrypted partition, a healthy boot drops you at a passphrase prompt; a broken one may show nothing at all. From a live USB environment, you can unlock the LUKS container manually:
cryptsetup luksOpen /dev/sda3 cryptroot
Now you can mount the unlocked /dev/mapper/cryptroot device at /mnt and enter a chroot session as shown earlier. Check the /etc/crypttab file and make sure the UUID listed there is correct, then rebuild your initramfs; the rebuild permanently embeds fresh cryptsetup hooks into the new image. The Arch Linux dm-crypt documentation covers the same cryptsetup workflow, which applies to any modern Linux distribution.
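The sequence can be sketched as follows. Device names follow the examples in this article (adjust for your layout), and the helper name `crypttab_uuid` is mine; compare its result with `cryptsetup luksUUID`:

```shell
# Hedged sketch of the post-unlock recovery sequence:
#   mount /dev/mapper/cryptroot /mnt
#   mount /dev/sda1 /mnt/boot
#   for d in dev proc sys run; do mount --bind /$d /mnt/$d; done
#   chroot /mnt /bin/bash
#   update-initramfs -u -k all        # or on RHEL: dracut --force --regenerate-all

# Pull the UUID out of the first real /etc/crypttab entry for comparison:
crypttab_uuid() {   # usage: crypttab_uuid < /etc/crypttab
  awk '!/^#/ && NF >= 2 { sub(/^UUID=/, "", $2); print $2; exit }'
}
printf 'cryptroot UUID=1111-2222 none luks\n' | crypttab_uuid
# ...should equal the output of: cryptsetup luksUUID /dev/sda3
```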
Frequently Asked Questions
Why does my server only crash after kernel updates?
Kernel updates trigger an initramfs rebuild. If the build scripts leave out a required block driver (because of a packaging bug or custom build options), the new initramfs cannot boot the system. Your previous “good” kernel’s initramfs usually still exists, but GRUB defaults to the newest kernel, so the broken image is the one that loads. That is also why booting a previous kernel from the GRUB menu often works as a stopgap.
Is it possible for me to roll back to a previous kernel using the GRUB menu?
Yes. If the server still displays a GRUB menu, choose “Advanced options” and select a prior kernel version. If that kernel boots, you have established that the problem lies with the new initramfs or the new kernel, and you can repair the current kernel while the server runs on the old one. If the previous kernel panics too, your /boot is probably corrupted or you have a system-wide driver problem that still requires a live USB.
How do I protect myself from possible future corruption of my initramfs during OS upgrades?
Monitor the output generated by the package manager. If the update-initramfs or dracut phase reports missing firmware files or failed kernel-module builds, do not reboot until you have resolved those issues. Before every major update, I also keep a copy of /boot/initrd.img-<version> for safety; on RHEL, consult the dracut documentation for ways to preserve an existing initramfs before regenerating it. The Debian initramfs-tools documentation also describes lsinitramfs, which lists an image’s contents so you can confirm the necessary kernel modules are present before you reboot.
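My own pre-reboot check on Debian/Ubuntu boils down to this (paths assumed from this article; use `lsinitrd` instead of `lsinitramfs` on RHEL):

```shell
# Confirm the freshly built initramfs exists and carries a block driver
# before you reboot into it.
img="/boot/initrd.img-$(uname -r)"
[ -s "$img" ] && echo "initramfs present: $img"
# lsinitramfs ships with initramfs-tools on Debian/Ubuntu:
lsinitramfs "$img" 2>/dev/null | grep -qE 'virtio_blk|nvme|sd_mod' \
  && echo "block driver found" \
  || echo "WARNING: no block driver found in $img"
```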