The Linux Page

The pivot_root error & GRUB magic

Boot on Linux

The Error: pivot_root not found

Debian or Ubuntu and pivot_root and GRUB and hard drives... In general, I like Linux, but once in a while, it generates quite a few problems!

Yesterday, I used apt-get upgrade to upgrade my system, which I hadn't done for the last 2 months or so. 460 packages upgraded and many more untouched. So far so good. The TeX installation failed because I had said I wanted a dedicated group and the installer did not automatically create that group. I created the group and all was fine (though, looking at the logs, many X11 entries were only half installed.)

As it went on, I noticed a few kernel updates and stuff about GRUB so I thought, let's reboot to run the latest 100%. Yeah... unless it doesn't reboot, hey?!


Problem 1. GRUB

First of all, GRUB has a part in its menu.lst that is automatically updated. That part is just not trustworthy: it puts back some defaults (it very much feels like it, at least) and of course my system doesn't work well with defaults.

Just to mention, my HD is a copy of a RAID (yeah... I don't have my RAID on that computer anymore, it's for the company server now!) and somehow GRUB is capable of knowing that it was installed for RAID, but not that it isn't anymore. So the kernel lines were all back to root=/dev/md0.

Also, the default drive was set back to (hd0,0), which is not what I use (what's that convention anyway?!)
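For reference, the part that update-grub maintains in menu.lst sits between the AUTOMAGIC markers, and after the upgrade my entries looked roughly like this (kernel version, paths and drive numbers here are placeholders, not my exact setup):

```
### BEGIN AUTOMAGIC KERNELS LIST
title   Debian GNU/Linux, kernel 2.6.x
root    (hd0,0)
kernel  /boot/vmlinuz-2.6.x root=/dev/md0 ro
initrd  /boot/initrd.img-2.6.x
### END AUTOMAGIC KERNELS LIST
```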

Okay! So today I got the answer! When I need to install GRUB from a Linux rescue environment, it runs the df command to determine what's what. The problem is that df just prints the contents of the mtab file in a formatted way, and the mtab file is NOT up to date!!! So, now that I know, I can tell you that to fix the boot you do this:

Note: On newer kernels (2.6.x), it looks like the drives are much more likely to be sda, sdb, ... I had a boot problem the other day because that changed and the boot drive could not be mounted since I was telling the system to look for hda2, and it did not know about hda at all! Try changing your /etc/fstab to use /dev/sda1, /dev/sda2, ..., /dev/sdb1, ... instead of hd. The "s" stands for serial, in case you were wondering. All the drives are being switched from parallel (IDE/ATA) to serial (SATA) using SCSI drivers.
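For instance, the change in /etc/fstab goes along these lines (the partition layout here is just an example, not necessarily yours):

```
# Before (parallel IDE/ATA naming):
# /dev/hda1  /boot  ext3  defaults  0  2
# /dev/hda2  /      ext3  defaults  0  1
# After (serial/SATA naming through the SCSI layer):
/dev/sda1    /boot  ext3  defaults  0  2
/dev/sda2    /      ext3  defaults  0  1
```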
  • Get a Linux booting CD with linux rescue on it
  • Boot from that CD
  • Once in the console, determine the drive you need to fix (hda, hde, ...)
  • Create a directory in mnt (i.e. mkdir /mnt/fix)
  • Mount the drive you need to fix (i.e. mount /dev/hda3 /mnt/fix)
  • Look at the /etc/mtab file which shows you how the drive got mounted (i.e. cat /etc/mtab)
  • chroot /mnt/fix
  • Now edit your existing mtab file: vi /etc/mtab   and put one line for the root with the correct drive (as you've seen in the linux rescue mtab file); the mount options hardly matter. What you need are the first two fields, "/dev/hda3 /"; the rest can be almost anything.
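For example, if the rescue environment mounted the root partition from /dev/hda3, the edited mtab only really needs that one line (device and file system type here are examples, use what the rescue mtab showed for your system):

```
/dev/hda3 / ext3 rw 0 0
```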

Today, I had to run through this procedure on an Ubuntu 24.04 server and, needless to say, the old instructions were not quite sufficient. I used the following instead:

# If not there, create /mnt
sudo mkdir /mnt

# Mount root partition:
sudo mount /dev/sda3 /mnt # direct hard drive
# or
sudo mount /dev/mapper/ubuntu--vg-ubuntu-lv /mnt # LVM partition

# If you have a separate boot partition (change sda2 with your boot partition):
sudo mount /dev/sda2 /mnt/boot

# Mount your virtual filesystems:
for i in /dev /dev/pts /proc /sys /run; do sudo mount -B $i /mnt$i; done

# Chroot
sudo chroot /mnt

# Edit config file(s) as needed, for example:
vim /boot/grub/grub.cfg

# Apply changes; if you edited /etc/default/grub instead of
# grub.cfg directly, regenerate grub.cfg with:
update-grub

# Get out of chroot
<hit ctrl-d>

# Reboot
init 6

Note: with the Ubuntu 24.04 installation DVD, there is no "rescue" option. I booted as if I were to install the server. At the point where it asked me what type of server I wanted to install, I hit Alt-F2. It loaded the second console and auto-logged in. From there, I could run the commands above.

On the reboot, that instance of Linux asked me to first remove the DVD from the drive, which is really cool. Once done with that, it rebooted as expected.

Problem 2. RAID -> not RAID

Yeah... well, I lost my 170GB RAID for a 200GB SATA. That's certainly a lot better: that SATA runs at 133MB/s whereas the RAID was at 33MB/s. (This is because I have a CD-ROM, like most of us, which only does 33MB/s... and that drops one channel to 33MB/s, and for a RAID you need everything the same, so all the IDE drives ran at 33MB/s... it seems to me that at least one of the drives should have been at 100MB/s, but well...)

So, the SATA drive in my BIOS is recognized as the 1st drive. Practical when you say: Boot on drive C: (for systems which are brain dead in regard to booting). That's what I have. Once the system is booted, though, it looks at the IDE channels first because their hardware connectors come first. That means my SATA, instead of being hda, is hde!

Problem 3. Live CD from Ubuntu

Okay, just that took me some time to decipher in part because I first tried to use the Boot CD from Debian. Today I thought: hey! why don't I use a Live CD from Ubuntu, I have several still...

A lot better: the Live CD could boot into a more real console. That console could be used to check and edit the setup much more easily, and especially I could run GRUB after a chroot. Very cool. So I played around and around to try to figure all of these things out. Since the GRUB installation program had wiped out my previous setup, the problem was to find out what that setup was. (Now I have made a backup of menu.lst AND I keep my setup outside the "auto generated" marks — note that this is a special Debian feature; you don't have it in Red Hat [maybe in Fedora?])


1st error: filesystem type unknown

The very first error was something like:

filesystem type unknown partition type 0xc

Error 17 : Cannot mount selected partition

This is because it was trying to load from the wrong partition. I recognized it because 0xc is FAT32 and I have some room to someday maybe install Win32 on that drive.

So I went in and tried to change the partition. I put (hd0,10) since I have the root partition on partition 10...

2nd error: File not found

Cool, an hour spent finding the correct drive, and now on to the next error!

Error 15: File not found

So now it recognizes the file system, it can read from it, and it looks for the kernel file. The problem now is that the file cannot be found.

Ha! Of course, GRUB counts from 0 and not from 1 like everybody else, so I lost a good hour figuring out that I had to put 9. At first I was very much wondering which drive I was on, since GRUB would say that my SATA was hd1 when booted into a shell. But it was hd0 anyway. Okay, so it was (hd0,9) for my SATA drive.
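In other words, GRUB legacy numbers both drives and partitions from 0, so the mapping goes like this (using my drive as the example):

```
# Linux device       GRUB legacy name
# /dev/hde1     ->   (hd0,0)    (partition 1  -> 0)
# /dev/hde10    ->   (hd0,9)    (partition 10 -> 9)
root (hd0,9)
```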

3rd error: pivot_root not found

The file system is recognized, or at least so I thought. Now it tries to switch over to the real root file system and I get a kernel panic after the system says that it cannot find some file (pivot_root here; you could get some others too.)

pivot_root: No such file or directory
/sbin/init: 424: cannot open dev/console: No such file
Kernel panic: Attempted to kill init!	

This one took me the rest of my day to figure out. I tried all sorts of things and the main one, I thought, would be to put root=/dev/blah to make sure that the correct device was used to do the root pivoting.

So I wrote root=/dev/hde. And that didn't work either; now the error was an invalid file system. On to the Ubuntu shell, run fsck on all the partitions, not even 1 error, of course, since I had shut down cleanly. So?! What was wrong? Very simple: you need to specify the exact partition and not just a device. So all I had to do here was to add 10 at the end: root=/dev/hde10. And it worked, finally!
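So the working kernel line in menu.lst ended up along these lines (the kernel file name is a placeholder for whatever actually sits in your /boot):

```
kernel /boot/vmlinuz-2.6.x root=/dev/hde10 ro
```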

Okay, this last one is certainly my fault! I could have paid a little more attention. I'm just thinking that it would be neat if software like GRUB could know that the specified device was wrong and explain what's wrong, instead of the errors we usually get... Maybe one day it will be like that.

4th error: Back to RAID, wrong magic

Argh! I tried today to go back to a RAID system since I got new drives (2x 300GB at $99 each, in April 2006!) And I get similar errors, but the other way around. More or less, I have to boot on /dev/hde5 (i.e. I have to tell the kernel & initrd that the root is on hde5) even though the RAID is on /dev/md5.

When I boot with /dev/md5 as the root path, it panics when it needs to read files with a:

cramfs: wrong magic

Which means it's trying to read the wrong stuff. The problem is it doesn't tell you what it's trying to read, so I don't know what to do next to fix that part. For now, though, it looks like it boots just fine: the RAID is up, I can play around, and it seems to be fully functional.

5th error: Wrong permissions on /tmp

It goes on... 8-) This is not really a booting issue; it's starting X-Windows. With all the changes I have made, somehow the /tmp directory got its permissions changed from the usual to drwxr-xr-x. I had to do a chmod 1777 /tmp and my X11 session would start. I'm wondering whether something else got messed up like that!
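In other words, the fix is restoring the sticky bit plus world-writable permissions. A minimal sketch, using a scratch directory instead of the real /tmp so it is safe to try:

```shell
# Create a scratch directory to demonstrate mode 1777 (sticky bit + rwx for all)
mkdir -p /tmp/tmp-perms-demo
chmod 1777 /tmp/tmp-perms-demo
# The sticky bit shows up as a trailing 't' in the mode string:
ls -ld /tmp/tmp-perms-demo | cut -c1-10   # prints: drwxrwxrwt
```

On the real system the command is simply chmod 1777 /tmp, run as root.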