The Linux Page

Mounting a VDI disk in your host system to edit the file system

Ordeal

Today I made a mistake and created the file /etc/sudoers.d/timeout, which was definitely not compatible with sudo. The file was JSON, when sudo only accepts very basic var=value style lines.
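
For reference, files under /etc/sudoers.d must use the sudoers syntax, and the safer way to create one is with visudo, which refuses to save an invalid file. Something along these lines is probably what I should have done (the timestamp_timeout value is only a guess at the setting I was after):

sudo visudo -f /etc/sudoers.d/timeout

and then, inside the editor, a plain sudoers line such as:

Defaults timestamp_timeout=30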

Result?

I could not use sudo anymore on that VM. I had to find a way to fix the file system without having to rebuild the entire disk because that would have taken way too long.

How to Mount a VDI File

Since your VDI (VirtualBox Disk Image) files are literally disk images, they should be mountable, right? Yes, they are. That is pretty much what the VirtualBox code does internally, but somehow we are not provided with a dead easy way (it seems) to do just that from the host.

On my old system I used FUSE successfully, but it seems to get confused by the VDI format. So I looked for a different method, and there were so many steps that I really needed to write them down to find them again.

The steps below work just fine in Ubuntu 16.04 and should continue to work for a while. Other solutions may work using FUSE, but I've never been too happy with FUSE.

First, Your VDI File

My VDI disk is under my VirtualBox folder, something like this:

"VirtualBox VMs/VM Name/disk.vdi"

Note that in the console you will want to use quotes. At least, I find quotes easier to use than a ton of backslashes. My full disk paths have even more spaces and parentheses, so they become difficult to read with escaping instead of quotes.
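
For example, these two commands point at the same file, but the quoted version stays readable while the escaped one quickly becomes noisy:

ls "VirtualBox VMs/VM Name/disk.vdi"
ls VirtualBox\ VMs/VM\ Name/disk.vdi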

Installing qemu

The file can be attached to a virtual block device using the qemu-nbd tool.

If you don't have it yet, make sure to install qemu on your host machine:

sudo apt-get install qemu

You may only need qemu-kvm, although having the entire environment on your development machine probably won't hurt.
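
On my Ubuntu host, the qemu-nbd tool itself comes from the qemu-utils package, so installing just that package may be enough (this is an assumption about your distribution; check that the tool answers afterward):

sudo apt-get install qemu-utils
qemu-nbd --version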

Making the VDI Accessible as a Device

With qemu-nbd we can now start a virtual disk device this way:

sudo qemu-nbd -c /dev/nbd1 "VirtualBox VMs/VM Name/disk.vdi"

Note: I use /dev/nbd1 instead of /dev/nbd0 because somehow when I use /dev/nbd0 the system does not automatically create the expected three partitions. This may be normal, but when using /dev/nbd1 it works as expected so I am thinking that there is something fishy about /dev/nbd0.
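
Side note: if no /dev/nbd* devices exist at all on your host, the nbd kernel module is probably not loaded yet. Loading it is an extra step I did not need on my system, but it may help (the max_part value is just a reasonable guess):

sudo modprobe nbd max_part=16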

The nbd device takes over and creates partitions. These are likely going to be named:

/dev/nbd1p1
/dev/nbd1p2
/dev/nbd1p5
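
To verify that the partitions really showed up, you can list the nbd device (lsblk is just one way to check):

lsblk /dev/nbd1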

Partition 1 can be mounted as is because it is a plain (non-LVM) Linux file system. It is used to boot the Virtual Machine.
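
The mount point itself has to exist; if it does not, create it first (/mnt/vdi-boot is simply the name I picked):

sudo mkdir -p /mnt/vdi-boot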

sudo mount /dev/nbd1p1 /mnt/vdi-boot

Partition 2 is a swap partition. You should not have to do anything with this one. Since it is swap, trying to mount it as is will fail: swap space does not hold a regular file system, so the system activates it (with swapon) rather than mounting it.
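
If you are curious about what any of these partitions holds before touching it, blkid will tell you (purely an extra check, not a required step):

sudo blkid /dev/nbd1p2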

Partition 5 is your root drive. This includes /home, /etc, /usr, and so on: pretty much all your files. However, this partition has type code 8E, which means Linux LVM. LVM stands for Logical Volume Manager. This is an advanced partitioning system which leaves the handling of the physical drive to the base system and only gives you a logical way of partitioning the available space. It also handles many capabilities of modern drive setups, such as RAID. For example, if you have two hard drives you can very easily create a partition which is duplicated on both drives (RAID 1), or you can combine several drives into one very large partition with parity (RAID 5).
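
To see these partition type codes for yourself, you can dump the partition table of the nbd device (again just an informational check):

sudo fdisk -l /dev/nbd1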

The "physical" partition numbers (1, 2, and 5) are managed by the LVM system too when you format your drive, although as far as I know these could be changed.

Mounting an LVM System

Now that we have a device available, we can think about mounting it. The LVM System makes things a little more complicated, but not impossible to handle.

First, we want to be able to run kpartx, so we need to install it if not yet available:

sudo apt-get install kpartx

This tool transforms the LVM partitions into a set of mapped devices which can themselves be mounted.

To transform our physical drive partition 5 shown above into a set of partitions that we can mount, run this command:

sudo kpartx -a /dev/nbd1

The kpartx tool creates mapped devices. These appear under /dev/mapper with their respective names:

$ ls /dev/mapper/
control
nbd1p1
nbd1p2
nbd1p5
snaptest--vg-root
snaptest--vg-swap_1
...
$ ls /dev/snaptest-vg
root
swap_1
$

Here we see a list of physical and virtual devices each representing a partition that can be mounted.

In my case, the one that interests me is:

snaptest--vg-root

As we can see, it has the base name of the Virtual Machine I created and it says "root", which means it holds the files under "/". This is also how I found the other set of devices under "/dev/snaptest-vg". You can also run the lvdisplay command. Here is its output for the logical volume of interest:

$ sudo lvdisplay
...
  --- Logical volume ---
  LV Path                /dev/snaptest-vg/root
  LV Name                root
  VG Name                snaptest-vg
  LV UUID                ldQt4x-SH2k-ndGi-140z-kMLI-vADZ-lpTFZb
  LV Write Access        read/write
  LV Creation host, time snaptest, 2016-08-20 13:34:54 -0700
  LV Status              available
  # open                 0
  LV Size                17.52 GiB
  Current LE             4485
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:8
...
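
One small thing before mounting: the mount point must exist. I use /mnt/disk below (the path is just my choice), so create it if needed:

sudo mkdir -p /mnt/disk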

The logical volume of interest can then be mounted as usual:

sudo mount /dev/snaptest-vg/root /mnt/disk

Now you have full access to the VDI volume from your host computer. Note that in most cases you will need sudo to make any changes on that partition.
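
In my case, the whole point of this exercise was to get rid of the broken sudoers file mentioned at the top, which at this point is just one command away (adjust the path to whatever you need to repair):

sudo rm /mnt/disk/etc/sudoers.d/timeout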

Cleaning Up Once Done

It is recommended that you clean up the mounted device once done, and especially before you run your VM again, because two systems writing to the same disk through two completely different interfaces is quite likely to cause problems and break your VDI disk.

If you don't want to unmount, only run the VM while you are not using the disk from the host: shut down the VM, do your additional edits, then restart your VM. Again, try to be very careful. (Having your host and your guest both access the same VDI may work, but I suspect there may be quirks... attempt such at your own risk.)

Just run the steps backward with the opposite commands:

sudo umount /mnt/disk
sudo lvchange -a n snaptest-vg
sudo kpartx -d /dev/nbd1
sudo qemu-nbd -d /dev/nbd1

It is important to deactivate the LVM volumes. The lvchange command with "-a n" does that for you. If you don't do that, kpartx gives you an error as follows:

device-mapper: remove ioctl on nbd1p5 failed: Device or resource busy

The name you use with lvchange is the group name. In the output of lvdisplay, it is actually the VG Name field (VG stands for Volume Group).

Note that the -d with kpartx doesn't delete the partitions on the disk, it just removes their entries from /dev/mapper and other areas.

Similarly, the -d on qemu-nbd just disconnects the device. It won't hurt your VDI file in any way.
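
If you want to double check that everything was released before starting the VM again, a quick look at the mapper entries and the nbd device should show nothing left of the mapping (just a sanity check, not a required step):

ls /dev/mapper/
lsblk /dev/nbd1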

Re: Mounting a VDI disk in your host system to edit the file ...

What I wrote worked for me. He! He! What you were trying to do is, I think, sufficiently different that it could fail compared to what I've done way back.

Today's implementation may have changed, I don't know, but I don't think it has changed much as far as the front end is concerned.

What I do on my website, though, is update when I try something again and the old examples do not work quite right. I don't tweak my drivers much, though...

Re: Mounting a VDI disk in your host system to edit the file ...

"These instructions are pretty old [...]"
"may not work [...]"
"I've not tested [...]"
"may just not be compatible [...]"

So... you didn't test anything... you didn't offer an alternative either.
Talk is cheap, player.

Re: Mounting a VDI disk in your host system to edit the file ...

These instructions are pretty old and may not work properly anymore. I've not tested qemu-nbd in a very long time... It may just not be compatible with the newer version of /dev.

Re: Mounting a VDI disk in your host system to edit the file ...

LVM refuses to mount/create directories in /dev. Apparently, after running qemu-nbd, then kpartx, it created two volumes, with identical UUIDs...

nbd1
├─nbd1p1 LVM2_me AxG2cR-lrn7-TcvS-YF2M-bc1T-fY2s-P1hu0m
└─nbd1p1 LVM2_me AxG2cR-lrn7-TcvS-YF2M-bc1T-fY2s-P1hu0m

I'm unable to mount the drive, because it does not create LVM mountable directories.