The Linux Page

qemu to run VMs

Introduction

I've been a big fan of VirtualBox for years. But with the last two iterations, I have had more and more issues. So now I've been looking into using qemu directly.

Pros of VirtualBox

  • Simple to Use
  • Nearly all settings accessible from the interface
  • Network extensions that allow for setups not possible with qemu
  • Easy to open/close the UI of a VM

Cons of VirtualBox

  • The mouse gets stuck once in a while [1]
  • On Ubuntu 24.04, once all the RAM is used by the OS for caches (i.e. I have 512Gb and use between 2% and 5% of the memory, the rest is caches...), VirtualBox stops working
  • When I upgraded my OS, the one drive I had encrypted could somehow no longer be read by VirtualBox; it claims the drive is not encrypted. Note that I used VirtualBox version 7.x both before and after the upgrade, so I don't understand why it would fail.

Using QEmu Instead

Now, I'll do everything on the command line. There is a GUI manager, but I don't like it because many of the options I'd like to use are not available there. Also, I always wanted to use the VirtualBox command line but never really got into it. So this is my chance to do so...

Glossary

Some of the terms used by qemu are different from what I'd expect. Here are a few that you may want to be aware of:

  • domain — virtual machine (VM); I use VM and domain interchangeably here
  • virtual CPU (vcpu) — qemu creates one thread per virtual CPU and each of those threads runs on a real CPU of the host; in other words, there is nothing all that virtual about those CPUs
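
You can actually observe this from the host. For example, virsh vcpuinfo lists each virtual CPU of a running domain along with the real CPU it is currently running on (buildserver is the name of the domain created later on this page):

$ virsh vcpuinfo buildserver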

Create a VM

First, we have to create a VM. Note that I like to have what I call a "Clean Server" of each version of Ubuntu. That way, I can just upgrade that version, clone it, and start with a fresh server very quickly without having to re-install everything from a cdrom/Internet.

virt-install \
    --osinfo detect=on,require=on \
    --metadata name=ubuntu2404,title="Ubuntu 24.04" \
    --memory 8192 \
    --vcpus 4 \
    --network bridge=virbr0 \
    --disk path=/mnt/vm-storage-space/clean/ubuntu2404.img,size=50,sparse=yes,format=raw \
    --cdrom /path/to/iso/ubuntu-24.04.2-live-server-amd64.iso

The "detect=on" parameter of the --osinfo option is used to not have to define the OS for each type of VM. The "require =on" is to make sure that if the detection fails, the whole command fails.

The --metadata option defines the VM name. Don't use spaces or other special characters; I actually had issues even with underscores. The title, on the other hand, can include spaces. Make sure to use quotes.

The --memory option defines the memory size of the VM, 8Gb here. Note that I tried to use a size with a unit (i.e. "8GiB") and it did not work. You have to use a plain number of megabytes.

WARNING: as we will see below, the virsh setmem command expects Kb rather than Mb and, again, a size with a unit does not work there.

The --vcpus defines the number of CPUs to use within the VM.

The --network option defines the name of the bridge to use. By default, the virtual system creates the virbr0 bridge, so we can use that here. At the moment, I'm not too sure why you'd want to use a different bridge. If you really need to create so many VMs that you'd need multiple bridges... you probably have a large server and this may not be the documentation you need to read for that...

The --disk path defines a path to a .img file. If the file does not exist yet, it will be created. The size parameter defines how many Gb the disk will have. The sparse flag defines whether to allocate the whole file immediately. Be careful, since not allocating everything up front (sparse=yes) can cause several issues:

  • the installation may become too slow and fail with various I/O errors;
  • if you end up creating hundreds of VMs, they all slowly grow their disks, which in the end can try to eat up more disk space than your physical disk offers (i.e. the disk first takes up a few Kb; once the OS is installed, it takes around 10Gb; once you install and run services on that VM, logs and data accumulate and it eventually uses the entire 50Gb, or at least tries to);
  • the blocks of a sparse file get allocated later and thus in different locations on the drive; for an SSD, that's not an issue, but on an HDD the resulting fragmentation can cause slowness, since reading the data requires many more head movements.
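
To check how much space a sparse image really uses, compare its apparent size with its allocated size; qemu-img (part of the qemu tools) reports both at once, and du shows the difference as well:

$ qemu-img info /mnt/vm-storage-space/clean/ubuntu2404.img
$ du -h --apparent-size /mnt/vm-storage-space/clean/ubuntu2404.img
$ du -h /mnt/vm-storage-space/clean/ubuntu2404.img

With sparse=yes, the last command reports much less than 50Gb until the guest actually writes data.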

Finally, the --cdrom option defines the ISO to boot from. In general, this is a CD image that has some version of Linux or some other OS to be installed. Here I show Ubuntu 24.04 server. I like to install the server version because it generally installs no UI components and thus makes for a slimmer Linux base image.

Weird Bug? Unexpected Disk Size

When I created the VM, I asked the installer to use the entire drive. Yet once it was all done, the drive was about 28Gb instead of nearly 50Gb, so it used roughly 50% instead of the entire drive. (Apparently, the installer's default LVM layout only assigns part of the volume group to the root logical volume.) On my following attempt, I paid closer attention and noticed that you're given the ability to change the size before proceeding with the installation. Doing so at that time allows you to avoid having to run the following commands.

To fix the issue, I had to extend the logical volume and then resize the file system:

$ sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
$ sudo resize2fs /dev/ubuntu-vg/ubuntu-lv

Note that since I used LVM and ext4, I could do that on the live system directly. Quite practical in this situation. (Note that ext4 can only enlarge a mounted partition, not shrink it; to shrink, you'd want to boot from the ISO above and do the resizing with a tool found there, such as parted.)
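
To confirm the result, check the space left in the volume group and the size of the root file system (assuming the default Ubuntu names):

$ sudo vgs ubuntu-vg
$ df -h /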

Renaming the Drive

As a side note to myself: keep in mind that if you want to be able to mount the image drives later, you may want to give them names that won't clash with another drive. So the ubuntu-vg and ubuntu-lv names should be changed. The LVM system uses those names in its mapper, so if two distinct drives use the same names, it won't work properly.

The rename itself is rather easy, but since we are talking about renaming the boot drive, care must be taken. There are several files to update, and which ones may vary by version of Ubuntu.

Since in most cases you want to follow the steps in the order given, I wrote them in the order I think is safest/easiest to recover from in case you lose power while working on this.

1) Determine the name of the existing boot drive

I think the simplest for this is to use:

$ df /
Filesystem                        1K-blocks    Used Available Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv  49353520 7884388  39188220  17% /

This gives you the group and volume names of the root partition.

2) Decide on new names

In my case, I used vgubuntu2404 for the volume group. That's the name of the computer (the hostname) with a "vg" prefix. I think that, long term, this makes it much easier to know what's what.

Similarly, I named the logical volume lvubuntu2404. For this example, though, I will use lvroot instead. I'm not too sure which makes the most sense here. Having the volume group name differ on each computer already allows me to mount any number of them simultaneously.

Now we can proceed with the remainder of the process.

Note: you probably want to avoid dashes in those names because of the double dashes you otherwise end up with in the mapper names. Although if you don't mind those, it's not that bad.

3) Edit the fstab file and rename the drives there if present

In newer versions of Ubuntu, I have not seen the LVM drive names here. Instead, these are converted to a UUID or a drive identification label. (A UUID is not the best choice for the root drive; it can cause issues.) Still, if the existing name is present in this file, you need to rename the path to the new name.

For example, the default in Ubuntu 24.04 would be:

$ ls -l /dev/mapper
lrwxrwxrwx 1 root root       7 May 17 04:10 /dev/mapper/ubuntu--vg-ubuntu--lv -> ../dm-0

If you see such a name in your /etc/fstab file, change it there. From that point on, avoid any kind of mount or umount command.
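
For example, assuming your fstab still uses the mapper path (a hypothetical entry; as mentioned, newer Ubuntu versions use a UUID instead), the line:

/dev/mapper/ubuntu--vg-ubuntu--lv / ext4 defaults 0 1

would become:

/dev/mapper/vgubuntu2404-lvroot / ext4 defaults 0 1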

Note: What's up with the double dashes? This is the way the people taking care of the LVM system decided to escape the dash character. A single dash in the mapper name separates the VG (Volume Group) name from the LV (Logical Volume) name. If there are two dashes, they stand for a dash that is part of a name (i.e. ubuntu-vg and ubuntu-lv are the actual names).

4) Edit grub.cfg (even if it tells you not to)

Now, in a similar manner, the grub system needs to know the exact name of the disk to use as the root drive. Many things depend on it, so you have to edit the configuration file. Note that this has to be done manually because, once we do the renaming, the device can no longer be found under its old name: the OS does not update its tables dynamically. It will do that on the next reboot.

$ sudo vim /boot/grub/grub.cfg

WARNING: on machines other than Ubuntu, or if you use EFI, that file may be in a different folder. I suggest you search with the following command to make sure that's the only configuration file you need to edit:

$ find /boot -name grub.cfg

As with the /etc/fstab file, search and replace all instances of the old name with the new name. For example, if you created an Ubuntu domain for 24.04, maybe you want to rename the group to vgubuntu2404 and the volume to lvroot (although it's less important to rename the logical volume). So we look at the grub configuration file and find lines that look like so:

linux /vmlinuz-6.8.0-60-generic root=/dev/mapper/ubuntu--vg-ubuntu--lv ro

and we change them to:

linux /vmlinuz-6.8.0-60-generic root=/dev/mapper/vgubuntu2404-lvroot ro
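
If there are many occurrences, a sed command can do the replacement in one go. A sketch, assuming the names used above (keep a backup and double check the result before rebooting; note that we replace the escaped, double-dash forms):

$ sudo cp /boot/grub/grub.cfg /boot/grub/grub.cfg.bak
$ sudo sed -i 's/ubuntu--vg/vgubuntu2404/g; s/ubuntu--lv/lvroot/g' /boot/grub/grub.cfg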

5) Rename the drive

Now that /etc/fstab and /boot/grub/grub.cfg were edited, we can rename the drive.

$ sudo vgrename ubuntu-vg vgubuntu2404
$ sudo lvrename vgubuntu2404 ubuntu-lv lvroot

Note that lvrename takes the volume group name first and, since we just renamed it, that's the new name. At this point, things are flaky. Commands that try to do something with the root drive at a low level may fail if they try to use the old name. To fix the issue we'll want to reboot, but first, you may want to check a few more files.

6) Other files that may need editing

On my end, I did not have to edit those files, but you still want to check them: if they exist, it is much safer to make sure they include the correct names.

  • /etc/default/grub

This file may include the boot device path in the rd.lvm.lv=... and root=... variables. We do not want to run the update-grub2 or grub2-mkconfig commands at the moment, but when that happens later, the correct name is going to be really important. If the name is not defined here, grub will probe for it with grub-probe.
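
If those variables are present, after the rename they would need to look something like this (a hypothetical example; on a default Ubuntu install, GRUB_CMDLINE_LINUX is usually empty):

GRUB_CMDLINE_LINUX="rd.lvm.lv=vgubuntu2404/lvroot root=/dev/mapper/vgubuntu2404-lvroot"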

  • /etc/initramfs-tools/conf.d/resume

I'm not too sure what this file does. I would imagine that it allows for resuming after hibernation. I never use that capability because it never worked properly for me. Also, my main system runs 24/7 since it is a server and actually serves websites.

But I would imagine that to resume from hibernation, you also need to know which drive is the root drive. So fixing the name here would allow you to continue to use that feature.
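
If the file exists, it holds a single RESUME=... variable. Should it reference the old mapper name (a hypothetical example; on Ubuntu it often contains a UUID or RESUME=none instead), you would change:

RESUME=/dev/mapper/ubuntu--vg-ubuntu--lv

to:

RESUME=/dev/mapper/vgubuntu2404-lvroot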

7) Verify everything

Now, I suggest you go back through steps 1 through 6 and make sure that everything looks right. For the renaming, verify that the new names are what you want using the following:

$ sudo lvdisplay
  --- Logical volume ---
  LV Path                /dev/vgubuntu2404/lvroot
  LV Name                lvroot
  VG Name                vgubuntu2404
  ...

8) Verify again

Just in case, make double sure. It's easier right now than through a live boot where you'll have to mount the drives, do fixes with vi, try three or four times...

9) Reboot

Okay, now it's time to reboot. Hopefully you triple checked everything above to make sure that the names were changed everywhere.

Remember that your /etc/fstab is likely using a UUID now, in which case no changes were necessary in that file.

$ sudo init 6

10) Test that it all works as expected

Once rebooted successfully, you should be able to see the new names when you use the lvdisplay and df commands. If that's the case, then grub can now be updated as normal. So let's test to make sure:

$ sudo update-grub2

If you have a resume file as mentioned in step 6 above, then you also need to update the initramfs. Without that step, the resume function won't work right, as it will most certainly attempt to access the drive using the old name.

$ sudo update-initramfs -c -k all

WARNING: the -k all flag is dangerous in that if it fails midway, you won't have a rescue version of the kernel to boot from. However, in this case, we really do want to update all versions since without this update, it won't work (i.e. the older rescue versions would try to mount the root drive using the old name).
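
If you want to be extra careful, you could first regenerate the initramfs of the currently running kernel only, check that it completes without errors, and then run the -k all version above:

$ sudo update-initramfs -u -k $(uname -r)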

If you intend to use the resume function, I would consider rebooting once more, just in case. I think that the initramfs update would be available as expected, but I would not be 100% confident of such.

Cloning

Once we have a clean server VM, we are ready to clone it for our own purposes. In my case, I'm looking at creating a build server.

But first, make 100% sure that the machine being cloned is shut down.

From a console inside the clean server to be cloned, do:

$ sudo init 0

Wait for that domain to be shut down completely. The shell closes pretty much immediately, so that's not a good signal in that regard. To monitor the state, use the virsh list command with its --all command line option (see the List Available VMs section below for details).
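
You can also query the state of a single domain directly; once it is fully stopped, the command prints "shut off":

$ virsh domstate ubuntu2404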

Now we're ready to run the next command, the actual cloning. The --file option is used to specify the source being cloned:

$ virt-clone \
    --original ubuntu2404 \
    --name buildserver \
    --file /mnt/vm-storage-space/servers/buildserver.img

The virt system automatically copies the image file for you. Only the destination is indicated; the source is not necessary since --original tells the command where to find it.

WARNING: if your source has multiple drives, then the virt-clone command needs one --file option per drive.

Note: if the source drive is sparse, then the copy will also be sparse. However, you cannot copy a drive which isn't sparse as a sparse drive with this function. You can later run a command that finds empty blocks and makes the file sparse if you'd like (see below). Keep in mind that a sparse file on an HDD can end up being heavily fragmented.
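
I have not tried it myself, but one way to punch holes back into such a copy is fallocate from util-linux, which deallocates blocks that are full of zeroes (run it only while the domain is shut off; the path is the one from the clone example above):

$ sudo fallocate --dig-holes /mnt/vm-storage-space/servers/buildserver.img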

Once you have created and started the new domain, you may want to change the hostname. This procedure has changed over time and requires editing three files and a reboot. I have details about it on this page: Changing your Linux computer hostname on Ubuntu.

Setup the Host with a Static IP

For my needs, I want simple scripts and settings that let me access the VM with an IP address that does not change between runs. To force a specific IP, we can use the commands below.

1) Supported IP Addresses

If you did not change the libvirt network setup, it is named "default". You may later create additional networks, but before you do so, you may want to consider that one single interface gives you the ability to create 253 virtual machines, each with its own static IP address. That should be sufficient on any one computer [2].

$ virsh net-edit default

This gives you access to the XML of the network named default.

Here is an example:

<network>
  <name>default</name>
  <uuid>2c1cc774-ec7e-41fa-818c-26d25b4ee32c</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:3a:f8:69'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
      <host mac='52:54:00:17:33:e4' name='buildserver' ip='192.168.122.224'/>
    </dhcp>
  </ip>
</network>

From that file, we see that the supported range is from 192.168.122.2 to 192.168.122.254 (see the <range> tag).

2) MAC Address of the Interface

Start the VM if it's not already running and run:

$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:17:33:e4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.224/24 metric 100 brd 192.168.122.255 scope global dynamic enp1s0
       valid_lft 40sec preferred_lft 40sec
    inet6 fe80::5054:ff:fe17:33e4/64 scope link
       valid_lft forever preferred_lft forever

This prints all the interfaces and their MAC addresses. For my buildserver, I see that I have interface enp1s0 with MAC address 52:54:00:17:33:e4.

3) Actually change the IP address of that host

$ virsh net-update default add ip-dhcp-host \
      "<host mac='52:54:00:17:33:e4' name='buildserver' ip='192.168.122.224'/>" \
       --live --config

In case you make a mistake, refer to point (1) above to edit the file.

It is also possible to delete and re-add the information, although I never tried that; I think editing is easier.
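
For reference, a sketch of that delete command (untested on my end; the XML must match the existing entry exactly):

$ virsh net-update default delete ip-dhcp-host \
      "<host mac='52:54:00:17:33:e4' name='buildserver' ip='192.168.122.224'/>" \
       --live --config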

4) Make it work

Just adding a new host won't make it available. This is because the libvirt daemon is not aware of the change. At least, so far, I have not been able to make it work without restarting the whole thing.

I use those commands to restart the network:

$ virsh net-destroy default
$ virsh net-start default

Note that this will kill your existing connections. So you want to shut down ALL your domains, make the changes to the network, then restart your domains.

Note: at the moment, I am fine with this because I do not have any vital VMs running. I use two or three, shutting them down is easy, and I really only need to do so when creating a new test machine, which is fairly rare. If I find a better way, I'll try to remember to add the info here too.

Setup Memory and CPUs

These settings are part of the domain XML, which can be tweaked with the following commands:

$ virsh setmaxmem buildserver 16777216 --config
$ virsh setmem buildserver 16777216 --config
$ virsh setvcpus buildserver 8 --config --maximum
$ virsh setvcpus buildserver 8 --config

If you want to increase the amount of memory compared to what you set up when creating the VM, you first need to change the maximum amount of memory with the setmaxmem command. Note that the size must be in Kb (i.e. 16777216 Kb ÷ 1024 ÷ 1024 = 16Gb).

The setmem command sets the amount of memory the VM will receive. Keep in mind that these options do not change a live system unless you use the --live command line option. However, I would suggest you do not mess around with the memory & CPUs of a live VM.

The setvcpus command can be used to change the number of CPUs in a VM. Note that the bigger that number gets, the more threads the qemu environment uses to simulate all of those processors.

The number of CPUs must be lower than or equal to the maximum. If you are increasing the number of CPUs, you always define the maximum first. If you are decreasing it, you first decrease the current number of CPUs, then the maximum.
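
For example, to go from 8 CPUs down to 2, you reverse the order used above: current count first, then the maximum:

$ virsh setvcpus buildserver 2 --config
$ virsh setvcpus buildserver 2 --config --maximum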

Note: I suppose you noticed the discrepancy; for memory we have two separate commands (setmaxmem and setmem) and for CPUs we have a command line option (--maximum) to distinguish between both the maximum and current number of items.

Start Clone

Once the clone is ready, you can start the server like so:

$ virsh start buildserver

It should start pretty quickly. See View Domain Display for info on how to find out the IP address of the VM the first time you start it.
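
If your libvirt is reasonably recent, you can also ask virsh for the address assigned through DHCP without opening the display at all:

$ virsh domifaddr buildserver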

Once started, verify that you have the correct amount of memory and disk space:

$ free
               total        used        free      shared  buff/cache   available
Mem:        16375736     2211504    12457564       44112     2070064    14164232
Swap:        4194300           0     4194300

$ df
Filesystem                        1K-blocks     Used Available Use% Mounted on
tmpfs                               1637576     1188   1636388   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  49353520  7884272  39188336  17% /
tmpfs                               8187868     1052   8186816   1% /dev/shm
tmpfs                                  5120        0      5120   0% /run/lock
/dev/vda2                           1992552   189280   1682032  11% /boot
/dev/vdb1                         102625208 27549072  69816924  29% /mnt/data-drive
tmpfs                               1637572       12   1637560   1% /run/user/1000

From the above, free should tell us we have 16Gb (16375736 Kb is slightly under 16Gb because the kernel reserves some memory for itself).

Similarly, the df command should show that the drive is around 49Gb.

View Domain Display

It is possible to check out the display of a domain. By default, no window is opened for it; you have to start a viewer explicitly:

$ virt-viewer buildserver

Note that this command blocks until you close the window.

List Available VMs

The VMs you create are added to a list which can be displayed using:

$ virsh list --all
 Id   Name          State
------------------------------
 13   buildserver   running
 -    ubuntu2404    shut off

The --all option shows the active (a.k.a. running) domains as well as the inactive ones.

  • 1. I've not worked with qemu long enough to know whether the mouse also gets stuck with it... so that may change later.
  • 2. According to Ubuntu, you can run a server with 512Mb of RAM and 1 CPU. So on a machine with 256 CPUs and 2Tb of RAM, you could run about 4,000 such VMs... maybe not smoothly, but hypothetically it would be possible... I would imagine, though, that if you have such a machine, you're probably not reading this page to learn how to run VMs on it.