KVM with ZFS support

UPDATE: This post is now out of date. A new version for Ubuntu 22.04 will come soon. Once it is up, I will also link to it from here.

This guide assumes you have ZFS working on a recent version of Ubuntu server. The usual disclaimer applies. Use at your own risk. This post applies to ZFS on Ubuntu 18.04 and similar.

The default install of KVM, the hypervisor, does not support ZFS-backed storage volumes, but this guide will show how to enable and use them.

In theory there is nothing stopping you from simply keeping your VM disk images on a ZFS filesystem and still enjoying the benefits of snapshots and potential bit-rot protection (assuming you are running a raidz or mirrored pool on a machine with ECC RAM).

Unfortunately, in the above scenario you would negate the usefulness of the snapshots as soon as you have multiple VM disk images on that file system, because rolling back one image would roll back all of them. One solution is to create a separate file system for each VM disk image; ZFS makes file system creation and deletion very easy. But you would still be introducing one extra file system layer for your data to traverse as your VM writes to its disk image, which is in turn hosted atop a ZFS file system. One benefit of this approach, however, is that you can simply take your VM disk image file and move it elsewhere to a non-ZFS system with ease.
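
A rough sketch of that per-image layout (the pool and dataset names here are only placeholders):

zfs create -o compression=lz4 tank/vm-images          # parent file system for all image files
zfs create tank/vm-images/vm01                        # one file system per VM disk image
zfs snapshot tank/vm-images/vm01@pre-upgrade          # rolling this back only affects vm01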

However, the much preferred approach, in our ZFS-centric view of the world, is to present a zvol, a ZFS-backed block device, to the VM to treat like a physical disk (instead of the usual VM disk image file). This should provide better performance, and it also lets you use ZFS send functionality, as well as in-line compression, snapshots, and bit-rot protection. Being able to use ZFS send at the block device level can also simplify the workload migration process to a different data centre, for example, since zvols support incremental sends. This can significantly speed up migration in an emergency if regular snapshot replication has already ensured that the copy of the zvol in the remote location is mostly up to date.
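
As an illustration of the incremental-send idea (the zvol, snapshot names, remote host and target dataset below are all hypothetical):

zfs snapshot tank/vm01-disk@sunday        # older snapshot, already replicated to the remote side
zfs snapshot tank/vm01-disk@monday        # latest state just before the migration
zfs send -i tank/vm01-disk@sunday tank/vm01-disk@monday | ssh backup-host zfs receive tank/vm01-disk   # only the delta travels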

So let's get to it: The install

sudo apt-get install qemu-kvm libvirt-bin bridge-utils
# make sure libvirt supports zfs
sudo apt-get install libvirt-daemon-driver-storage-zfs
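
If you want to double-check that the ZFS storage driver actually made it onto the system, querying the package status should be enough:

dpkg -l libvirt-daemon-driver-storage-zfs   # an 'ii' status means it is installed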

Let's say KVM VMs are owned by the user kvmadm01:

sudo usermod -G libvirtd -a kvmadm01 # Note: since 18.04 the group is just libvirt
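
On 18.04 itself that would therefore be the following (group name as per the note above); id lets you confirm the membership afterwards:

sudo usermod -a -G libvirt kvmadm01
id kvmadm01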

You can turn kernel same-page merging off or on in /etc/default/qemu-kvm, depending on your preferences. Also note that the huge pages and vhost_net switches are now deprecated but can still be set that way. Thus, as an example:

sudo vi /etc/default/qemu-kvm

KVM_HUGEPAGES=1
KSM_ENABLED=0
VHOST_NET_ENABLED=1

The general KVM/virsh command syntax is out of scope here. We will instead focus on the ZFS specific commands and procedures:

Defining the ZFS pool in virsh as a storage back-end

Assume we have some pools and want to use one of them in libvirt:

# zpool list
NAME       SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
filepool  1,98G  56,5K  1,98G     0%         -     0%  1.00x  ONLINE  -
test       186G  7,81G   178G     0%         -     4%  1.00x  ONLINE  -

Let's take filepool and define it with libvirt. This can be done with the following virsh commands:

virsh
virsh # # now inside virsh shell:
virsh # pool-define-as --name zfsfilepool --source-name filepool --type zfs
Pool zfsfilepool defined
# enable the back-end
virsh # pool-start zfsfilepool
Pool zfsfilepool started
virsh # pool-info zfsfilepool
Name:           zfsfilepool
UUID:           5d1a33a9-d8b5-43d8-bebe-c585e9456416
State:          running
Persistent:     yes
Autostart:      no
Capacity:       1,98 GiB
Allocation:     56,50 KiB
Available:      1,98 GiB
virsh #

As you can see, we specify the type of the pool, its source name (as seen in the zpool list output), and the name libvirt will use for it. We also need to start it using the pool-start command.
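
Also note the Autostart: no in the pool-info output above; if you want the pool to be started automatically after a reboot, you can mark it accordingly:

virsh # pool-autostart zfsfilepool
Pool zfsfilepool marked as autostarted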

Volume creation

Let's create a couple of volumes in our new pool.

virsh # vol-create-as --pool zfsfilepool --name vol1 --capacity 1G
Vol vol1 created
# let's create one under a subvolume
virsh # vol-create-as --pool zfsfilepool --name vol2 --capacity 700M
Vol vol2 created
virsh # vol-list zfsfilepool
 Name                 Path
------------------------------------------------------------------------------
 vol1                 /dev/zvol/filepool/vol1
 vol2                 /dev/zvol/filepool/vol2
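
Outside of virsh these are ordinary zvols, so the usual ZFS tooling sees them too; for example:

zfs list -t volume -r filepool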

Dropping a volume is also easy:

virsh # vol-delete --pool zfsfilepool vol2

Vol vol2 deleted

Uploading and downloading data

Ensure that the volume you are importing your img file into is at least the same size or larger than the img file.
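
You can check a volume's size from within virsh before uploading; vol-info reports both capacity and current allocation:

virsh # vol-info --pool zfsfilepool vol1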

Let's upload an image to our new volume:

virsh # vol-upload --pool zfsfilepool --vol vol1 --file /home/kvmadm01/myVM.img

... and download:

virsh # vol-download --pool zfsfilepool --vol vol1 --file /home/kvmadm01/zfsfilepool_vol1.img

Note: if you check e.g. the md5 sum of the downloaded file against the uploaded original, the results will differ, because the downloaded file is padded to the same size as the volume it was imported into. However, if you trim the trailing zeros, e.g. with truncate -r using the original file as the size reference, the checksums will match.
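
A minimal sketch of that check, using the file names from the upload and download examples above:

md5sum myVM.img zfsfilepool_vol1.img        # sums differ: the download is padded to the volume size
truncate -r myVM.img zfsfilepool_vol1.img   # shrink the download to the original file's size
md5sum myVM.img zfsfilepool_vol1.img        # sums should now match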

You can also import and export raw files without virsh, assuming that on import there is an existing zvol of equal or larger size than the raw image being dd’d. To import:

dd if=your_raw_file.raw of=/dev/zvol/<pool>/<volume> bs=512K

To export:

dd if=/dev/zvol/<pool>/<volume> of=file.raw bs=512K
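
Before an import, a quick way to confirm the target zvol is at least as large as the raw file is to compare their sizes (placeholder names again):

ls -l your_raw_file.raw                       # size of the raw image in bytes
zfs get -H -o value volsize <pool>/<volume>   # size of the target zvol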

If you want to convert the raw file to another format:

-p shows progress

qemu-img convert -p -f raw -O qcow2 file.raw file.qcow2
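
Going the other way, for example to take an existing qcow2 image and dd it onto a zvol as described above, convert it back to raw first:

qemu-img convert -p -f qcow2 -O raw file.qcow2 file.raw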

You can also use zfs get all to check which properties, e.g. LZ4 compression, were correctly inherited by your VM zvols.
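
For example, to check just the compression settings on one of the volumes created earlier:

zfs get compression,compressratio filepool/vol1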

This should be enough to get you started. Good luck!