This guide assumes you have ZFS working on a recent Ubuntu Server release (18.04 or similar). The usual disclaimer applies: use at your own risk.
The default install of KVM, the hypervisor, does not support ZFS-backed storage volumes, but this guide will show you how to enable and use them.
In theory, nothing stops you from simply keeping your VM disk images on a ZFS filesystem and still enjoying the benefits of snapshots and potential bit-rot protection (assuming you are running a raidz or mirrored pool on a machine with ECC RAM).
Unfortunately, in that scenario you negate the usefulness of snapshots as soon as multiple VM disk images share one filesystem: rolling back one image rolls back all of them. One solution is to create a separate filesystem for each VM disk image; ZFS makes filesystem creation and deletion very easy. But you would still be introducing an extra filesystem layer for your data to traverse, as the VM writes to its disk image, which is in turn hosted on a ZFS filesystem. One benefit of this approach, however, is that you can simply take the VM disk image file and move it to a non-ZFS system with ease.
However, the much preferred approach, in our ZFS-centric view of the world, is to present the VM with a zvol, a ZFS-backed block device, to treat like a physical disk (instead of the usual VM disk image file). This should provide better performance while still letting you use ZFS send, as well as in-line compression, snapshots, and bit-rot protection. Being able to use ZFS send at the block-device level can also simplify migrating a workload to a different data center, for example, since zvols support incremental sends. If regular snapshot replication has already ensured that the bulk of the zvol at the remote location is up to date, this can significantly speed up migration in an emergency.
So let’s get to it.

The install
sudo apt-get install qemu-kvm libvirt-bin bridge-utils
# make sure libvirt supports ZFS:
sudo apt-get install libvirt-daemon-driver-storage-zfs
Let’s say KVM VMs are owned by the user kvmadm01:
sudo usermod -G libvirtd -a kvmadm01 # Note: since 18.04 new group is just libvirt
You can turn kernel same-page merging on or off according to your preferences in /etc/default/qemu-kvm. Also note that the huge pages and vhost_net switches are now deprecated, but can still be set this way. For example:
sudo vi /etc/default/qemu-kvm

KVM_HUGEPAGES=1
KSM_ENABLED=0
VHOST_NET_ENABLED=1
The general KVM/virsh command syntax is out of scope here. We will instead focus on the ZFS specific commands and procedures:
Defining the ZFS pool in virsh as a storage back-end
Assume we have some pools and want to use one of them in libvirt:
# zpool list
NAME       SIZE   ALLOC  FREE   FRAG  EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
filepool   1,98G  56,5K  1,98G  0%    -         0%   1.00x  ONLINE  -
test       186G   7,81G  178G   0%    -         4%   1.00x  ONLINE  -
Let’s take filepool and define it with libvirt, using virsh:
virsh
# now inside the virsh shell:
virsh # pool-define-as --name zfsfilepool --source-name filepool --type zfs
Pool zfsfilepool defined
# enable the back-end
virsh # pool-start zfsfilepool
Pool zfsfilepool started
virsh # pool-info zfsfilepool
Name:           zfsfilepool
UUID:           5d1a33a9-d8b5-43d8-bebe-c585e9456416
State:          running
Persistent:     yes
Autostart:      no
Capacity:       1,98 GiB
Allocation:     56,50 KiB
Available:      1,98 GiB
virsh #
As you can see, we specify the pool type, its source name (as it appears in the zpool list output), and a name for libvirt to use. We then need to start it with the pool-start command.
Let’s create a couple of volumes in our new pool.
virsh # vol-create-as --pool zfsfilepool --name vol1 --capacity 1G
Vol vol1 created
# let's create a second one
virsh # vol-create-as --pool zfsfilepool --name vol2 --capacity 700M
Vol vol2 created
virsh # vol-list zfsfilepool
Name   Path
------------------------------------------------------------------------------
vol1   /dev/zvol/filepool/vol1
vol2   /dev/zvol/filepool/vol2
Dropping a volume is also easy:
virsh # vol-delete --pool zfsfilepool vol2
Vol vol2 deleted
Uploading and downloading data
Ensure that the volume you are importing your image file into is at least as large as the image file.
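That size check can be sketched in plain shell. Here two regular files stand in for the volume and the image (volume.bin and myVM.img are hypothetical names); for a real zvol you would read its size with blockdev --getsize64 /dev/zvol/&lt;pool&gt;/&lt;vol&gt; instead:

```shell
# stand-ins for the real volume and image (sparse files, hypothetical names)
truncate -s 1G volume.bin
truncate -s 700M myVM.img

# compare sizes in bytes before uploading
vol_size=$(stat -c %s volume.bin)
img_size=$(stat -c %s myVM.img)

if [ "$img_size" -le "$vol_size" ]; then
    echo "OK: image fits into the volume"
else
    echo "ERROR: image is larger than the volume" >&2
    exit 1
fi
```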
Let’s upload an image to our new volume:
virsh # vol-upload --pool zfsfilepool --vol vol1 --file /home/kvmadm01/myVM.img
… and download:
virsh # vol-download --pool zfsfilepool --vol vol1 --file /home/kvmadm01/zfsfilepool_vol1.img
Note: if you check e.g. the md5 sum of the downloaded file against the uploaded original, they will differ, because the downloaded file has the size of the volume it was imported into (it is zero-padded at the end). If you trim the padding back to the original size, e.g. with truncate -r original.img downloaded.img, the checksums will match.
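You can see this padding effect with plain files, no zvol needed (file names here are hypothetical): padding a copy changes its checksum, and truncate -r shrinks it back to the reference file's size so the checksums match again.

```shell
# create a small "original" image
head -c 1000 /dev/urandom > original.img

# simulate the download: same data, zero-padded to the volume size
cp original.img downloaded.img
truncate -s 4096 downloaded.img   # pad with zeros, like a volume-sized download

md5sum original.img downloaded.img   # checksums differ

# shrink the download back to the original's size
truncate -r original.img downloaded.img
md5sum original.img downloaded.img   # checksums now match
```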
You can also import and export raw files without virsh, assuming on import that there is an existing zvol at least as large as the raw image being dd’d. To import:
dd if=your_raw_file.raw of=/dev/zvol/<pool>/<volume> bs=512K
To export:

dd if=/dev/zvol/<pool>/<volume> of=file.raw bs=512K
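The dd round trip can be sanity-checked with regular files standing in for the zvol (your_raw_file.raw and fake_zvol.bin are hypothetical names; conv=notrunc makes the stand-in file behave like a fixed-size block device):

```shell
# a regular file stands in for /dev/zvol/<pool>/<volume> here
head -c 1M /dev/urandom > your_raw_file.raw
truncate -s 2M fake_zvol.bin   # "volume" larger than the image

# import: write the raw image onto the stand-in volume
dd if=your_raw_file.raw of=fake_zvol.bin bs=512K conv=notrunc status=none

# export it again; note it comes back at the volume's full size
dd if=fake_zvol.bin of=file.raw bs=512K status=none

# the first 1M of the export matches the original image
cmp -n 1048576 your_raw_file.raw file.raw && echo "match"
```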
If you want to convert the raw file to another format, qemu-img can do it (-p shows progress):
qemu-img convert -p -f raw -O qcow2 file.raw file.qcow2
You can also use zfs get all to check which properties, e.g. LZ4 compression, were correctly inherited by your VM zvols.
This should be enough to get you started. Good luck!