How to store your Qubes OS qubes in a ZFS pool
Combine the compartments of the most secure operating system with the safest file system.
Did you know that, as of today, there's a new storage driver for Qubes OS? Yes, that's right: a ZFS-backed qube storage driver will be available in Qubes OS 4.2 as a storage option for your qubes.
Why this?
So why would you want to store your qubes in ZFS? We cover ZFS's advantages in the previous guide but, to make it short, your qubes will be safer and gain efficiency in a ZFS pool. This guide will help you reap those benefits by moving your qubes to a brand new storage device backed by ZFS.
But what about performance? Isn't ZFS higher-overhead?
If there's a performance hit, I can't feel it. Although anecdotal, here I share two timed runs of qube start and stop. The first is in a standard file-backed pool (both qube and its template in the pool):
[root@dom0 rudd-o]# time qvm-start shopping-tvl ; time qvm-shutdown --wait shopping-tvl
real 0m12.822s
user 0m0.047s
sys 0m0.014s
real 0m2.568s
user 0m0.073s
sys 0m0.021s
Here's the same timed run, but this time the qube and its template are backed by the ZFS driver:
[root@dom0 rudd-o]# time qvm-start shopping-tvl ; time qvm-shutdown --wait shopping-tvl
real 0m12.571s
user 0m0.048s
sys 0m0.017s
real 0m3.073s
user 0m0.071s
sys 0m0.028s
Paradoxically to some, ZFS compression often makes disk-intensive operations faster, so if you have a rotational disk and disk-intensive workloads, your qubes may end up running faster than using the built-in storage pool drivers.
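If you're curious, once your pool exists (created below), you can see what compression is buying you. This assumes you enabled compression on the pool, e.g. as covered in the previous guide:
# show the compression algorithm in use and the achieved ratio
zfs get compression,compressratio laptop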
Assumptions
This guide assumes:
- You have a Qubes OS 4.2 system. 4.2 is the first release that comes with the necessary storage driver.
- You want to move all of your qubes to a ZFS pool.
- Your primary disk is /dev/sda (typically partitioned, though not necessarily, as one EFI partition, one boot partition, and then either the root file system or an LVM physical volume).
- You have another disk, /dev/sdb, that will contain your ZFS pool and has enough storage for all your qubes (tip: you can later pivot to this disk for your whole OS).
- The commands will all be typed in a dom0 terminal window, and will all be run as root.
Some of these assumptions are mandatory; all of them are useful for understanding the examples below.
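To confirm your disks match these assumptions, a quick check in a dom0 terminal helps:
# list both disks with their partitions, sizes and mount points
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sda /dev/sdb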
Install required software
- Your dom0 system must have the zfs and zfs-dkms packages installed and working properly. The previous guide on ZFS and Qubes OS explains this in detail.
- Your dom0 system must also have rsync installed.
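A quick way to verify all three are in place before proceeding (package names as used in the previous guide):
# confirm the packages are installed and the ZFS module loads
rpm -q zfs zfs-dkms rsync
modprobe zfs && zfs version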
Prepare the disk
In preparation for later pivoting the entirety of your setup to ZFS, you should reserve some unpartitioned space on your new disk, equal in size to the partitions on your primary disk. The following procedure takes that into account. If you don't plan to do so, you can alter the procedure.
In detail:
- Using fdisk, find out the size and type of /dev/sda1 and /dev/sda2.
  - In Qubes OS, these partitions contain the EFI system partition and the standard Linux /boot partition.
- Create two partitions equivalent to /dev/sda1 and /dev/sda2 in /dev/sdb.
- Create a third partition /dev/sdb3 comprising the whole of the remaining unpartitioned space (see the sfdisk sketch after this list).
- Clone the data from /dev/sda1 and /dev/sda2 to /dev/sdb1 and /dev/sdb2, respectively:
mkfs.vfat /dev/sdb1
mkdir -p /tmp/x
mount /dev/sdb1 /tmp/x
rsync -vaHAXSP /boot/efi/ /tmp/x
umount /tmp/x
mkfs.ext4 /dev/sdb2
mount /dev/sdb2 /tmp/x
rsync -vaxHAXSP /boot/ /tmp/x
umount /tmp/x
- Format the last partition to be encrypted, then open it:
cryptsetup luksFormat --type=luks2 --align-payload=4096 /dev/sdb3
cryptsetup luksOpen /dev/sdb3 luks-`blkid -s UUID -o value /dev/sdb3`
  - Use your boot disk encryption password when prompted for a password!
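For reference, here's one way to script the partition-creation steps above with sfdisk. This is a sketch with hypothetical sizes; match the first two partitions to what fdisk reported for /dev/sda1 and /dev/sda2:
# example only: 600M EFI + 1G boot + remainder for ZFS; adjust sizes!
sfdisk /dev/sdb <<'EOF'
label: gpt
size=600MiB, type=U
size=1GiB, type=L
type=L
EOF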
Now you have a pool device, encrypted and open, ready to create a pool.
One more step — you must add the encrypted device UUID to /etc/crypttab:
dev=`blkid -s UUID -o value /dev/sdb3`
echo luks-$dev UUID=$dev none discard >> /etc/crypttab
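You can double-check that the mapping is open before building a pool on top of it:
# the luks-<UUID> mapping must be active for pool creation to work
cryptsetup status luks-`blkid -s UUID -o value /dev/sdb3`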
Create the ZFS pool
Simply:
sudo zpool create laptop /dev/disk/by-id/dm-uuid-*-luks-`blkid -s UUID -o value /dev/sdb3`
sudo zfs set mountpoint=none laptop
# laptop is the name of the pool
Your pool should now be available. Verify with zpool status.
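The output should look roughly like this (the device name will carry your LUKS UUID):
  pool: laptop
 state: ONLINE
config:
        NAME                               STATE     READ WRITE CKSUM
        laptop                             ONLINE       0     0     0
          dm-uuid-CRYPT-LUKS2-...-luks-... ONLINE       0     0     0
errors: No known data errors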
Add your ZFS storage to a new Qubes pool
qvm-pool add -o container=laptop/qubes zfs zfs
# The first zfs is the name of the Qubes pool.
# The second zfs is the name of the storage driver.
# laptop/qubes is the name of the dataset within the
# ZFS pool laptop, auto-created to hold the entire
# qube volume hierarchy
# Confirm it has been created with
qvm-pool
Qubes now knows where to store its qubes in ZFS when requested.
Default all new VM creation to the zfs pool
You are almost ready to begin moving qubes to the new pool. One final detail, however, is missing: we are going to make this pool the default for all newly created storage volumes. Here's how:
qubes-prefs -s default_pool zfs
And with that, your system is now ready.
Pivot!
You'll perform this set of steps for each one of your qubes. Take an inventory of your qubes (qvm-ls) before beginning. In the end, you should be left with the exact same listing.
You'll be getting some pivoting help here. I wrote a shell script that cleanly migrates qubes; a sketch of it appears below the ordering notes. Take this script to your dom0, put it in a file /usr/local/bin/qvm-migrate, and make the file executable. You're going to use the qvm-migrate script to migrate your qubes in a certain order:
- Standalones first.
- Templates next.
- AppVMs last.
- Skip the dom0!
For each one of your qubes, you will first run qvm-migrate <qube> <qube>-zfs; that is, if you are migrating sys-usb, you'll run qvm-migrate sys-usb sys-usb-zfs. Then, if the process was successful, you will pivot back to the original name. E.g., if you were migrating sys-usb, you'll run qvm-migrate sys-usb-zfs sys-usb.
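In case you don't have the script at hand, here's a minimal sketch of what qvm-migrate can look like. Treat it as an approximation, not the exact script: it covers only the basic shutdown-clone-remove flow, and it assumes you reassign template/netvm references yourself first, as described in the step-by-step section below.
#!/bin/bash
# qvm-migrate sketch: clone a qube into the (now default) ZFS pool
# under a new name, then delete the original. Does NOT reassign
# template/netvm references; handle those first if needed.
set -e
src="$1" ; dst="$2"
if [ -z "$src" ] || [ -z "$dst" ]; then
  echo "usage: qvm-migrate <source-qube> <destination-qube>" >&2
  exit 64
fi
qvm-shutdown --wait --force "$src"  # the source must be powered off
qvm-clone "$src" "$dst"             # the clone lands in the default (zfs) pool
qvm-remove -f "$src"                # fails if other qubes still use the source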
You can shorten the process with this function, which you can paste into your shell (substitute the name of your storage container below):
function m() {
  # pivot $1 into ZFS via an intermediate -zfs clone, then
  # clean up the intermediate clone's leftover dataset
  qvm-migrate $1 $1-zfs && \
  qvm-migrate $1-zfs $1 && \
  (sudo zfs destroy -r laptop/qubes/$1-zfs || true)
}
Suppose you want to migrate qubes personal, work and email. You'd then run m personal && m work && m email.
Migration can take a while, especially in the case of large qubes. You can see sporadic progress reports during cloning by looking at the Qubes daemon log in a separate terminal window: sudo journalctl -fau qubesd.
Optional: reclaim free disk space
Finally (entirely optional but highly recommended) you should run the following against each recently-migrated qube (you can power off the qube afterwards if you don't need to keep it running):
vm=sys-usb # substitute your VM here
qvm-run -a -p --nogui $vm 'sudo fstrim -v / ; sudo fstrim -v /rw'
This simple command reclaims a lot of disk space from the migrated qube by telling ZFS that deleted data in the volume is long gone and doesn't need to be stored anymore. Those disk space savings are passed on to the ZFS pool. The longer the qube has been in use, the larger the potential disk space saving.
If you want to do this automatically as part of the pivot process, you can use this enhanced m() function that takes care of the process for AppVMs:
function m() {
qvm-migrate $1 $1-zfs && \
qvm-migrate $1-zfs $1 && \
(sudo zfs destroy -r laptop/qubes/$1-zfs || true) && \
qvm-run -a -p --nogui $1 'sudo fstrim -v / ; sudo fstrim -v /rw'
}
StandaloneVMs and TemplateVMs, in turn, benefit from that sudo fstrim -v /.
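If you want the same automation for standalones and templates, a variant like this (hypothetical name mt) trims only the root volume, which is the writable one in those qube classes:
function mt() {
  # same pivot as m(), but trim only /; in standalones and
  # templates the root volume is the writable one
  qvm-migrate $1 $1-zfs && \
  qvm-migrate $1-zfs $1 && \
  (sudo zfs destroy -r laptop/qubes/$1-zfs || true) && \
  qvm-run -a -p --nogui $1 'sudo fstrim -v /'
}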
So what does the script do?
Below is the process the script performs, explained step by step, as if done manually.
Clone the qube to a new intermediate clone
Clone the qube to a new name. This clone will always go into the ZFS storage, since we set the ZFS pool as the default earlier.
# qvm-ls shows a VM named financial, so you run:
qvm-clone financial financial-zfs
Easy! After a few minutes, the clone should be done. Another qvm-ls should confirm the VM is there.
If you want to be 100% certain it all worked, start the recently-cloned VM, run a few applications, then power it off. This step is optional.
Delete the original
After all is cloned, you can delete the original:
qvm-remove financial
In most cases, that's all that's necessary.
Once you are done, you no longer have the original qube — you only have the intermediate clone.
What if my VM is used by others on the system?
Removal will fail if you're removing a template qube used by other qubes, or if the template is referenced in global Qubes preferences (qubes-prefs).
A qube can be in use system-wide as the clockvm, updatevm, default_template, default_dispvm, default_audiovm, default_guivm, default_netvm or management_dispvm. It can also be used as another qube's netvm, management_dispvm, audiovm, guivm or template.
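To see at a glance whether any global preference references the qube you're about to remove, you can filter the output of qubes-prefs (substitute your qube's name):
# list global preferences that mention the qube
qubes-prefs | grep -w financial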
So, to remove a qube that fails removal because it's used by others:
- change all qubes using this qube as a template (e.g. fedora-30) to use its clone (fedora-30-zfs) instead (see the sketch after this list),
- change all qubes using this qube as a netvm (e.g. sys-net) to use its clone (sys-net-zfs),
- inspect the global Qubes preferences and change references to the qube about to be deleted, such that the preferences refer to the cloned qube.
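As promised, here's a minimal sketch of the first item, assuming fedora-30 is the template being replaced; the netvm case is analogous, with netvm in place of template:
# reassign every qube that uses fedora-30 as its template
for qube in $(qvm-ls --raw-list); do
  if [ "$(qvm-prefs "$qube" template 2>/dev/null)" = "fedora-30" ]; then
    qvm-prefs -s "$qube" template fedora-30-zfs
  fi
done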
Clone the cloned qube to its original name, and delete the first clone
Simple as:
qvm-clone financial-zfs financial
qvm-remove financial-zfs
As with the previous step, you'll want to watch out for qubes referenced by other qubes' preferences or by global system preferences. Follow the same process, but in reverse: this time, change the references from the intermediate clone to the final clone.
Your qube is now fully stored in ZFS. A look into the output of qvm-volume shows so:
zfs:laptop/qubes/financial/private financial private Yes
zfs:laptop/qubes/financial/volatile financial volatile No
Success! You've pivoted your qube to ZFS.
Here's the space this qube took on disk before:
# qvm-volume i projects:private
pool zfs
vid laptop/qubes/projects/private
rw True
source
save_on_stop True
snap_on_start False
size 4294967296
usage 2613487599
revisions_to_keep 1
ephemeral False
is_outdated False
Here's what it takes now:
# qvm-volume i projects:private
pool zfs
vid laptop/qubes/projects/private
rw True
source
save_on_stop True
snap_on_start False
size 4294967296
usage 1539613184
revisions_to_keep 1
ephemeral False
is_outdated False
Huge savings! Usage dropped from roughly 2.6 GB to 1.5 GB, reclaiming about 40% of the space the private volume used to occupy.
"I can't shut off sys-usb — I'll lock myself out of the system!"
Some qubes must absolutely keep running no matter what.
No worries, here's what you do, in a single command line, without needing a reboot:
qvm-shutdown --wait --force sys-usb ; \
qvm-clone sys-usb sys-usb-zfs ; \
qvm-remove -f sys-usb ; \
qvm-clone sys-usb-zfs sys-usb ; \
qvm-remove -f sys-usb-zfs ; \
qvm-start sys-usb
Hit ENTER at the end. The VM will finish pivoting itself.
"I can't change the template of this VM because I need to keep it running!"
Some qube removals require you to change the template property of other qubes. Do not worry; there is a way to make the change safely.
qvm-shutdown --wait --force sys-usb ; \
qvm-prefs -s sys-usb template fedora-30-zfs ; \
qvm-start sys-usb
You just rebooted a qube that needed a template change, and it worked. Now you can remove the older template that the qube was using.
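For example, if fedora-30 was the old template:
qvm-remove fedora-30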
Clean up older pools and reclaim storage
After you've pivoted all of your qubes, you can safely remove all old unused pools.
Suppose you have an old pool named lvm; here is what you will run to remove it:
qvm-pool r lvm
A run of the qvm-pool command will tell you which pools you have.
Reclaiming now-released storage
Once the pools have been removed, all the storage they were using can be reclaimed for other uses. This varies from pool type to pool type — perhaps you have to destroy some partitions — so that's up to you. We teach you how to do this for a standard Qubes OS system in a later guide.