long-running VMs to run services (such as OpenLDAP, Postfix, Jenkins),
to do development on, and to be the target of ongoing deploys. These should
be isolated, dependable, and have fast and consistent performance.
short-lived VMs to run single-node or multi-node tests. These should
be quick to bring up in a consistent known state, and scriptable.
I use KVM, because it is open source, part of the kernel, widely supported,
and reliable. I've run it on a colo for years, and various ISPs use it for
production cloud platforms (ByteMark, among others).
I manage it with virt-manager
and script it with libvirt.
Not everything is rosy:
Documentation is scattered.
Virt-manager's UI is basic.
Libvirt's snapshot support is incomplete:
"snapshots of inactive domains not implemented yet"
"revert to external disk snapshot not supported yet"
The virt-clone tool cannot write to a fresh LVM volume ("Clone onto existing storage volume is not supported").
On Ubuntu, AppArmor gets in the way when you manage multiple snapshot image files (see this Launchpad bug).
Libvirt's use of XML for domain configuration is a bit annoying to script.
Qcow2's internal snapshots take longer than I would like: about 10 seconds, versus about 1 second for external snapshots.
And whenever you do any virtualisation, networking details are always fiddly.
So here is the workflow that I settled on:
Create a base image with a standard OS install
For the long-running VMs, I use a separate LVM volume to serve as each guest's raw
virtual disk: I clone the base image onto it and configure the guest. That takes under a minute.
For the short-lived VMs, I convert the disk of a long-running VM to qcow2 to serve
as a backing image, create a qcow2 image for each domain that uses this backing
store, and configure that guest. I then create another backing store based on the
configured image, and that is what the domain actually uses. Rolling back is then
simply a matter of re-creating that last qcow2 image. That takes a few seconds.
I'll illustrate these in more detail.
Base Image Creation
I like to install the base from a GUI, using the standard installer, and
complete it manually, so that it matches what users will see.
I run vnc4server, and connect from my workstation. I install virt-manager,
define a storage pool (my /dev/vg_vms LVM volume group), then create an 8G
image based on ubuntu-12.10-server-amd64.iso, and name that domain ubuntu-base-vm.
For disk partitioning I use "Guided -- use entire disk" rather than my usual
"Guided -- use entire disk and set up LVM". There are three reasons for this:
I don't really need LVM on these fairly small virtual disks
the installer names the volume group after the host, which will look strange on the clones
the volume group ends up in an extended partition, where virt-resize doesn't resize logical volumes
In the Software selection I pick "OpenSSH server".
Finally I log in to the console, and add my ssh public key to
.ssh/authorized_keys so that I can log in to clones remotely later, and run
aptitude update; aptitude upgrade for good measure.
At some point I might switch to virt-install for this step.
Networking Preparation
First, I make a list of VM names, IP addresses and MAC addresses.
I'll later use this list to configure the network interface in the KVM domain
definition, and the networking configuration files in the guest. For convenience
I match VM names with IP addresses, so that vm111 is on 192.168.0.111. I wrote a
little script that generates the addresses and MACs (per this tip). For long-term
VMs you can just edit the resulting file and change the name.
which I can use like this:
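    $ python gen-vms.py > vms.txt       # file name is illustrative
    $ grep vm111 vms.txt
    vm111 192.168.0.111 52:54:00:a8:00:6f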
The instructions in this document do not rely on DNS configuration,
but it is nice to give your VMs DNS names. On my LAN I use OpenWRT,
and I can generate the configuration for its /etc/config/dhcp:
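Assuming the vms.txt format from the sketch above, an awk one-liner can produce the dnsmasq host entries:

    awk '{ printf "config host\n\toption name %s\n\toption mac %s\n\toption ip %s\n\n", $1, $3, $2 }' vms.txt

which prints stanzas like:

    config host
        option name vm111
        option mac 52:54:00:a8:00:6f
        option ip 192.168.0.111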
and copy/paste into my router.
Long-term VMs: Clone to raw LVM guests
Here are the steps for thick-provisioning a VM.
I'll use the bash variable $VM for the VM name.
First create the volume, of the same size as the original. For example, for a
VM named vm111, I create a volume named vms-vm111 in the vg_vms volume group:
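    lvcreate -L 8G -n vms-vm111 vg_vms    # 8G matches the base image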
Alternatively you can specify a larger size, e.g.:
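    lvcreate -L 20G -n vms-vm111 vg_vms   # 20G is an arbitrary example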
Next, I copy the base image:
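(a sketch, assuming the base install lives on /dev/vg_vms/ubuntu-base-vm; --expand only matters when the new volume is larger than the original)

    virt-resize --expand /dev/sda1 /dev/vg_vms/ubuntu-base-vm /dev/vg_vms/vms-vm111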
The 'sda1' parameter indicates the partition inside the guest that should be expanded;
in this case the root partition.
Now we need to create a KVM domain to use that disk. I can copy the XML
definition of the base image, and update the device path, the MAC address, and
generate a new UUID. To make that easier, I use this python script:
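A sketch of such a script (it assumes the first <disk> element in the XML is the system disk):

    #!/usr/bin/env python
    # clone-domain-xml.py -- read a libvirt domain XML on stdin, and write
    # a variant with a new name, a fresh UUID, a new disk path, and a new
    # MAC address to stdout.
    import sys
    import uuid
    import xml.etree.ElementTree as ET

    name, disk, mac = sys.argv[1:4]
    root = ET.parse(sys.stdin).getroot()
    root.find('name').text = name
    root.find('uuid').text = str(uuid.uuid4())
    source = root.find('.//devices/disk/source')
    # raw LVM disks use source/@dev, qcow2 image files use source/@file
    source.set('dev' if 'dev' in source.attrib else 'file', disk)
    root.find('.//devices/interface/mac').set('address', mac)
    sys.stdout.write(ET.tostring(root).decode())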
so that I can do:
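(the MAC address comes from the vms.txt list generated earlier)

    virsh dumpxml ubuntu-base-vm \
        | python clone-domain-xml.py vm111 /dev/vg_vms/vms-vm111 52:54:00:a8:00:6f \
        > /tmp/vm111.xml
    virsh define /tmp/vm111.xml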
Finally, we need to configure the guest's networking details. The virt-sysprep tool
can help with that, but doesn't regenerate openssh keys, or /etc/hosts. So I wrap it
with some scripting, using some templates:
For the networking:
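Something like this, as /etc/network/interfaces (NAME and IP are placeholders; the gateway and DNS addresses are assumptions for my 192.168.0.0/24 LAN):

    # templates/interfaces
    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet static
        address IP
        netmask 255.255.255.0
        gateway 192.168.0.1
        dns-nameservers 192.168.0.1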
The hosts file:
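Again a sketch, with NAME as the placeholder:

    # templates/hosts
    127.0.0.1   localhost
    127.0.1.1   NAME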
And a script to run inside the guest, to regenerate the ssh host keys:
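A sketch, written as an /etc/rc.local replacement (the run-once trick is my own convention):

    #!/bin/sh
    # templates/rc.local -- regenerate the ssh host keys that virt-sysprep
    # removed, then restore a stock rc.local so this only runs once
    rm -f /etc/ssh/ssh_host_*
    dpkg-reconfigure openssh-server
    printf '#!/bin/sh\nexit 0\n' > /etc/rc.local
    exit 0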
Now we can use those templates to generate host-specific versions,
and prepare the image:
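(a sketch; the sed placeholders and file paths follow the templates above, and templates/rc.local must be executable)

    VM=vm111
    IP=192.168.0.111
    sed -e "s/NAME/$VM/g" -e "s/IP/$IP/g" templates/interfaces > /tmp/interfaces
    sed -e "s/NAME/$VM/g" templates/hosts > /tmp/hosts
    virt-sysprep -a /dev/vg_vms/vms-$VM       # reset machine-specific state
    virt-copy-in -a /dev/vg_vms/vms-$VM /tmp/interfaces /etc/network/
    virt-copy-in -a /dev/vg_vms/vms-$VM /tmp/hosts templates/rc.local /etc/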
Now the guest is ready and can be started:
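    virsh start vm111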
Now that you've gone through this once, and the templates are in place, you can use
this script to make this a one-liner:
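A sketch of such a wrapper, stringing together the commands shown above (names and paths are as in the examples):

    #!/bin/bash
    # clone-vm.sh -- thick-provision a VM from the base image
    # usage: clone-vm.sh <name> <ip> <mac>
    set -e
    VM=$1; IP=$2; MAC=$3

    lvcreate -L 8G -n vms-$VM vg_vms
    virt-resize --expand /dev/sda1 /dev/vg_vms/ubuntu-base-vm /dev/vg_vms/vms-$VM

    virsh dumpxml ubuntu-base-vm \
        | python clone-domain-xml.py $VM /dev/vg_vms/vms-$VM $MAC > /tmp/$VM.xml
    virsh define /tmp/$VM.xml

    sed -e "s/NAME/$VM/g" -e "s/IP/$IP/g" templates/interfaces > /tmp/interfaces
    sed -e "s/NAME/$VM/g" templates/hosts > /tmp/hosts
    virt-sysprep -a /dev/vg_vms/vms-$VM
    virt-copy-in -a /dev/vg_vms/vms-$VM /tmp/interfaces /etc/network/
    virt-copy-in -a /dev/vg_vms/vms-$VM /tmp/hosts templates/rc.local /etc/

    virsh start $VM

It can then be run as: clone-vm.sh vm112 192.168.0.112 52:54:00:a8:00:70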
Which is nice.
Short-term VMs: Thin-provisioning with Qcow2
Ubuntu's libvirt installation has an AppArmor configuration which limits what files
guests can write to. Here we'll be using different files than it expects, and you'll
get permission errors. The easiest way around this is to use this workaround and reboot:
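One well-known option (an assumption on my part, and it trades away guest confinement) is to disable the security driver in /etc/libvirt/qemu.conf:

    # /etc/libvirt/qemu.conf
    security_driver = "none"

and then reboot (restarting libvirt-bin also works).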
For the short-lived VMs, we start by creating a clone of the base, in qcow2 format:
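(paths are illustrative; /dev/vg_vms/ubuntu-base-vm is the base install from earlier)

    qemu-img convert -O qcow2 /dev/vg_vms/ubuntu-base-vm \
        /var/lib/libvirt/images/base.qcow2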
This can be used as a backing file by multiple VMs, and must not be modified.
From that base image, create a thin clone image for this specific VM:
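(vm115 is an arbitrary example name for a short-lived VM)

    qemu-img create -f qcow2 -b /var/lib/libvirt/images/base.qcow2 \
        /var/lib/libvirt/images/vm115.qcow2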
then define a KVM domain for it, and configure it:
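For example, reusing the clone-domain-xml.py sketch from above (qcow2-template-vm stands for any existing qcow2-backed domain to use as a template):

    virsh dumpxml qcow2-template-vm \
        | python clone-domain-xml.py vm115 /var/lib/libvirt/images/vm115.qcow2 \
          52:54:00:a8:00:73 > /tmp/vm115.xml
    virsh define /tmp/vm115.xml

followed by the same template and virt-sysprep steps as for the LVM clones.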
At this point you can start the VM and check that it works.
Next we make a snapshot to serve as a clean starting point for future runs:
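One way to do that, following the backing-chain approach described in the introduction (file names are illustrative):

    cd /var/lib/libvirt/images
    # demote the configured image to a read-only backing file, and give
    # the domain a fresh overlay under the original file name
    mv vm115.qcow2 vm115-clean.qcow2
    qemu-img create -f qcow2 -b vm115-clean.qcow2 vm115.qcow2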
Now we can use it. When we're done and want to reset the VM, we can do:
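    virsh destroy vm115       # stop the domain if it is still running
    cd /var/lib/libvirt/images
    qemu-img create -f qcow2 -b vm115-clean.qcow2 vm115.qcow2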
which takes less than 2 seconds.
We've seen that you can clone thick-provisioned VMs in under a
minute, and roll back thin-provisioned VMs in seconds. Which is nice.
I'm really looking forward to future versions of libvirt/virt-manager adding
support for these things through their API/UI.
For now, I'll see how well this setup works in practice, and perhaps experiment
with some alternatives.