This post originally appeared in our sysadvent series and has been moved here following the discontinuation of the sysadvent microsite.
When treating servers as cattle rather than pets, it makes sense to follow a build-once, run-many principle, and this is what we prefer for the machines powering our infrastructure. Our current build method for deployments uses the tool-chain from the virt-manager project to achieve this.
Build targets
The combination of virt-install(1) and virt-builder(1) provides a layered approach for generating disk images. Those images can then be used as a base for constructing images for the different environments that we support:
- Ramdisk boot
- Vagrant
- OpenStack
- Docker
Step 1: Installation
The basis for each installation is provided by virt-install(1):
virt-install \
--name ubuntu_xenial \
--ram 1024 \
--disk path=ubuntu_xenial.img,size=4 \
--location http://archive.ubuntu.com/ubuntu/dists/xenial/main/installer-amd64/ \
--initrd-inject=preseed.cfg \
--extra-args="DEBIAN_FRONTEND=text auto url=file:///preseed.cfg hostname=base-image.i.bitbit.net locale=en_US console-setup/ask_detect=false keyboard-configuration/layoutcode=no console=ttyS0,115200" \
--noreboot
This builds a libvirt-based virtual machine, using the installer from the distribution we choose to install. It injects a kickstart (Red Hat) or preseed (Debian) configuration file to achieve a hands-off installation; a sketch of such a preseed file follows the list below.
The steps of an installation are typically:
- Boot from the installer
- Configure package management
- Partition the block-device
- Install packages
- Install boot-loader
- Post installation
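To give an idea of what drives these steps, here is a minimal sketch of the kind of preseed file that gets injected. The values below are illustrative examples, not our production configuration:
$ cat > preseed.cfg << 'EOF'
# Illustrative excerpt only, not a complete production preseed
d-i debian-installer/locale string en_US
d-i keyboard-configuration/layoutcode string no
# Partition the block-device without prompting
d-i partman-auto/method string regular
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
# Install packages and the boot-loader
d-i pkgsel/include string openssh-server
d-i grub-installer/only_debian boolean true
d-i finish-install/reboot_in_progress note
EOF
A kickstart file plays the same role for Red Hat-based installations.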
After the installation is done, the resulting image is cleaned up with virt-sysprep(1), and an optimized, compressed image is made available on our internal virt-builder repository:
$ # Remove host-specific state: logs, SSH host keys, machine-id, MAC addresses, etc.
$ virt-sysprep --domain ubuntu_xenial --enable abrt-data,bash-history,blkid-tab,crash-data,cron-spool,dhcp-client-state,hostname,logfiles,machine-id,mail-spool,net-hostname,net-hwaddr,pacct-log,package-manager-cache,puppet-data-log,random-seed,rpm-db,ssh-hostkeys,tmp-files,udev-persistent-net,utmp,yum-uuid
$ # Zero and trim free space so the image compresses well
$ guestfish --domain ubuntu_xenial --inspector << EOF
zero-free-space /
fstrim /
EOF
$ # Compress with a fixed block size, as recommended for virt-builder templates
$ xz --best --block-size=16777216 ubuntu_xenial.img
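Publishing to a virt-builder repository amounts to placing the compressed image next to a signed index file that describes it. A minimal sketch of such an index entry, with illustrative values, could look like this:
[ubuntu_xenial]
name=Ubuntu 16.04 (xenial) base image
arch=x86_64
file=ubuntu_xenial.img.xz
format=raw
size=4294967296
expand=/dev/sda1
The expand field tells virt-builder which partition to grow when a larger output size is requested; see virt-builder(1) for the full index format and how to sign it.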
Step 2: Configuration Management
We then use virt-builder(1) to run the initial part of our configuration management tool of choice (i.e. Puppet) against a clean installation, as provided by the image produced in step 1.
virt-builder \
--source http://virt-builder.i.bitbit.net/index.asc \
--firstboot firstboot \
--output xenial_builder.img \
xenial_builder
The configuration is started from a custom firstboot script, which runs Puppet in agent mode and, after some cleanup, shuts down the virtual machine:
...
# One-shot Puppet run against our Puppet server
puppet agent --onetime --no-daemonize --no-splay --verbose \
  --configtimeout 600 --color false \
  --certname ${PUPPET_NODE} \
  --server ${PUPPET_SERVER} \
  --environment ${PUPPET_ENVIRONMENT}
...
# Flush pending writes to disk before powering off
sync ; sync
poweroff
This results in a virt-builder base-image for reuse within our infrastructure, which is made available through our internal virt-builder repository.
The base-image is then used for another run of virt-builder(1) with the configuration management tool. This configures the image for the required destination environment, e.g. an OpenStack compute node, an OpenStack network node, or a Ceph storage node. The result is also published on our internal virt-builder repository.
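Such a role-specific run looks much like the first one, only starting from the base-image published above. A sketch, where the firstboot-compute script and the output image name are hypothetical:
virt-builder \
--source http://virt-builder.i.bitbit.net/index.asc \
--firstboot firstboot-compute \
--output xenial_compute.img \
xenial_builder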
Step 3: Deployment
We now have a virt-builder image which just needs some minor final adjustments for deployment on the actual target environment:
- Ramdisk boot (see the sketch after this list)
  - extract kernel/modules
  - extract root-file-system
- Vagrant (on VirtualBox)
  - add virtualbox-tools
  - add vagrant account
- OpenStack
  - add cloud-init
- Docker
  - extract root-file-system
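As an illustration of these adjustments, the kernel, initrd and root file system can be pulled out of the image with guestfish(1), and cloud-init can be added with virt-customize(1). The image and file names below are hypothetical:
# Extract kernel, initrd and the root file system (ramdisk boot / Docker)
guestfish --ro -a xenial_compute.img -i << EOF
glob copy-out /boot/vmlinuz-* .
glob copy-out /boot/initrd.img-* .
tar-out / rootfs.tar
EOF
# The tarball can then be fed to "docker import rootfs.tar" for the Docker target

# For OpenStack, add cloud-init to the image
virt-customize -a xenial_openstack.img --install cloud-init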
Bonus
Following this procedure, the end result in any of the mentioned environments is effectively identical to an installation onto a local disk followed by a Puppet run. This provides an extra level of flexibility, both to us as system operators and to our partners/customers who provide their services based on our infrastructure.