
On the road to lean infrastructure

On April 24, 2008, Ubuntu 8.04 LTS Hardy Heron was released. That was a decade ago, when the modern cloud computing era was dawning: Amazon’s EC2 was still in beta, Google had just released Google App Engine and the word “container” was still dominating the plastics industry rather than IT. A lot has changed since then, but it’s not uncommon to come across organizations with machines still running Hardy or other equally dated distributions.

The Gordian Knot of traditional, pre-DevOps IT infrastructure consists of meticulously crafted, opportunistically documented and precariously automated “snowflake” environments. Managing such systems slows the pace of change, and yet in many cases rip and replace is not a justifiable investment. Invariably though, unabated progress dictates reconciling today’s best practices with the legacy artifacts of the past. Lift and shift can be an efficient, reliable and automated approach to this conundrum.

LXD enables a straightforward path to consolidate services running on old VMs and physical servers, increase infrastructure density, improve resource utilization and performance, and add a layer of security, while keeping familiar operational primitives. Let’s see how easy it is to migrate an Ubuntu 8.04 VM into an LXD container running on a modern, Spectre/Meltdown-patched kernel.

Depending on the virtualization platform in use, one may start with a VDI (VirtualBox), VMDK (VMware) or QCOW2 (QEMU/KVM) image. Our VM is currently stopped, and its disk resides in an image named hardy.vmdk that contains a single partition. We can extract the contents of the VM filesystem and create the rootfs for the LXD container:

$ # libguestfs-tools provides virt-copy-out; qemu-utils handles the disk image formats
$ sudo apt-get install qemu-utils libguestfs-tools
$ mkdir -p hardy/rootfs
$ # copy the entire filesystem tree out of the VM image, without booting it
$ sudo virt-copy-out -a hardy.vmdk / hardy/rootfs
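We assume above that hardy.vmdk holds a single partition. If you are working with an image whose layout you haven’t verified, virt-filesystems (also part of libguestfs-tools) can list what the image actually contains before you extract anything:

$ # enumerate the filesystems and partitions detected inside the image
$ sudo virt-filesystems --long -a hardy.vmdk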

There are two formats for LXD container images: a unified tarball, which contains both the container rootfs and the needed metadata, and a split format of two tarballs, one containing the rootfs and the other the metadata. We will be producing a unified tarball image here, and all of its contents will reside under the hardy directory. The rootfs subdirectory contains the entire file system tree of what will become the container’s /.
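For reference, if you ever produce a split image instead, lxc image import accepts both tarballs in a single call; with hypothetical file names, it would look like this:

$ # split format: metadata tarball first, then the rootfs tarball
$ lxc image import metadata.tar.gz rootfs.tar.gz --alias hardy-split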

We can modify the contents of rootfs to reflect that this is a container:

$ # comment out every active mount entry; LXD manages the container's mounts
$ sudo sed -ri 's/^([^#].*)/#\1/g' hardy/rootfs/etc/fstab
$ # clear stale state files inherited from the VM
$ sudo truncate -s 0 hardy/rootfs/etc/{mtab,blkid.tab}
$ # drop the klogd rc symlinks; a container cannot read the kernel log device
$ sudo find hardy/rootfs/etc -name 'S??klogd' -type l -exec rm -f {} \;

The container metadata is captured in metadata.yaml and provides information relevant to running the image under LXD. Let’s create it:

$ cd hardy/
$ cat > metadata.yaml << EOF
architecture: "x86_64"
creation_date: 1523424242
properties:
    architecture: "x86_64"
    description: "Ubuntu 8.04 LTS server"
    os: "ubuntu"
    release: "hardy"
templates:
    /etc/hostname:
        when:
            - create
            - copy
        template: hostname.tpl
    /var/lib/cloud/seed/nocloud-net/meta-data:
        when:
            - create
            - copy
        template: cloud-init-meta.tpl
    /var/lib/cloud/seed/nocloud-net/user-data:
        when:
            - create
            - copy
        template: cloud-init-user.tpl
        properties:
            default: |
                #cloud-config
                {}
    /var/lib/cloud/seed/nocloud-net/vendor-data:
        when:
            - create
            - copy
        template: cloud-init-vendor.tpl
        properties:
            default: |
                #cloud-config
                {}
    /etc/network/interfaces.d/eth0.cfg:
        when:
            - create
        template: interfaces.tpl
    /etc/init/console.override:
        when:
            - create
        template: upstart-override.tpl
    /etc/init/tty1.override:
        when:
            - create
        template: upstart-override.tpl
    /etc/init/tty2.override:
        when:
            - create
        template: upstart-override.tpl
    /etc/init/tty3.override:
        when:
            - create
        template: upstart-override.tpl
    /etc/init/tty4.override:
        when:
            - create
        template: upstart-override.tpl
EOF

We also need to create a templates subdirectory, which will contain the pongo2-formatted templates used to customize the container at instantiation time.

$ mkdir templates && cd templates
$ cat << EOF > cloud-init-meta.tpl
#cloud-config
instance-id: {{ container.name }}
local-hostname: {{ container.name }}
{{ config_get("user.meta-data", "") }}
EOF
$ cat << EOF > cloud-init-user.tpl
{{ config_get("user.user-data", properties.default) }}
EOF
$ cat << EOF > cloud-init-vendor.tpl
{{ config_get("user.vendor-data", properties.default) }}
EOF
$ cat << EOF > hostname.tpl
{{ container.name }}
EOF
$ cat << EOF > interfaces.tpl
iface eth0 inet {% if config_get("user.network_mode", "") == "link-local" %}manual{% else %}dhcp{% endif %}
EOF
$ cat << EOF > upstart-override.tpl
manual
EOF
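These templates make the image tunable through LXD config keys at launch. For instance, interfaces.tpl above renders eth0 as manual instead of dhcp whenever user.network_mode is set to link-local, so once the image is imported (below) one could hypothetically launch a link-local container like this:

$ # user.* config keys are read by the templates via config_get()
$ lxc launch local:hardy h-ll -c user.network_mode=link-local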

Now our hardy directory should look as follows:

$ cd ../..
$ tree -L 1 hardy
hardy
├── metadata.yaml
├── rootfs
└── templates

2 directories, 1 file

Let’s assemble the container image tarball and import it into our local LXD image store:

$ cd hardy
$ # --numeric-owner preserves raw UIDs/GIDs; -S stores sparse files efficiently
$ sudo tar --numeric-owner -zcvSf ../hardy.tar.gz *
$ lxc image import ../hardy.tar.gz --alias hardy
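As a quick sanity check, the imported image and the metadata we embedded can be inspected directly from the image store:

$ # print the stored image's metadata and properties
$ lxc image show hardy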

Finally, we will configure a custom profile, and use it to spin up our new
Ubuntu 8.04 LXD container:

$ lxc profile create hardy
$ # set a terminal type that Hardy-era userspace understands
$ lxc profile set hardy environment.TERM linux
$ lxc launch local:hardy h1 -p default -p hardy

The first boot of the container will take a bit longer than usual, as LXD detects and automatically shifts the UID/GID of the rootfs to align with the user namespace.
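Once it’s up, a quick look inside confirms the migration; a couple of illustrative commands:

$ lxc list h1
$ # verify that the container is really running the Hardy userspace
$ lxc exec h1 -- cat /etc/lsb-release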
That was it! Our VM, with all its services and data, is now running on LXD. The entire VM-to-container (v2c) conversion process is simple and can be easily automated. In fact, the LXD team has just released their latest LTS version, LXD 3.0, which includes a new tool called lxd-p2c that enables importing a (running) system’s filesystem, physical or virtual, into an LXD container using the LXD API. You can quickly build the tool as follows:

$ sudo apt-get install golang-go
$ mkdir ~/gocode && cd ~/gocode
$ export GOPATH=~/gocode
$ # fetch the sources and build the binary into $GOPATH/bin
$ go get -v -x github.com/lxc/lxd/lxd-p2c

The resulting binary can be found in ~/gocode/bin/lxd-p2c and can be transferred to any system that you want to turn into a container. Point it at a remote LXD server and it will create a new container there, using the entire filesystem as the rootfs.
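Invocation looks roughly like this, run on the machine being converted; the server URL and container name below are placeholders:

$ # import this machine's / into a new container on the remote LXD server
$ sudo ./lxd-p2c https://lxd-host.example.com:8443 hardy-migrated /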

LXD containers enable you to adapt intelligently to change. Converting virtual (or physical) machines to LXD containers infuses mobility into your infrastructure, improves resource utilization, adds an additional layer of security and enhances performance. Give it a try and join the community.
