MAAS Setup: Deploying OpenStack on MAAS 1.9+ with Juju



on 23 January 2016

This is part 3 of my new “Deploying OpenStack on MAAS 1.9+ with Juju” series. It follows up my last post, Hardware Setup: Deploying OpenStack on MAAS 1.9+ with Juju. I planned to write this post almost 2 months ago, and I know some readers were eagerly expecting it, so I apologize for the delay. We were (and still are) pretty busy working on the networking features of Juju. Some of those features are presented in this series of posts (as a sneak preview of sorts, if you like), and will be available for general use with the upcoming Juju 2.0 release in April 2016 (along with Ubuntu 16.04 – Xenial Xerus). In the past articles I explained the high-level OpenStack deployment plan and the hardware setup of the machines and switches. Now it’s time to install MAAS and configure the cluster controller and its managed subnets, spaces, and fabrics. Once that’s done, we can enlist and commission all 4 of the NUCs as nodes and deploy them via MAAS.

Setting up MAAS 1.9+ for Deploying OpenStack with Juju

There are 2 main ways to install MAAS on Ubuntu (14.04 or later): using the “Multiple Server install with MAAS” boot option from the Ubuntu Server installer media, or by installing a few packages with apt-get inside an existing Ubuntu installation. Both options are described step-by-step, with screenshots, in the official MAAS documentation. There’s a slight wrinkle, though – the described steps apply to the latest stable version of MAAS (1.8 as I’m writing this), which is too old for our needs and does not support the advanced network modeling of the current development version, 1.9.0 (proposed for release, and soon to replace 1.8 as the latest stable). Fortunately, the installation steps are almost the same as in the nice one-pager “MAAS | Get Started”, so I’ll just list them briefly below.

Installing MAAS

  1. We need to prepare the machine for MAAS by installing the current Ubuntu Server LTS (14.04) on it. It should be a simple matter of downloading the ISO and burning it on a CD or – better and quicker – on a bootable USB stick (using “Startup Disk Creator” in Ubuntu or any other tool).
  2. Once Ubuntu is up and running, log into the console and prepare the machine by adding the ppa:maas/next PPA, installing OpenSSH (so we can manage it remotely) and the vlan package (so we can create virtual VLAN NICs), and updating/upgrading all packages on the system:

    $ sudo add-apt-repository ppa:maas/next
    $ sudo apt-get update
    $ sudo apt-get install openssh-server vlan
    $ sudo apt-get update
    $ sudo apt-get dist-upgrade

    NOTE: If you’re not using the en_US locale (like me), you’ll also need to add the line LC_ALL=C to /etc/default/locale, otherwise some of the packages MAAS depends on (e.g. postgresql) will FAIL to install properly!

  3. I found it’s better to first configure all network interfaces (NICs) in /etc/network/interfaces and then install MAAS, as it will discover and auto-create the subnets linked to each interface of the cluster controller. The machine needs 2 physical NICs – one for the managed nodes and one providing external access for the nodes and MAAS itself. Since the HP laptop I’m using has only 1 Ethernet controller, I plugged in a USB-to-Ethernet adapter to provide access to the nodes’ network. We need those NICs configured like this:
    • The primary physical NIC of the machine (eth0) is the on-board Ethernet controller, configured with a static IP from my home network and using the home WiFi router as the default gateway.
    • The second physical NIC (eth1) is the USB-to-Ethernet adapter, configured with a static IP address from the managed network.
    • 7 virtual VLAN NICs sit on top of eth1, one for each of the VLANs we created earlier (eth1.30, eth1.50, eth1.99, eth1.100, eth1.150, eth1.200, eth1.250) – each of these VLAN NICs has a static IP following the same format (10.<VLAN ID>.0.1/20).
    • I’ve edited the /etc/network/interfaces file as root (use your favorite editor, or even simply pico) on the MAAS machine. The iptables rules we add on eth0 up/down enable NAT, so the nodes can access the Internet.
  4. Reboot the machine (both to pick up any kernel updates that might have happened during the apt-get upgrade / dist-upgrade call earlier, and to make sure all NICs come up in the right order).
  5. Now let’s install the needed MAAS packages:
    $ sudo apt-get install maas maas-dns maas-dhcp maas-proxy

    NOTE: When asked for the Ubuntu MAAS API address, double-check that the detected URL uses eth0’s (external) IP address. You can later change this by running

    $ sudo dpkg-reconfigure maas-cluster-controller

    Also, double check that running

    $ sudo dpkg-reconfigure maas-region-controller

    shows the IP address of eth1 (the managed NIC); if not, set it accordingly!

  6. Create an admin user (I used “root” for username):
    $ sudo maas-region-admin createadmin
  7. You should now be able to access the MAAS web UI. Log in with the admin username and password you’ve just created.
  8. While a lot of the following configuration steps can be done from the web UI, a few important ones can only be done via the MAAS CLI client, so let’s install it now (on the client machine you use to access MAAS – e.g. your laptop). You’ll need the MAAS API key for the admin user – copy it from the UI’s top-right menu > Account page. Alternatively, from inside the MAAS server you can run
    $ sudo maas-region-admin apikey --username root

    to get it (assuming the admin user you created is called “root”).

  9. Once you have the API key run these commands:
    $ sudo apt-get install maas-cli
    $ maas login <profile> '<key>'

    Pick a meaningful name for <profile> (e.g. I use “19-root”, as I run multiple versions of MAAS with multiple users created on them, so I’ll use $ maas login 19-root http://…). Replace ‘<key>’ above with the string you copied earlier from the Account web UI page (a long string containing 3 parts separated by colons, e.g. ‘2WAF3wT9tHNEtTa9kV:A9CWR2ytFHwkN2mxN9:fTnk723tTFcV8xCUpTf85RfQLTeNcX7C’). You should be able to use the CLI after this – to test, try running version read; you should see something like this:

    $ maas 19-root version read
    Machine-readable output follows:
    {
        "subversion": "trusty1",
        "version": "1.9.0+bzr4533-0ubuntu1",
        "capabilities": [
            …
        ]
    }
The MAAS UI may complain that there are no boot images imported yet, but that’s fine – we’ll get to that once we need to add the NUCs as nodes.
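For reference, here is a rough sketch of what the /etc/network/interfaces file from step 3 might end up looking like. All IP addresses in it are hypothetical placeholders (substitute your own addressing), and the snippet deliberately writes to a temporary file rather than the live /etc/network/interfaces, so you can review and adapt it first:

```shell
# Sketch of /etc/network/interfaces for the MAAS server.
# All addresses are hypothetical placeholders -- substitute your own.
# Written to a temporary file for review, NOT to /etc/network/interfaces.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
auto lo
iface lo inet loopback

# eth0: external (home network) NIC; the iptables up/down rules enable NAT
# so the managed nodes can reach the Internet through this machine.
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    up   iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    down iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

# eth1: USB-to-Ethernet adapter for the managed nodes network
auto eth1
iface eth1 inet static
    address 10.14.0.1
    netmask 255.255.240.0

# One virtual VLAN NIC per VLAN; repeat for 30, 50, 99, 100, 150, 200, 250
auto eth1.50
iface eth1.50 inet static
    address 10.50.0.1
    netmask 255.255.240.0
EOF
echo "Sketch written to $cfg"
```

Only eth1.50 is shown; the other VLAN NICs (eth1.30, eth1.99, eth1.100, eth1.150, eth1.200, eth1.250) follow the same pattern with their own VLAN ID in the address.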

Configuring Cluster Controller Interfaces

Now we have MAAS up and running, and it’s time to configure the managed cluster controller interfaces before we continue with the rest (zones, fabrics, spaces, subnets). Either from the web UI (as outlined in the Get Started quick guide) or from the CLI, we need to update all cluster controller interfaces so that eth1 and all VLAN NICs on top of it are managed for DNS and DHCP, and have a default gateway and both DHCP and static ranges set. Here’s a screenshot of how it looks after we’re done:

MAAS Cluster Controller Interfaces after finishing their configuration

To achieve this using the CLI, run the following commands:

  1. Get the UUID of the controller, e.g. 5d5085c8-34fe-4f86-a338-0450a49bf698:
    $ maas 19-root node-groups list | grep uuid
  2. Update the external NIC eth0 to be unmanaged and set the default gateway:
    $ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 eth0 ip= interface=eth0 management=0 subnet_mask= router_ip=
  3. Update the internal NIC eth1 used to control the nodes to be managed and have both DHCP ( and static IP ( ranges set:
    $ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 eth1 ip= interface=eth1 management=2 subnet_mask= ip_range_low= ip_range_high= static_ip_range_low= static_ip_range_high=
  4. Update the eth1.99 VLAN NIC – it needs to be unmanaged, as it will be used by OpenStack Neutron Gateway to provide DHCP for OpenStack guest instances:
    $ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 eth1.99 ip= interface=eth1.99 management=0 subnet_mask= ip_range_low= ip_range_high= static_ip_range_low= static_ip_range_high=
  5. Update all remaining VLAN NICs the same way (DHCP and static IP ranges, default gateway, managed):
    $ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 eth1.30 ip= interface=eth1.30 management=2 subnet_mask= ip_range_low= ip_range_high= static_ip_range_low= static_ip_range_high=
    $ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 eth1.50 ip= interface=eth1.50 management=2 subnet_mask= ip_range_low= ip_range_high= static_ip_range_low= static_ip_range_high=
    $ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 eth1.100 ip= interface=eth1.100 management=2 subnet_mask= ip_range_low= ip_range_high= static_ip_range_low= static_ip_range_high=
    $ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 eth1.150 ip= interface=eth1.150 management=2 subnet_mask= ip_range_low= ip_range_high= static_ip_range_low= static_ip_range_high=
    $ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 eth1.200 ip= interface=eth1.200 management=2 subnet_mask= ip_range_low= ip_range_high= static_ip_range_low= static_ip_range_high=
    $ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 eth1.250 ip= interface=eth1.250 management=2 subnet_mask= ip_range_low= ip_range_high= static_ip_range_low= static_ip_range_high=
  6. Verify the changes by listing all NICs of the cluster controller again:
    $ maas 19-root node-group-interfaces list 5d5085c8-34fe-4f86-a338-0450a49bf698

    The last command should return a listing of all the interfaces we just configured. For the VLAN NICs, only the interface name and the VLAN ID part of the IPs and ranges change.
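Since the six update calls in step 5 differ only in the VLAN ID, they can be generated with a small loop instead of being typed out one by one. This is just a dry-run sketch: it echoes the commands rather than executing them, and the IP ranges shown follow an assumed 10.<VLAN>.x.y scheme – substitute your actual ranges (and your controller’s UUID) before running anything:

```shell
# Dry run: print (rather than execute) the per-VLAN update commands.
# The UUID comes from step 1; the 10.<VLAN>.x.y addressing scheme and the
# exact range boundaries are illustrative assumptions -- adjust as needed.
uuid="5d5085c8-34fe-4f86-a338-0450a49bf698"
for vlan in 30 50 100 150 200 250; do
  echo "maas 19-root node-group-interface update $uuid eth1.$vlan" \
       "ip=10.$vlan.0.1 interface=eth1.$vlan management=2 subnet_mask=255.255.240.0" \
       "ip_range_low=10.$vlan.0.2 ip_range_high=10.$vlan.7.254" \
       "static_ip_range_low=10.$vlan.8.1 static_ip_range_high=10.$vlan.15.254"
done
# Once the output looks right, pipe it through sh to actually run it.
```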

Setting up Fabrics, VLANs, Spaces, and Subnets

The next step is to set up 2 MAAS fabrics: I’ve chosen “maas-external” (containing the external subnet for eth0) and “maas-management” (containing everything else). By default, MAAS creates one fabric per physical NIC it discovers in /etc/network/interfaces during installation. So at this point you should have fabric-0, containing an “untagged” VLAN and the external subnet linked to eth0, and fabric-1, which also contains an “untagged” VLAN plus as many “tagged” VLANs as were discovered from /etc/network/interfaces.


$ maas 19-root fabrics read

should give you output confirming this layout. It’s almost what we need, but let’s change the names of the fabrics to reflect their intended usage:

$ maas 19-root fabric update 0 name=maas-external
$ maas 19-root fabric update 1 name=maas-management

You might have noticed that MAAS created a default space called space-0 and that all subnets are part of it, as you can see on the Subnets page in the UI or by running:

$ maas 19-root subnets read

This space-0 will be used whenever no explicit space is specified for a (new) subnet. We’ll rename it to “default” and also create all the other spaces we need for deploying OpenStack:

  • Rename space-0 to default

    $ maas 19-root space update 0 name=default

  • unused space will contain the external subnet only
    $ maas 19-root spaces create name=unused
  • admin-api space will contain VLAN 150
    $ maas 19-root spaces create name=admin-api
  • internal-api space will contain VLAN 100
    $ maas 19-root spaces create name=internal-api
  • public-api space will contain VLAN 50
    $ maas 19-root spaces create name=public-api
  • compute-data space will contain VLAN 250
    $ maas 19-root spaces create name=compute-data
  • compute-external space will contain VLAN 99
    $ maas 19-root spaces create name=compute-external
  • storage-data space will contain VLAN 200
    $ maas 19-root spaces create name=storage-data
  • storage-cluster space will contain VLAN 30
    $ maas 19-root spaces create name=storage-cluster
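The eight spaces create calls above differ only in the name, so a short loop can generate them all. As a precaution this sketch (which assumes the CLI profile is called “19-root”, as above) only echoes the commands; drop the echo to execute them for real:

```shell
# Generate the 'spaces create' commands for all OpenStack-related spaces.
# Assumes the MAAS CLI profile name "19-root" used throughout this post.
for space in unused admin-api internal-api public-api \
             compute-data compute-external storage-data storage-cluster; do
  echo maas 19-root spaces create name="$space"
done
```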

Now we can update all subnets to set a meaningful name and a default gateway for each, and to associate them with the correct spaces. To do that we need to use the MAAS IDs of the spaces (and likewise for subnets), but fortunately there’s a neat trick we can use here: prefixed references – e.g. instead of “2” (a subnet ID), use “vlan:50” (i.e. the subnet in the VLAN with ID 50; if there is more than one subnet in VLAN 50, this won’t work, as it won’t uniquely identify a single subnet). Another prefixed reference for subnets is, for example, “cidr:” to select the unmanaged external subnet by its CIDR. We still need the space IDs, so we’ll first list them all and then copy their IDs into the subsequent commands to update each subnet. If we created the spaces in the order given above, they will have increasing IDs starting from 2, which makes it slightly easier.

  • List all spaces and get their IDs:
    $ maas 19-root spaces read
  • Move the unmanaged subnet of eth0 to space “unused” and call it “maas-external”:
    $ maas 19-root subnet update cidr: name=maas-external space=1
  • Rename the managed subnet (used for PXE booting the nodes) to “maas-management”:
    $ maas 19-root subnet update cidr: name=maas-management
  • Move all VLAN subnets to their respective spaces, set a name and default gateway for each:
    $ maas 19-root subnet update vlan:150 name=admin-api space=2 gateway_ip=
    $ maas 19-root subnet update vlan:100 name=internal-api space=3 gateway_ip=
    $ maas 19-root subnet update vlan:50 name=public-api space=4 gateway_ip=
    $ maas 19-root subnet update vlan:250 name=compute-data space=5 gateway_ip=
    $ maas 19-root subnet update vlan:99 name=compute-external space=6 gateway_ip=
    $ maas 19-root subnet update vlan:200 name=storage-data space=7 gateway_ip=
    $ maas 19-root subnet update vlan:30 name=storage-cluster space=8 gateway_ip=
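The per-VLAN subnet updates above can likewise be driven from a single VLAN-to-space mapping. This sketch only echoes the commands for review; the gateway addresses follow an assumed 10.<VLAN>.0.1 pattern, and the space is referenced by name for readability – swap in the numeric IDs from spaces read, as the commands above do, if your MAAS version requires IDs:

```shell
# Dry run of the per-VLAN subnet updates, driven by a VLAN:space mapping.
# Gateways assume a hypothetical 10.<VLAN>.0.1 scheme; spaces are given by
# name here -- substitute numeric IDs from 'spaces read' where needed.
for pair in 150:admin-api 100:internal-api 50:public-api 250:compute-data \
            99:compute-external 200:storage-data 30:storage-cluster; do
  vlan=${pair%%:*}
  space=${pair##*:}
  echo "maas 19-root subnet update vlan:$vlan name=$space space=$space" \
       "gateway_ip=10.$vlan.0.1"
done
```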

After those commands your Subnets page in the MAAS UI should look like the following screenshot:

MAAS Subnets showing fabrics and spaces

Importing Boot Images and Next Steps

We’re almost ready to use our new MAAS. Three more steps remain:

  • Importing boot images to use for deployments of nodes
  • Enlisting all 4 NUCs as nodes.
  • Accepting and commissioning all nodes.

The first step can be done from the UI or the CLI. We’ll need only amd64 Ubuntu Trusty images for now. Go to the web UI “Images” page, check “14.04 LTS” for the Ubuntu release and “amd64” for the architecture, then click “Import images”. Sit back and wait – with a reasonably fast Internet connection it should take only a few minutes (less than an 800 MB download for the 14.04 amd64 image).
Alternatively, with the CLI you can run:

$ maas 19-root boot-resources import

There’s no need to change the boot image selection, as 14.04/amd64 is selected by default. You can watch the UI auto-update during the 2 phases – region and cluster import. When done, the UI should look like this:

MAAS Boot Images Imported

Next Steps: Nodes Networking

I’ll stop here as the post again got too long, so if you’re still following – thanks! – and stay tuned for the next post in which I’ll describe adding the nodes to MAAS, including the Intel AMT power parameters needed by MAAS to power the nodes on and off, as well as how the node network interfaces should be configured to deploy OpenStack on them.
