Announcing The Canonical Distribution of Kubernetes 1.5.1

Jorge O. Castro

on 16 December 2016

We’re proud to announce support for 1.5.1 in the Canonical Distribution of Kubernetes. This is a pure upstream distribution of Kubernetes, designed to be easily deployable to public clouds, on-premise, bare metal, and developer laptops. Kubernetes 1.5.1 includes a ton of new features and bugfixes, and we encourage you to check out the release notes.

Getting Started:

Here’s the simplest way to get a Kubernetes 1.5.1 cluster up and running on an Ubuntu 16.04 system:

sudo apt-add-repository ppa:juju/stable
sudo apt-add-repository ppa:conjure-up/next
sudo apt update
sudo apt install conjure-up
conjure-up kubernetes

During the installation conjure-up will ask you what cloud you want to deploy on and prompt you for the proper credentials. If you’re deploying to local containers (LXD) see these instructions for localhost-specific considerations.

For production-grade deployments and cluster lifecycle management, we recommend reading the full Canonical Distribution of Kubernetes documentation.
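Once conjure-up finishes, a quick sanity check confirms the cluster is reachable. A sketch, assuming conjure-up has copied the cluster credentials to ~/.kube/config as part of the deployment (unit states and names will vary with your cloud):

```shell
# Confirm all units have settled into an active/idle state
juju status

# kubectl should reach the API server with the credentials
# conjure-up copied down during deployment
kubectl cluster-info
kubectl get nodes
```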

New features

  • Full support for Kubernetes v1.5.1.
  • The charms now support the Container Network Interface (CNI).
    • The changes to the flannel integration are incompatible with the old method; as a result, you must redeploy your cluster to get the latest release.
    • Flannel is now CNI only. The SDN plugin interface has been removed.
    • CNI support will allow us to support other CNI based Software Defined Network (SDN) applications, such as Calico and Weave.
  • In order to provide a more pure upstream Kubernetes experience, and support the wide range of amazing integration work being done by the community, the Elastic Stack is no longer included by default. Deploying and integrating with the Elastic Stack is still supported and is provided as a post deployment option.
  • Added debug actions to the kubernetes-master and kubernetes-worker charms. See the debugging section below.
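Since the Elastic Stack is now a post-deployment option, re-adding it is a matter of deploying the charms and relating them to the cluster. A hedged sketch; the charm names and relation endpoints below are assumptions about the current charm store contents, so check there for the exact ones:

```shell
# Deploy the Elastic Stack charms from the charm store
juju deploy elasticsearch
juju deploy kibana
juju deploy filebeat

# Ship logs from the worker nodes into elasticsearch
juju add-relation filebeat:beats-host kubernetes-worker
juju add-relation filebeat elasticsearch
juju add-relation kibana elasticsearch
```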


General Fixes


  • #94 SDN Plugin relationship not properly cleaning states on remove-relation
  • #121 Use the ginkgo resource in the e2e layer
  • #122 DNS may not be available during kubedns-relation causing failures
  • #124 DNS may not be available during kubedns-relation causing failures
  • #126 etcd backup
  • #127 persistent EBS volume for etcd
  • #130 Addons are not upgraded during charm upgrades
  • #136 Test improvements, and bumping versions of master and worker.
  • Juju 2.0 instruction update (fixes #128)
  • #147 Need to integrate better in the testgrid
  • #149 Include container logs in the filebeat collection so output that Docker images write to stdout is collected too
  • #150 Update for 1.5.0 - needs new flag 'anonymous-auth'
  • #155 remove references to the elastic stack from the docs
  • #156 Update the local.yaml with the CNI release
  • #157 update local.yaml
  • #158 Bump revision of kubernetes-master

Etcd layer-specific changes

  • #61 Adds the restore action
  • #38 Timeout Error during automated testing
  • #46 The openssl configuration code is broken
  • #50 layer-etcd needs to integrate with a CA instead of performing the CA operations itself
  • #58 Add snapshot/restore actions
  • #59 Adds the snapshot action
  • #60 Strip out the (leader) status message appender
  • #45 The etcd charm pulls easy-rsa from
  • #62 Fixed lint error
  • #63 Adds initial persistent external storage support
  • #64 Fixing the source with the new flake8 rules.
  • #65 Layer tls-client rework
  • #67 Fixes for #66
  • #68 Update the readme with current action names
  • #69 Rename non-existent action to the proper package-client-credentials action

Docker layer-specific changes:

  • #54 Updating the charm state from events inside containers
  • #96 Add support for dockerhost
  • #97 Add debug script to layer-docker

Unfiled/un-scheduled fixes:

  • Adds addon tactic to generate addon manifests from template in the repository at build time
  • Fix for addons racing to deploy before kube-dns is ready
  • Adds warning message when service-cidr is changed (immutable option)
  • Improved code docstrings


Debugging

To run the debug action on the master:

juju run-action kubernetes-master/0 debug

And on a worker:

juju run-action kubernetes-worker/0 debug

Either command returns an action id:

Action queued with id: eb1c95dc-fe05-4e2e-8824-fb5d985f475e

You can then query the action output which includes the debug output and instructions on how to download the data:

juju show-action-output eb1c95dc-fe05-4e2e-8824-fb5d985f475e

results:
  command: juju scp kubernetes-master/0:/home/ubuntu/debug-20161216153022.tar.gz .
  path: /home/ubuntu/debug-20161216153022.tar.gz
status: completed
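The command field in the action output is the exact invocation needed to pull the archive down; once fetched, it unpacks locally (the timestamped filename will differ for your run):

```shell
# Copy the debug archive off the unit, then unpack it to browse the reports
juju scp kubernetes-master/0:/home/ubuntu/debug-20161216153022.tar.gz .
tar -xzf debug-20161216153022.tar.gz
```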

The debug action executes several debug scripts on the unit and produces a tree containing the following information:

  • charm unit data
  • docker images, version, ps, info
  • filesystem information
  • inotify
  • juju logs
  • all kubectl and cluster information
  • kubernetes-master services
  • kubernetes-worker services
  • network information
  • package information
  • systemd logs and dumps

This feature is also encapsulated via the framework in layer-debug. This means you can easily, consistently, and efficiently add your own debugging routines. Consider submitting useful ones to us!
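As a sketch of what adding your own routine might look like: drop an executable script where layer-debug can find it and it gets run alongside the built-in ones. The debug-scripts/ directory name below is an assumption about layer-debug's layout, not its documented interface:

```shell
# Hypothetical custom debug script; layer-debug is assumed to execute every
# executable file it finds in the charm's debug-scripts/ directory.
mkdir -p debug-scripts
cat > debug-scripts/disk-usage <<'EOF'
#!/bin/sh
# Record filesystem usage alongside the other debug reports
df -h > disk-usage.txt
EOF
chmod +x debug-scripts/disk-usage

# Run it standalone to verify it works before baking it into the charm
./debug-scripts/disk-usage
```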

How to contact us:

We're normally found in the Kubernetes Slack channels and regularly attend the SIG meetings.

We also monitor the Kubernetes mailing lists and other community channels, so feel free to reach out to us. As always, PRs, recommendations, and bug reports are welcome. We hope you enjoy this release as much as we enjoyed bringing it to you!
