Often, "existing infrastructure" means working within the VPC and/or subnets that have already been allocated to me for my work on AWS.
What better way than a little hands-on to show how Juju can interact with your infrastructure and leverage a predefined network environment?
The rest of this post assumes that:
About this last point, as a reminder: if you have not yet installed Juju, do it by entering the following commands on an Ubuntu 16.04 machine or VM:
sudo apt-add-repository ppa:juju/stable
sudo apt-add-repository ppa:conjure-up/next
sudo apt update && sudo apt upgrade -yqq
sudo apt install -yqq juju conjure-up
For other OSes, look up the official docs.
Then to connect to the AWS cloud with your credentials, read this page
Now that you are ready, this is what we are going to do:
The target is to deploy a multi-AZ cluster to achieve a proper level of HA between our worker nodes and the control plane.
The design below shows
This CloudFormation template defines all of these. We deploy it using the AWS console: go to CloudFormation and create a new stack, as shown in the images below.
Juju will require the following setup:
First let us select the JSON file
Add some information
Add a few options
and voilà! Now we have a nice setup with private and public subnets in a given VPC.
Now let us bring up Kubernetes in our new setup
Bootstrap a Juju Controller with:
juju bootstrap aws/us-west-2 k8s-us-west-2 \
  --config vpc-id=vpc-fa6dfa9d --config vpc-id-force=true \
  --to "subnet=subnet-bb1ab2dc" \
  --bootstrap-constraints "root-disk=128G mem=8G" \
  --credential canonical \
  --bootstrap-series xenial
Which will output something like:
WARNING! The specified vpc-id does not satisfy the minimum Juju requirements, but will be used anyway because vpc-id-force=true is also specified.
Using VPC "vpc-fa6dfa9d" in region "us-west-2"
Creating Juju controller "k8s-us-west-2" on aws/us-west-2
Looking for packaged Juju agent version 2.1-beta5 for amd64
Launching controller instance(s) on aws/us-west-2...
 - i-001a9dce9beb162fd (arch=amd64 mem=8G cores=2)
Fetching Juju GUI 2.2.7
Waiting for address
Attempting to connect to 220.127.116.11:22
Attempting to connect to 10.0.1.254:22
Logging to /var/log/cloud-init-output.log on the bootstrap machine
Running apt-get update
Running apt-get upgrade
Installing curl, cpu-checker, bridge-utils, cloud-utils, tmux
Fetching Juju agent version 2.1-beta5 for amd64
Installing Juju machine agent
Starting Juju machine agent (service jujud-machine-0)
Bootstrap agent now started
Contacting Juju controller at 10.0.1.254 to verify accessibility...
Bootstrap complete, "k8s-us-west-2" controller now available.
Controller machines are in the "controller" model.
Initial model "default" added.
At this point, Juju doesn't know about the mapping between private and public subnets; we need to teach it. Create two spaces with identifiable, meaningful names:
juju add-space public
added space "public" with no subnets
juju add-space private
added space "private" with no subnets
Now separate the subnets that are private from those that are public, using the MapPublicIpOnLaunch property of each subnet as the discriminating factor:
# Resolves private subnets:
aws ec2 describe-subnets \
  --filter Name=vpc-id,Values=vpc-56416e32 \
  | jq --raw-output \
  '.Subnets[] | select(.MapPublicIpOnLaunch == false) | .SubnetId'
subnet-ba1ab2dd
subnet-f44486bd

and

# Resolves public subnets:
aws ec2 describe-subnets \
  --filter Name=vpc-id,Values=vpc-56416e32 \
  | jq --raw-output \
  '.Subnets[] | select(.MapPublicIpOnLaunch == true) | .SubnetId'
subnet-bb1ab2dc
subnet-f24486bb
Add the subnets to their respective spaces:
juju add-subnet subnet-ba1ab2dd private
added subnet with ProviderId "subnet-ba1ab2dd" in space "private"
juju add-subnet subnet-f44486bd private
added subnet with ProviderId "subnet-f44486bd" in space "private"
juju add-subnet subnet-bb1ab2dc public
added subnet with ProviderId "subnet-bb1ab2dc" in space "public"
juju add-subnet subnet-f24486bb public
added subnet with ProviderId "subnet-f24486bb" in space "public"
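If you have many subnets, the discovery and mapping steps can be combined. Below is a minimal sketch, assuming the AWS CLI and jq are installed and configured; the helper function simply translates the MapPublicIpOnLaunch flag reported by EC2 into the space name used above.

```shell
# Maps the MapPublicIpOnLaunch value ("true"/"false") to a Juju space name.
space_for_subnet() {
  case "$1" in
    true)  echo public ;;
    false) echo private ;;
    *)     echo unknown ;;
  esac
}

# Against a live account, feed it the real flags (VPC id from the stack above):
#   aws ec2 describe-subnets --filter Name=vpc-id,Values=vpc-56416e32 \
#     | jq -r '.Subnets[] | "\(.SubnetId) \(.MapPublicIpOnLaunch)"' \
#     | while read -r subnet flag; do
#         juju add-subnet "$subnet" "$(space_for_subnet "$flag")"
#       done
```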
Now Juju knows the mapping of your design in AWS. You are now ready to deploy Kubernetes.
Quick summary of the design targets
Ready? In Juju 2.0.x, you have to deploy manually so that network space constraints are taken into account.
First, deploy your support applications with:
juju deploy --constraints "instance-type=m3.medium spaces=private" cs:~containers/etcd-23
juju deploy --constraints "instance-type=m3.medium spaces=private" cs:~containers/easyrsa-6

Now enforce your constraints and scale out etcd:

juju set-constraints etcd "instance-type=m3.medium spaces=private"
juju add-unit -n2 etcd

Now deploy the Kubernetes Core applications, enforce constraints and scale out:

juju deploy --constraints "cpu-cores=2 mem=8G root-disk=32G spaces=public" cs:~containers/kubernetes-master-11
juju deploy --constraints "instance-type=m4.xlarge spaces=public" cs:~containers/kubernetes-worker-13
juju deploy cs:~containers/flannel-10
juju set-constraints kubernetes-worker "instance-type=m4.xlarge spaces=public"
juju add-unit -n2 kubernetes-worker
Create the relations between the components:
juju add-relation kubernetes-master:cluster-dns kubernetes-worker:kube-dns
juju add-relation kubernetes-master:certificates easyrsa:client
juju add-relation etcd:certificates easyrsa:client
juju add-relation kubernetes-master:etcd etcd:db
juju add-relation kubernetes-worker:certificates easyrsa:client
juju add-relation flannel:etcd etcd:db
juju add-relation flannel:cni kubernetes-master:cni
juju add-relation flannel:cni kubernetes-worker:cni
juju add-relation kubernetes-worker:kube-api-endpoint kubernetes-master:kube-api-endpoint
and expose the master, to connect to the API, and the workers, to get access to the workloads:
juju expose kubernetes-master
juju expose kubernetes-worker
You can track the deployment with
watch -c juju status --color
and get a dynamic view on:
Model    Controller     Cloud/Region   Version
default  k8s-us-west-2  aws/us-west-2  2.1-beta5

App                Version  Status  Scale  Charm              Store       Rev  OS      Notes
easyrsa            3.0.1    active  1      easyrsa            jujucharms  6    ubuntu
etcd               2.2.5    active  3      etcd               jujucharms  23   ubuntu
flannel            0.7.0    active  4      flannel            jujucharms  10   ubuntu
kubernetes-master  1.5.2    active  1      kubernetes-master  jujucharms  11   ubuntu  exposed
kubernetes-worker  1.5.2    active  3      kubernetes-worker  jujucharms  13   ubuntu

Unit                  Workload  Agent  Machine  Public address  Ports           Message
easyrsa/0*            active    idle   2        10.0.251.198                    Certificate Authority connected.
etcd/0*               active    idle   1        10.0.252.237    2379/tcp        Healthy with 3 known peers.
etcd/1                active    idle   6        10.0.251.143    2379/tcp        Healthy with 3 known peers.
etcd/2                active    idle   7        10.0.251.31     2379/tcp        Healthy with 3 known peers.
kubernetes-master/0*  active    idle   0        18.104.22.168   6443/tcp        Kubernetes master running.
  flannel/0*          active    idle            22.214.171.124                  Flannel subnet 10.1.37.1/24
kubernetes-worker/0*  active    idle   3        126.96.36.199   80/tcp,443/tcp  Kubernetes worker running.
  flannel/3           active    idle            188.8.131.52                    Flannel subnet 10.1.11.1/24
kubernetes-worker/1   active    idle   4        184.108.40.206  80/tcp,443/tcp  Kubernetes worker running.
  flannel/1           active    idle            220.127.116.11                  Flannel subnet 10.1.43.1/24
kubernetes-worker/2   active    idle   5        18.104.22.168   80/tcp,443/tcp  Kubernetes worker running.
  flannel/2           active    idle            22.214.171.124                  Flannel subnet 10.1.68.1/24

Machine  State    DNS             Inst id              Series  AZ
0        started  126.96.36.199   i-0a3fdb3ce9590cb7e  xenial  us-west-2a
1        started  10.0.252.237    i-0dcbd977bee04563b  xenial  us-west-2b
2        started  10.0.251.198    i-04cedb17e22064212  xenial  us-west-2a
3        started  188.8.131.52    i-0f44e7e27f776aebf  xenial  us-west-2b
4        started  184.108.40.206  i-02ff8041a61550802  xenial  us-west-2a
5        started  220.127.116.11  i-0a4505185421bbdaf  xenial  us-west-2a
6        started  10.0.251.143    i-05a855d5c0c6f847d  xenial  us-west-2a
7        started  10.0.251.31     i-03f1aafe15d163a34  xenial  us-west-2a

Relation      Provides           Consumes           Type
certificates  easyrsa            etcd               regular
certificates  easyrsa            kubernetes-master  regular
certificates  easyrsa            kubernetes-worker  regular
cluster       etcd               etcd               peer
etcd          etcd               flannel            regular
etcd          etcd               kubernetes-master  regular
cni           flannel            kubernetes-master  regular
cni           flannel            kubernetes-worker  regular
cni           kubernetes-master  flannel            subordinate
kube-dns      kubernetes-master  kubernetes-worker  regular
cni           kubernetes-worker  flannel            subordinate
Here we can see how our nodes are spread across private and public subnets.
Just as you can scale network spaces and subnets on your own, you can also label nodes in specific areas in order to run specific workloads on them.
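As a minimal sketch of that idea, once kubectl is configured you could tag each node with the tier of the subnet it was placed in, then schedule pods with a matching nodeSelector. The label key (subnet-tier) and the node name below are hypothetical, not something Juju sets for you.

```shell
# Builds the key=value argument for `kubectl label`, e.g. "subnet-tier=private".
tier_label() {
  echo "subnet-tier=$1"
}

# Against the live cluster (node names come from `kubectl get nodes`):
#   kubectl label node ip-10-0-1-54 "$(tier_label private)"
#   kubectl get nodes -l subnet-tier=private
```

A pod spec can then pin workloads to a tier with `nodeSelector: {subnet-tier: private}`.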
First, download kubectl and the kubeconfig file from the master:
mkdir -p ~/.kube
juju scp kubernetes-master/0:/home/ubuntu/kubectl ./
juju scp kubernetes-master/0:/home/ubuntu/config ./.kube/
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
Test that the connection is ok with:
kubectl get nodes --show-labels
NAME          STATUS  AGE  LABELS
ip-10-0-1-54  Ready   18m  beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-10-0-1-54
ip-10-0-1-95  Ready   18m  beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-10-0-1-95
ip-10-0-2-43  Ready   18m  beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-10-0-2-43
Deploy the demo application (microbots)
juju run-action kubernetes-worker/0 microbot replicas=5
Action queued with id: 1a76d3f7-f82c-48ee-84f4-c4f77f3a453d
Check the output:
juju show-action-output 1a76d3f7-f82c-48ee-84f4-c4f77f3a453d
results:
  address: microbot.18.104.22.168.xip.io
status: completed
timing:
  completed: 2017-02-06 15:51:54 +0000 UTC
  enqueued: 2017-02-06 15:51:52 +0000 UTC
  started: 2017-02-06 15:51:53 +0000 UTC
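If you want to script against this, a small sketch: extract the published address from the action output so it can be fed to a smoke test. This is a plain-text filter over the output format shown above, nothing more.

```shell
# Pulls the value of the "address:" field out of `juju show-action-output` text.
action_address() {
  sed -n 's/^ *address: *//p'
}

# Usage against the live controller:
#   juju show-action-output 1a76d3f7-f82c-48ee-84f4-c4f77f3a453d | action_address
```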
Now you can go to the DNS endpoint, refresh the app, and see how the application is deployed.
Using the Canonical Distribution of Kubernetes and AWS CloudFormation, we simulated the deployment of a Kubernetes cluster in an existing environment, instead of a Juju-generated network setup.
There are many other ways to leverage the automated deployment after completion. For example, Juju allocates tags to instances that identify the current controller, model and units. You can therefore reuse that information in other CloudFormation templates to create ELBs and map them to units or groups of units.
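A hedged sketch of that idea: build an EC2 filter matching the instances backing a given unit. The tag key used here, juju-units-deployed, is an assumption about how this Juju version tags instances; verify the actual keys on your instances with `aws ec2 describe-tags` before relying on it.

```shell
# Builds an `aws ec2` --filters expression for instances hosting a Juju unit.
# NOTE: the tag key "juju-units-deployed" is an assumption; check your instances.
juju_unit_filter() {
  echo "Name=tag:juju-units-deployed,Values=$1"
}

# Usage against a live account, e.g. to collect worker instance ids for an ELB:
#   aws ec2 describe-instances --filters "$(juju_unit_filter 'kubernetes-worker/0')" \
#     | jq -r '.Reservations[].Instances[].InstanceId'
```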
Of course, this doesn’t only apply to Kubernetes, and you can use the same mechanism for all the other workloads Juju can deploy, such as Big Data solutions. Same tool, different purposes…