Spicule - Data Processing Experts

How to Deploy Applications at Scale in Kubernetes

Kubernetes is the hot new property on the block, but its tooling leaves a little to be desired. Kubectl is fine for checking the state of your cluster and deploying basic Docker containers (pods), but what happens if you want to deploy a multi-container application that can scale, distribute itself and understand the environment in which it lives? The current incumbent is Helm, which solves some of these problems, but not all of them. For the end user it often involves editing a pile of YAML to configure the application correctly, and Helm charts can be a little hit and miss in both their stability and the version of the software they deploy.

What about software in multiple locations? - Software at scale

Kubernetes is great, but not everything runs in Kubernetes, and some workloads are simply not designed for containerisation. So what do we do with these? We can deploy them into a public or private cloud, or onto bare metal somewhere, but in both cases they live outside the container ecosystem, and the containers will need configuring to point at these services.

Let me introduce Juju. Juju is a software orchestration platform from Canonical which for a number of years has happily supported deploying software of pretty much any variety to public cloud services, OpenStack, bare metal, LXD and more. Just recently, though, it developed a new superpower: the ability to manage software in Kubernetes. This includes configuring the container, updating container configuration, creating the required services for exposing the containers and so on.

But why use Juju?

Let us look at this purely from an end user's perspective.

A bare-bones Kubernetes deployment and service looks something like this:

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: hello
    tier: frontend
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: hello
      tier: frontend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: hello
        tier: frontend
        track: stable
    spec:
      containers:
      - name: nginx
        image: "gcr.io/google-samples/hello-frontend:1.0"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx","-s","quit"]

Which, whilst it does the job, isn't the most readable code ever. More importantly, beyond the initial deployment, if an end user wants to make changes they have to edit that file, save it, tear down the old objects and apply the new deployment and service.
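That manual workflow looks roughly like the following sketch, assuming the manifest above is saved as frontend.yaml (an assumed filename) and kubectl is pointed at your cluster:

```shell
# Initial deploy of the Service and Deployment defined above.
kubectl apply -f frontend.yaml

# ...hand-edit frontend.yaml to change the image, replicas, etc...

# Tear down the old objects and redeploy with the new config.
kubectl delete -f frontend.yaml
kubectl apply -f frontend.yaml
```

(In fairness, `kubectl apply` can often update objects in place, but you still own the YAML editing and the apply cycle yourself.)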

This is where Juju excels. Sure, developers still need to write charms (code that encapsulates the container being deployed), but once that is done, deploying a charm to Kubernetes looks like this:

juju deploy ~spiculecharms/saiku-k8s

which, and I might be a little biased here, is much easier to both read and understand. Right?

What black magic is this?

So let's look quickly at what's going on here. Juju has already been bootstrapped on my Kubernetes cluster, so they are aware of each other. I've then told it to deploy a charm I've written called Saiku. Juju then deployed a control pod and a ReplicaSet for Saiku from a container I'd defined with my charm. This means that as newer versions of the software and charm are released, the deployment code should always stay in sync with my container, which is often a stumbling block when testing out new software you're just trying to spin up quickly.
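For completeness, the one-off setup that gets Juju and the cluster aware of each other looks something like this sketch. The cloud and model names ("my-k8s", "saiku-demo") are assumptions for illustration:

```shell
# Register an existing Kubernetes cluster with Juju (reads your kubeconfig).
juju add-k8s my-k8s

# Bootstrap a Juju controller onto that cluster.
juju bootstrap my-k8s

# Create a model to deploy into, then deploy the charm as above.
juju add-model saiku-demo
juju deploy ~spiculecharms/saiku-k8s
```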

The other thing it does, because I asked it to as a charm author, is create a service and attach the ReplicaSet to that service. It will also deal with creating and assigning volume claims should you need it to. My Kubernetes cluster is connected to OpenStack, and specifically to the load balancer set up in it, so when the service is deployed Kubernetes will go and grab an IP address to make my container publicly accessible. And I've written one line!
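You can verify what was created from either side. These are standard Juju and kubectl commands; the "saiku-demo" namespace below assumes a model of that name, since Juju creates a Kubernetes namespace per model:

```shell
# Juju's view: shows the application, its units and the public address
# once the LoadBalancer has been assigned an IP.
juju status saiku-k8s

# Kubernetes' view of the same objects Juju created.
kubectl -n saiku-demo get pods,services
```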

It's also worth mentioning that once this is set up, I can deploy charms from pretty much anywhere as long as I have a console; I don't need to git clone some YAML file or SSH back into a workstation somewhere.

How Juju allows hooks into other software

So we've got our first piece of software deployed. Big wow... (okay, I think it is quite impressive, but bear with me!). Now we want to do what we do with Juju outside of Kubernetes and handle the events that fire when pieces of software find out about each other. For this, Juju has two concepts: interfaces and relations. These allow us to join multiple pieces of software together over pre-defined interfaces that provide bi-directional metadata transfer. You can only relate software whose charms incorporate the matching interfaces, and what they do with this metadata is up to the charm developer.
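As a sketch, a charm declares the interfaces it speaks in its metadata.yaml. The endpoint name ("db") and interface name ("mysql") below are illustrative assumptions, not taken from the actual Saiku charm:

```yaml
# Hypothetical excerpt from a charm's metadata.yaml.
# Declaring a "requires" endpoint means this charm can be related to
# any charm that "provides" the same interface.
requires:
  db:
    interface: mysql
```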

Take a web app and a database as a good example. In most cases you'd want to deploy your web application and database, then configure the web application to write its config to the database with a unique username and password, on a specific IP address, with permissions set up so it can only be accessed from the client side. Sound familiar? But when setting this up, who gets lazy, can't be bothered to figure out the client-side IP, and just wildcards it? (Sound familiar again?) Juju can remove all this manual work for the charm developer, and therefore the charm consumer, by implementing relations.

juju deploy ~spiculecharms/saiku-k8s
juju deploy ~charmed-osm/mariadb-k8s

Expanding on our previous example, let's deploy MariaDB from a completely separate charm author alongside Saiku. This brings up a second pod set, with, you guessed it, MariaDB inside.

That's cool, but they still don't know about each other. To introduce them, you then run:

juju add-relation saiku-k8s mariadb-k8s

At this point the hooks fire, and the charms configure themselves as defined in the code. MariaDB will create a new database for the client, and the client will get the database name, username and password it needs to configure itself appropriately.
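If you want to watch this happen, assuming the two charms above are deployed, the standard Juju tooling will show the hook activity and the resulting relation (exact output varies by Juju version):

```shell
# Tail the model's logs and watch the relation hooks fire in real time.
juju debug-log --tail

# Confirm the saiku-k8s <-> mariadb-k8s relation is established.
juju status --relations
```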

Cool huh?

You'll recall that earlier in this post I mentioned the various pieces of software might not be in the same environment. For example, your MariaDB database might be in AWS somewhere. For this, Juju offers cross-model relations: as long as your database is also deployed by Juju, a cross-model relation provides the same metadata transfer from one service to another, even though they live in separate environments.

This means you can deploy your software into the place that makes the most sense, and still manage them as if they were in the same place.
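A cross-model relation is set up with Juju's offer/consume commands. This is a sketch; the model name ("db-model"), user ("admin") and endpoint name ("mysql") are assumptions for illustration:

```shell
# In the model hosting the database (e.g. a model on AWS):
# publish the database endpoint as an offer.
juju offer mariadb:mysql

# In the Kubernetes model hosting Saiku:
# consume the offer, then relate to it as if it were local.
juju consume admin/db-model.mariadb
juju add-relation saiku-k8s mariadb
```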

How do I scale?

Simple!

juju add-unit -n <num-units> saiku-k8s

How do I tear it down?

Also simple!

juju remove-application saiku-k8s

How do I find out more?

Also simple!

Visit http://jaas.ai for the cloud stuff

And the Discourse forum for more in-depth information about the Kubernetes charms ecosystem, for example: https://discourse.jujucharms.c...

I hope you find this useful!

Categories:

Juju Containers Deployment Canonical Kubernetes Automation Docker Enterprise