Deploying a docker container of a CherryPy application onto a CoreOS cluster

Previously, I presented a simple web application that was distributed across several docker containers. In this article, I will introduce the CoreOS platform as the backend for clustering a CherryPy application.

CoreOS quick overview

CoreOS is a Linux distribution designed to support distributed/clustering scenarios. I will not spend too much time explaining it here as their documentation already provides lots of information. More specifically, review their architecture use-cases for a good overview of how CoreOS is articulated.

What matters to us in this article is that we can use CoreOS to manage a cluster of nodes that will host our application as docker containers. To achieve this, CoreOS relies on technologies such as systemd, etcd and fleet at its core.

Each CoreOS instance within the cluster runs a Linux kernel which executes systemd to manage processes within that instance. etcd is a distributed key/value store used across the cluster to enable service discovery and configuration synchronization within the cluster. Fleet is used to manage services executed within your cluster. Those services are described in files called unit files.

Roughly speaking, you use a unit file to describe your service and specify which docker container to execute. Using fleet, you submit and load that service onto the cluster before starting/stopping it at will. CoreOS will determine which host to deploy it on (you can set up constraints that CoreOS will follow). Once loaded onto a node, the node’s systemd takes over to manage the service locally and you can use fleet to query the status of that service from the outside.

Setup your environment with Vagrant

Vagrant is a nifty tool to orchestrate small deployments on your development machine. For instance, here is a simple command to create a node with Ubuntu running on it:
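Something along these lines does the trick (the box name is just an example taken from the public Vagrant catalog):

  $ vagrant init ubuntu/trusty64
  $ vagrant up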

Vagrant has a fairly rich command line you can script to generate a final image. However, Vagrant usually provisions virtual machines by following a description found within a simple text file (well, actually it’s Ruby code) called a Vagrantfile. This is the path we will be following in this article.
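To give you an idea of what such a file contains, here is a stripped-down sketch for a single CoreOS node. The box name, memory size and static IP address are illustrative (the IP is the usual coreos-vagrant default and stands in for whatever address the real Vagrantfile sets); the actual Vagrantfile in the repository below is more complete:

  Vagrant.configure("2") do |config|
    # illustrative CoreOS box name
    config.vm.box = "coreos-stable"

    # static address we will see again when querying the cluster with fleet
    config.vm.network "private_network", ip: "172.17.8.101"

    # host port 7070 maps to guest port 8091, where the application is exposed
    # (cf. the end of this article)
    config.vm.network "forwarded_port", guest: 8091, host: 7070

    config.vm.provider "virtualbox" do |vb|
      vb.memory = 1024
    end
  end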

Let’s get the code:

From there you can create the cluster as follows:
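In my setup this is wrapped into a small helper; assuming it is exposed as a script named create_cluster (a hypothetical name, check the repository for the actual entry point):

  $ ./create_cluster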

I am not using vagrant directly to create the cluster because there are a few other operations that must be carried out for fleet to talk to the CoreOS node properly (sketched just after this list). Namely:

  • Generate a new cluster ID (via https://discovery.etcd.io/new)
  • Start an SSH agent to handle the node’s SSH identities so we can connect from the outside
  • Indicate where to locate the node’s SSH service (through a port mapped by Vagrant)
  • Create the cluster (this calls vagrant up internally)

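Put together, the helper roughly boils down to the following (paths, ports and file names are illustrative):

  # fetch a fresh discovery token; its URL then goes into the node's cloud-config (user-data)
  $ curl -s https://discovery.etcd.io/new

  # start an agent and register Vagrant's default insecure key so fleet can SSH into the node
  $ eval $(ssh-agent)
  $ ssh-add ~/.vagrant.d/insecure_private_key

  # tell fleetctl which host/port the node's SSH service is mapped to by Vagrant
  $ export FLEETCTL_TUNNEL=127.0.0.1:2222

  # finally boot the node
  $ vagrant up
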
Once completed, you should have a running CoreOS node that you can log into:
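For instance (core-01 is the default name used by the coreos-vagrant setup, yours may differ):

  $ vagrant ssh core-01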

To destroy the cluster and terminate the node:
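This is again wrapped in a helper; under the hood it is roughly equivalent to:

  $ vagrant destroy -f
  $ ssh-agent -k    # stop the agent started during the creation step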

This also takes care of wiping out local resources that we don’t need any longer.

Before moving on, you will need to install the fleet tools.
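fleetctl is a standalone Go binary; on OS X it can be installed through Homebrew, otherwise grab a release archive from the fleet GitHub repository:

  $ brew install fleetctl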

Run your CherryPy application onto the cluster

If you have destroyed the cluster, re-create it and make sure you can speak to it through fleet as follows:
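With FLEETCTL_TUNNEL exported as shown earlier, fleetctl can talk to the node from your host (the machine ID and address below are obviously illustrative):

  $ fleetctl list-machines
  MACHINE         IP              METADATA
  2c3d0f61...     172.17.8.101    -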

Bingo! This is the public address we statically set in the Vagrantfile associated with the node.

Let’s ensure we have no registered units yet:
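Both the registered unit files and the scheduled units can be listed; at this point both listings come back empty:

  $ fleetctl list-unit-files
  $ fleetctl list-units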

Okay, all is good. Now, let’s push each of our units to the cluster:
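This is done with fleetctl submit. Only webapp_app@.service is named explicitly in this article; the other file names below are placeholders, the exact list depends on the containers from the previous article:

  $ fleetctl submit webapp_db.service webapp_app@.service webapp_lb.service
  $ fleetctl list-unit-files    # the files now show up as registered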

As you can see, the unit files have been registered but they are not loaded onto the cluster yet.

Notice the naming convention used for webapp_app@.service: this is because the file will not be considered a service description in itself but a template for a named service. We will see this in a minute. Refer to this extensive DigitalOcean article for more details regarding unit files.
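As a minimal sketch, such a template could look like the following; the docker image name is hypothetical and the real unit file certainly carries more options, but it shows how the %i instance identifier ends up in the container’s name:

  [Unit]
  Description=Notes application instance %i
  Requires=docker.service
  After=docker.service

  [Service]
  TimeoutStartSec=0
  # remove any stale container of the same name, ignoring errors thanks to the leading '-'
  ExecStartPre=-/usr/bin/docker rm -f notes%i
  # 'notes1', 'notes2', ... is the name the load-balancer expects when linking
  ExecStart=/usr/bin/docker run --name notes%i hypothetical/webapp_app
  ExecStop=/usr/bin/docker stop notes%i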

Let’s now load each unit onto the cluster:
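Loading is one fleetctl load call per unit; sticking to the placeholder name used above for the non-template unit:

  $ fleetctl load webapp_db.service
  $ fleetctl list-units    # the unit is now attached to our single machine, still inactive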

Here, we asked fleet to load the service onto an available node. Considering there is a single node, it wasn’t a difficult decision to make.

At that stage, your service is not started. It is simply attached to a node.

It is not compulsory to explicitly load before starting a service. However, it gives you the opportunity to unload a service if a specific condition occurs (the service needs to be amended, the chosen host isn’t valid any longer…).

Now we can finally start it:
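Keeping the same placeholder unit name:

  $ fleetctl start webapp_db.service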

You can see what’s happening:
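The unit’s journal can be followed straight from your host:

  $ fleetctl journal -f webapp_db.service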

Or alternatively, you can request the service’s status:
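This essentially runs systemd’s own status query on the node the unit was scheduled to:

  $ fleetctl status webapp_db.service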

Once the service is ready:
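fleet reports it as active in the units listing:

  $ fleetctl list-units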

Starting a service from a unit template works the same way except you provide an identifier to the instance:
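For instance, with 1 as the identifier (the template name is the one registered earlier):

  $ fleetctl start webapp_app@1.service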

The reason I chose 1 as the identifier is so that the container’s name becomes notes1, as expected by the load-balancer container when linking it to the application’s container, as described in the previous article.

Start a second instance of that unit template:
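Same command, new identifier:

  $ fleetctl start webapp_app@2.service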

That second instance starts immediately because the image is already there.

Finally, once both services are marked as “active”, you can start the load-balancer service as well:
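Using the placeholder name for the load-balancer unit:

  $ fleetctl start webapp_lb.service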

At that stage, the complete application is up and running and you can go to http://localhost:7070/ to use it. Port 7070 is mapped to port 8091 by Vagrant within our Vagrantfile.

No such thing as a free lunch

As I said earlier, we created a cluster of one node on purpose. Indeed, the way all our containers are able to dynamically know where to locate each other is through the linking mechanism. Though this works very well in simple scenarios like this one, it has a fundamental limit since you cannot link across different hosts. If we had multiple nodes, fleet would try distributing our services across all of them (unless we decided to constrain this within the unit files) and this would obviously break the links between them. This is why, in this particular example, we create a single-node cluster.

Docker provides a mechanism named ambassador to address this restriction, but we will not review it here. Instead, we will benefit from the flat sub-network topology provided by weave, as it seems to follow a more traditional path than docker’s linking approach. This will be the subject of my next article.
