A cluster of Tuleap with Docker Swarm

Docker swarm was announced during the last DockerCon Europe. It’s a docker clustering solution designed to answer the question, “What do I use when I need a cluster between 2 and 100 machines?”

What is swarm

For single-host deployments there is fig (which is planned to become compose), while for large clusters fleet, mesos or kubernetes would probably be a better fit.

The good thing with swarm is that it’s dead simple to get started with (pretty much like docker itself). My first attempt took 45 minutes from fetching the binaries to having a working container running on the cluster.

You can watch a video of Victor Vieux and Andrea Luzzardi presenting an early version of swarm, just after docker global hack day.

Case study

For now, I’ll use Tuleap as an example. My case is pretty straightforward. I just want to run several instances of Tuleap All In One. That’s pretty much what we do for mytuleap.com except we have to glue all the pieces together with ansible for the time being.

On the cluster side, I got two servers @rackspace running the latest docker version (1.4+ is mandatory for swarm to work).

The cluster manager will run on my laptop. It is a specialized docker server that distributes the workload across the cluster.

Getting started

The first step is to get the swarm binary. As I don’t have a Go build environment, I’ll use a docker container for that.

/!\ I’m completely new to Go, so there are probably better ways to do this, but… hey, it works.

$> docker run -ti --name=buildswarm golang bash
root@9f70c50631da:/go$> go get -u github.com/docker/swarm
root@9f70c50631da:/go$> exit
$> docker cp buildswarm:/go/bin/swarm .
$> ./swarm -v
swarm version 0.0.1

Copy the swarm binary to each cluster node and configure the docker daemon to listen on a public interface: edit /etc/default/docker, set DOCKER_OPTS="-H 0.0.0.0:2375" and restart the docker daemon with service docker restart.
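
On a Debian/Ubuntu style node this looks roughly like the following (the file location and init system may differ on your distribution):

root@node1:$> grep DOCKER_OPTS /etc/default/docker
DOCKER_OPTS="-H 0.0.0.0:2375"
root@node1:$> service docker restart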

/!\ WARNING: by doing this, anyone can run any docker command on your server (and hence do anything). Only do it on a throwaway server meant to be deleted at the end of the test. For future production usage, there is TLS to secure the communication between client and server.

The rest of the setup is very well described in the swarm documentation. So:

root@node1:$> ./swarm create
256306bca12a54833b32569162db184e
root@node1:$> ./swarm join --discovery token://256306bca12a54833b32569162db184e --addr=node1.ip.address:2375
root@node2:$> ./swarm join --discovery token://256306bca12a54833b32569162db184e --addr=node2.ip.address:2375
manuel@laptop:$> ./swarm manage --discovery token://256306bca12a54833b32569162db184e -H=127.0.0.1:2375

And that’s all!
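
To check that both nodes registered, you can already point a plain docker client at the manager: against a swarm manager, docker info lists the nodes of the cluster (output shortened here, the exact layout depends on the swarm version):

manuel@laptop:$> docker -H 127.0.0.1:2375 info
[...]
Nodes: 2
 node1: node1.ip.address:2375
 node2: node2.ip.address:2375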

Note: if you ever need to connect directly to the docker daemon on a node, do not forget to set the DOCKER_HOST environment variable, e.g. export DOCKER_HOST=node1.ip.address:2375.

From now on you can use your regular docker commands to start and stop containers. You only need to point the client at the swarm manager to run a command:

manuel@laptop:$> docker -H 127.0.0.1:2375 run -d -e VIRTUAL_HOST=localhost -p 80:80 enalean/tuleap-aio 

The command does not return immediately (as far as I understand, it only returns once the image has been pulled on the node that will run the container).
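
Since the goal of the case study is to run several Tuleap instances, a small shell loop is enough to start a few of them; the VIRTUAL_HOST values below are made up, and I use -P instead of -p 80:80 so that two containers landing on the same node do not fight for port 80:

manuel@laptop:$> for i in 1 2 3; do
>   docker -H 127.0.0.1:2375 run -d -e VIRTUAL_HOST=tuleap$i.example.com -P enalean/tuleap-aio
> done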

Once it’s done, you can get the info about the running container with docker ps:

manuel@laptop:$> export DOCKER_HOST=127.0.0.1:2375
manuel@laptop:$> docker ps
CONTAINER ID        IMAGE                       COMMAND              CREATED             STATUS              PORTS                                       NAMES
0d5db96f2271        enalean/tuleap-aio:latest   "/root/app/run.sh"   53 seconds ago      Up 41 seconds       22/tcp, 443/tcp, 134.213.53.91:80->80/tcp   swarmtestmv/gloomy_stallman

Notice the IP address as well as the special naming scheme (nodename/random). Open the IP address in your browser and “tadaa”, it works.
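
If you prefer to stay in the terminal, a quick curl against the published address reported by docker ps does the same check (the exact status code depends on Tuleap’s redirection setup):

manuel@laptop:$> curl -sI http://134.213.53.91/ | head -n 1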

Current limits

As you have seen, it’s dead simple to get a swarm cluster running.

I think @docker took the right approach: make it simple so people jump in very easily.

For the time being, the key thing I was not able to do properly is volume management. I would expect my DB to be able to run anywhere on the cluster, but if the data volume that provides persistence is not on that node, I’m stuck. However, I’m confident there will soon be ways to do this properly, maybe linked to the ongoing work on making volumes first-class citizens in docker.
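
To make the limit concrete, this is the kind of container I would like swarm to be able to schedule anywhere; the image and host path are only an illustration, and the bind mount is precisely what ties it to a single node today:

manuel@laptop:$> docker -H 127.0.0.1:2375 run -d --name=tuleap-db \
    -e MYSQL_ROOT_PASSWORD=secret \
    -v /srv/tuleap/mysql:/var/lib/mysql mysql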

I’m looking forward to the first official releases.

About the Author

Creating economic value for a company with libre software is a great challenge, and I enjoy it! It encourages me to think about business and communication in a disruptive way. I believe in the core values of FLOSS and the agile spirit: open-minded listening, transparency and co-creation. I'm Marketing Manager at Enalean.
