This content was written a long time ago. As such, it might not reflect my current thoughts anymore. I keep this page online because it might still contain valid information.

Yesterday, I gave a talk on how I use Docker to deploy applications at Clermont’ech API Hour #12, a local French developer group. I explained how to create a simple yet robust infrastructure to deploy a web application and a few services with zero downtime.

In order to monitor my infrastructure, and especially the HTTP responses, I gave the famous ELK stack a try. ELK stands for Elasticsearch, Logstash, Kibana. As I did not really talk about this part, I am going to explain it in this blog post.

The ELK Stack

I wrote a Dockerfile to build an ELK image. While you can use this image directly to run a container (mounting a host folder as a volume for the configuration files), you should probably extend it to add your own configuration so that you can get rid of the host folder mapping. This is one of the Docker best practices. Last but not least, Elasticsearch’s data is located in /data. I recommend that you use a data-only container to persist this data.

$ docker run -d -v /data --name dataelk busybox

$ docker run -p 8080:80 \
    -v /path/to/your/logstash/config:/etc/logstash \
    --volumes-from dataelk \
    willdurand/elk

Logstash Forwarder

In my opinion, such a stack should run on its own server, which is why its logstash configuration should only receive logs from the outside (the production environment, for instance) and send them to Elasticsearch. In other words, we need a tool that collects logs in production and ships them elsewhere for processing. Fortunately, that is exactly the goal of the logstash-forwarder (formerly lumberjack) project!

Below is an example of a logstash configuration that processes logs received on port 5043 thanks to the lumberjack input, and persists them into Elasticsearch. You may notice that Hipache logs are filtered (I actually took this configuration from my production server :p).

input {
  lumberjack {
    port => 5043
    ssl_certificate => "/etc/ssl/logstash-forwarder.crt"
    ssl_key => "/etc/ssl/logstash-forwarder.key"
  }
}

filter {
  if [type] == "hipache" {
    grok {
      patterns_dir => "/etc/logstash/patterns/nginx"
      match => { "message" => "%{NGINXACCESS}" }
    }
  }
}

output {
  elasticsearch {
    host => "127.0.0.1"
    cluster => "logstash"
    # Uncomment the line below if you use Kibana 3.1.0
    # embedded => false
  }
}

It is worth mentioning that logstash-forwarder requires SSL.
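For a quick sketch of how those SSL files could be produced, assuming a self-signed certificate is acceptable for your setup and that logstash.example.org is the hostname the forwarder connects to (as in the configuration below), openssl can generate the key pair in one command:

```shell
# Generate a self-signed certificate valid for one year.
# The CN must match the hostname logstash-forwarder connects to.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout logstash-forwarder.key \
    -out logstash-forwarder.crt \
    -days 365 -subj "/CN=logstash.example.org"
```

Both files then go into /etc/ssl on the logstash server and on each production host; with a self-signed certificate, the certificate itself can also serve as the CA file on the forwarder side.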

Back in the production environment, I wrote a Dockerfile to run logstash-forwarder. You need the same set of SSL files as seen previously in the logstash configuration, and a configuration file for logstash-forwarder. Then again, using this image as a base image is recommended, but for testing purposes, we can mount host folders as volumes:

$ docker run \
    --volume /path/to/your/ssl/files:/etc/ssl \
    --volume /path/to/your/config/file:/etc/logstash-forwarder \
    --volume /var/log/nginx:/var/log/nginx \
    willdurand/logstash-forwarder

The logstash-forwarder config.json file contains the following content. It tells logstash-forwarder to send Hipache logs (found in /var/log/hipache/access.log) to logstash.example.org:5043:

{
  "network": {
    "servers": [ "logstash.example.org:5043" ],
    "ssl certificate": "/etc/ssl/logstash-forwarder.crt",
    "ssl key": "/etc/ssl/logstash-forwarder.key",
    "ssl ca": "/etc/ssl/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/hipache/access.log" ],
      "fields": { "type": "hipache" }
    }
  ]
}

Then again, having data-only containers everywhere would be better, even for logs (and you would use --volumes-from datalogs, for instance).
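As a sketch of what that could look like (the datalogs name and the hipache image are illustrative; only the /var/log/hipache path matches the configuration above):

    # Data-only container holding the logs.
    $ docker run -d -v /var/log/hipache --name datalogs busybox

    # Hipache writes its access log into the shared volume...
    $ docker run -d --volumes-from datalogs hipache

    # ...and logstash-forwarder reads it from the same place.
    $ docker run -d --volumes-from datalogs \
        --volume /path/to/your/ssl/files:/etc/ssl \
        --volume /path/to/your/config/file:/etc/logstash-forwarder \
        willdurand/logstash-forwarder

This way, no host folder is involved: the log producer and the forwarder simply share the same volume.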

Kibana

You are all set! You can now create your own dashboards in Kibana. Here is mine to monitor the HTTP responses of the load balancer:

Need inspiration? Watch this video if you speak French…

Also…

The Twelve-Factor App methodology has a point saying that logs should be written to stdout/stderr, which does not always seem possible to me, but if you do that, then you will probably be interested in logspout.
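For reference, logspout attaches to the Docker daemon and routes every container's stdout/stderr to a remote endpoint. A minimal sketch (the image name may differ depending on the logspout version, and the syslog host and port are placeholders):

    # Mount the Docker socket so logspout can read all containers' output,
    # and forward everything to a remote syslog endpoint.
    $ docker run -d \
        --volume /var/run/docker.sock:/var/run/docker.sock \
        gliderlabs/logspout \
        syslog://logs.example.org:514

That syslog endpoint could then feed logstash through its syslog input instead of the lumberjack one.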

Don’t hesitate to share your point of view :-)