ELK stands for Elasticsearch, Logstash, and Kibana: three tools that can be used together to implement log parsing, aggregation, and analysis. Elasticsearch is a database with a built-in search engine provided by Apache Lucene. Logstash is a log spooler and parser with a plugin architecture: it can take various inputs (UDP, TCP, file, programs), parse and filter them, and send them to various outputs (UDP, TCP, Elasticsearch, file). Kibana is a user interface that works with Elasticsearch to provide log searching and analysis, as well as dashboards. The usage of these services is outside the scope of this walkthrough.

The first half of this post is a recipe for a simple NixOS ELK stack. The second half extends it using the Nix language.

Note: I’m currently using nixos-version 16.03.948.a96c308 (Emu).

Deploying an ELK stack on NixOS

Enabling our services

The first step is to enable each service. Luckily, NixOS already has each component set up as a NixOS service. We will define this as elk.nix:

{ config, pkgs, ... }:
{
  services.logstash = {
    enable = true;
  };
  services.elasticsearch = {
    enable = true;
  };
  services.kibana = {
    enable = true;
  };
}

An attempt to build this configuration will fail. Author’s note: I’m not sure whether I consider this a feature or a bug.

We can include this in our main configuration via the imports line:

configuration.nix:

imports = [ ... ./path/to/elk.nix ];
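In context, a full configuration.nix might look roughly like this (the hardware-configuration.nix import and the elk.nix path are assumptions; adjust them to your system):

```nix
# configuration.nix (sketch)
{ config, pkgs, ... }:
{
  imports = [
    ./hardware-configuration.nix  # hypothetical; whatever you already import
    ./elk.nix                     # the file we are building in this post
  ];

  # ... the rest of your existing system configuration ...
}
```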

Configuring Logstash

Logstash has a series of options for generating its configuration file: inputConfig, outputConfig, and filterConfig.

As noted above, the NixOS configuration will fail to build: we must define inputConfig and outputConfig. Additionally, filterConfig defaults to the ‘noop’ filter, which does not exist in core Logstash as of this writing (it may exist in the contrib package). Either way, we need a functional filter.

We are going to use the configuration described in the service examples, with some changes:

- The elasticsearch output will use the http protocol. This avoids multicast service discovery, which is great for a simple setup.

- We will eventually do per-unit logging.

elk.nix:

{ config, pkgs, ... }:
{
  services.logstash = {
    enable = true;
    plugins = [ pkgs.logstash-contrib ];
    inputConfig = ''
      pipe {
        command => "${pkgs.systemd}/bin/journalctl -f -o json"
        type => "syslog"
        codec => json {}
      }
    '';
    filterConfig = ''
      if [type] == "syslog" {
        # Keep only relevant systemd fields
        # http://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html
        prune {
          whitelist_names => [
            "type", "@timestamp", "@version",
            "MESSAGE", "PRIORITY", "SYSLOG_FACILITY"
          ]
        }
      }
    '';
    outputConfig = ''
      elasticsearch {
        host => "127.0.0.1"
        protocol => "http"
      }
    '';
  };
  ...
}

Each section is wrapped in its named field, so we leave off the input {}, output {}, and filter {} wrappers that we are used to writing in Logstash.

Configuring Elasticsearch

Elasticsearch needs no additional configuration. It listens on IP address 127.0.0.1, port 9200, by default, so nothing more is needed.
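If you prefer to spell those defaults out, the module exposes options for the bind address and port. A sketch, with the caveat that these option names are from memory and have changed across NixOS releases (verify with nixos-option before relying on them):

```nix
services.elasticsearch = {
  enable = true;
  # Assumed option names; equivalent to the defaults, so enable = true alone suffices.
  host = "127.0.0.1";
  port = 9200;
};
```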

Configuring Kibana

The only thing that needs to be done for Kibana is to expose it on a non-internal IP address, and even that is optional.

elk.nix:

{ config, pkgs, ... }:
{
  ...
  services.kibana = {
    enable = true;
    listenAddress = "...";
  };
}

The kibana defaults match the elasticsearch defaults, so not much has to change.

TLS: We can add TLS support via services.kibana.cert and services.kibana.key. We will ignore this for now.
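A sketch of what that could look like, using the services.kibana.cert and services.kibana.key options named above (the file paths are hypothetical; you would provision the certificate and key yourself):

```nix
services.kibana = {
  enable = true;
  listenAddress = "0.0.0.0";          # expose beyond localhost
  # Hypothetical paths to a certificate/key pair you manage:
  cert = "/var/lib/kibana/kibana.crt";
  key  = "/var/lib/kibana/kibana.key";
};
```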

Testing

We can now test our services:

$ curl http://localhost:9200
... elasticsearch JSON payload ...
$ curl http://localhost:5601
... redirect to /app/kibana ...

If you open Kibana at http://localhost:5601, you should immediately see items coming in from systemd’s journal.

Updating our logstash configuration

If we look at the list of incoming messages, there is no information on where they are coming from (which systemd unit, specifically). Let’s fix this by updating our whitelist.

filterConfig = ''
  if [type] == "syslog" {
    # Keep only relevant systemd fields
    # http://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html
    prune {
      whitelist_names => [
        "type", "@timestamp", "@version",
        "MESSAGE", "PRIORITY", "SYSLOG_FACILITY", "_SYSTEMD_UNIT"
      ]
    }
    mutate {
      rename => { "_SYSTEMD_UNIT" => "unit" }
    }
  }
'';

We keep the systemd unit field, then rename it to ‘unit’ for readability.

Introduction to Nix functions and NixOS services

Making inputConfig generic via a custom Nix function

Now that we’ve gotten simple logging of our systemd journal, we can extend this with some Nix scripting.

The first step is to build a function which can simplify our inputConfig and make it generic based on the given unit.

{ config, pkgs, ... }:
let
  fromUnit = unit: ''
    pipe {
      command => "${pkgs.systemd}/bin/journalctl -fu ${unit} -o json"
      tags => "${unit}"
      type => "syslog"
      codec => json {}
    }
  '';
in
  ...

This is a very gentle introduction to Nix scripting, and by no means idiomatic. fromUnit is our function, and it takes one argument, unit. It returns a string that is our ‘pipe’ Logstash input.
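The function syntax in isolation: an argument name, a colon, then the body, with ${...} interpolating values into strings. A toy sketch (not from the post) that you could evaluate with nix-instantiate --eval:

```nix
# example.nix: a one-argument function, applied to a string.
let
  greet = name: "Hello, ${name}!";
in
  greet "world"   # evaluates to the string "Hello, world!"
```

fromUnit works the same way, just returning a multi-line '' ... '' string.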

Then we can define our input config as a concatenation of this function called on the units we need.

{
  services.logstash = {
    inputConfig = (fromUnit "kibana") + (fromUnit "unit2");
  };
}

Finally we can simplify the whole thing via lib.concatMapStrings.

{ config, pkgs, lib, ... }:
{
  services.logstash = {
    inputConfig = lib.concatMapStrings fromUnit [ "kibana" "unit2" ];
  };
}

Notice how we now have to take lib as an argument so we can use the concatMapStrings function.
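concatMapStrings maps a function over a list and concatenates the resulting strings. A small sketch of the behavior, evaluable with nix-instantiate --eval if <nixpkgs> is available:

```nix
# concatMapStrings f list == f elem1 + f elem2 + ...
with (import <nixpkgs> {}).lib;
concatMapStrings (unit: "input for ${unit}\n") [ "kibana" "unit2" ]
# evaluates to "input for kibana\ninput for unit2\n"
```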

We can even append additional configuration easily:

{
  services.logstash = {
    inputConfig = lib.concatMapStrings fromUnit [ "kibana" "unit2" ] + ''
      udp { ... }
    '';
  };
}

Turning our elk.nix into a Nix service!

Above, we used the option services.elasticsearch.enable = true to enable Elasticsearch with very little effort. This is because Elasticsearch is defined as a service within the NixOS source tree.

We want this same approach for our elk service.

configuration.nix:

{
  services.elk = {
    enable = true;
    systemdUnits = [ "kibana" "unit2" ];
  };
}

Our ‘elk’ service can be enabled with a simple ‘enable’ flag and we can provide our list of units, as we did above.

A service is made up of an interface and an implementation. The interface consists of the configuration options used to configure the service. The implementation consists of the details: the required packages, systemd units, and users.
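This interface/implementation split can be sketched as a minimal NixOS module skeleton (the service name myservice is hypothetical, just to show the shape):

```nix
{ config, lib, pkgs, ... }:

with lib;
{
  ###### interface: the options users can set
  options.services.myservice = {
    enable = mkOption {
      description = "Whether to enable myservice.";
      default = false;
      type = types.bool;
    };
  };

  ###### implementation: what those options turn into,
  ###### only applied when the service is enabled
  config = mkIf config.services.myservice.enable {
    # packages, users, systemd units, other services...
  };
}
```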

The elasticsearch service, for example, requires the elasticsearch package, creates an elasticsearch user, and registers the elasticsearch.service systemd unit.

elk.nix:

{ config, lib, pkgs, ... }:

with lib; # commands such as mkOption and types are within lib.

let
  # An alias to config.services.elk, used in our implementation below.
  cfg = config.services.elk;

  fromUnit = unit: ''
    pipe {
      command => "${pkgs.systemd}/bin/journalctl -fu ${unit} -o json"
      tags => "${unit}"
      type => "syslog"
      codec => json {}
    }
  '';
in
{
  ###### interface

  options.services.elk = {
    enable = mkOption {
      description = "Whether to enable the ELK stack.";
      default = false;
      type = types.bool;
    };
    systemdUnits = mkOption {
      description = "The systemd units to send to our ELK stack.";
      default = [];
      type = types.listOf types.str;
    };
    listenAddress = mkOption {
      description = "The IP address or host Kibana listens on.";
      default = "127.0.0.1";
      type = types.str;
    };
    additionalInputConfig = mkOption {
      description = "Additional logstash input configurations.";
      default = "";
      type = types.str;
    };
  };

  ###### implementation

  config = mkIf cfg.enable {
    services.logstash = {
      enable = true;
      plugins = [ pkgs.logstash-contrib ];
      inputConfig =
        (concatMapStrings fromUnit cfg.systemdUnits) + cfg.additionalInputConfig;
      filterConfig = ''
        if [type] == "syslog" {
          # Keep only relevant systemd fields
          # http://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html
          prune {
            whitelist_names => [
              "type", "@timestamp", "@version",
              "MESSAGE", "PRIORITY", "SYSLOG_FACILITY", "_SYSTEMD_UNIT"
            ]
          }
          mutate {
            rename => { "_SYSTEMD_UNIT" => "unit" }
          }
        }
      '';
      outputConfig = ''
        elasticsearch {
          protocol => "http"
          host => "127.0.0.1:9200"
        }
      '';
    };

    services.elasticsearch = {
      enable = true;
    };

    services.kibana = {
      enable = true;
      listenAddress = cfg.listenAddress;
    };
  };
}

NOTE: The implementation section is essentially the elk.nix file we had been building before. The interface section provides our configuration options, and the let section defines the custom code we had written. This makes the service powerful: we can define a service as a composition of packages, services, and configurations.

The final piece is enabling this service within our configuration.nix: