In this tutorial we will ship logs from our containers running on Docker Swarm to Elasticsearch, using Fluentd with the Elasticsearch plugin.

We will also make use of tags to apply extra metadata to our logs, making it easier to search for logs based on stack name, service name, etc.

Building our Image

Our Dockerfile, which we have at fluentd/Dockerfile, installs the Fluentd Elasticsearch plugin:

FROM fluent/fluentd
USER root
# https://docs.fluentd.org/output/elasticsearch
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-rdoc", "--no-ri"]
USER fluent
ENTRYPOINT ["fluentd", "-c", "/fluentd/etc/fluent.conf"]

The Fluentd configuration, which we have available at fluentd/fluentd.conf:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<filter docker.*.*>
  @type record_transformer
  <record>
    hostname "#{Socket.gethostname}"
    tag ${tag}
    stack_name ${tag_parts[1]}
    service_name ${tag_parts[2]}
    fluentd_hostname "#{ENV['FLUENTD_HOSTNAME']}"
  </record>
</filter>

<match docker.*.*>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y.%m.%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    <buffer>
      flush_interval 1s
      flush_thread_count 2
    </buffer>
  </store>
  <store>
    @type stdout
  </store>
</match>
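Because logstash_format is enabled, the plugin writes to a daily index named after logstash_prefix and logstash_dateformat. A minimal Python sketch of how that index name is derived (the prefix and date format come from the config above; the date is just an example):

```python
from datetime import date

def index_name(prefix: str, dateformat: str, day: date) -> str:
    """Mimic how the elasticsearch output plugin names daily indices:
    <logstash_prefix>-<logstash_dateformat>."""
    return f"{prefix}-{day.strftime(dateformat)}"

# With logstash_prefix "fluentd" and logstash_dateformat "%Y.%m.%d":
print(index_name("fluentd", "%Y.%m.%d", date(2019, 5, 1)))  # fluentd-2019.05.01
```

This daily-index layout is why we will later create a fluentd-* index pattern in Kibana.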

Let's build our fluentd image:

$ cd fluentd
$ docker build -t ruanbekker/fluentd-elasticsearch .

Create Swarm Networks

I am referencing a private and a public overlay network in my compose files. If you don't have them already, you can create them as follows:

$ docker network create --driver overlay private
$ docker network create --driver overlay public

Deploy Fluentd

And then finally, our docker-compose.yml to deploy the logging stack (fluentd, elasticsearch and kibana):

version: "3.7"

services:
  fluentd-elasticsearch:
    image: ruanbekker/fluentd-elasticsearch
    environment:
      FLUENTD_CONF: 'fluent.conf'
      FLUENTD_HOSTNAME: '{{.Node.Hostname}}'
    ports:
      - 24224:24224
      - 24224:24224/udp
    user: root
    configs:
      - source: fluent-elasticsearch-conf.v1
        target: /fluentd/etc/fluent.conf
    networks:
      - private
    deploy:
      mode: global
      restart_policy:
        condition: on-failure

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.7.0
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "cluster.name=es-cluster"
      - "discovery.zen.minimum_master_nodes=1"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
      - "node.master=true"
      - "node.data=true"
      - "node.ingest=true"
      - "node.name=es-node.{{.Task.Slot}}.{{.Node.Hostname}}"
      - "LOGSPOUT=ignore"
    networks:
      - private
    ports:
      - target: 9200
        published: 9200
        protocol: tcp
        mode: host
    deploy:
      endpoint_mode: dnsrr
      mode: replicated
      replicas: 1
      restart_policy:
        condition: on-failure

  kibana:
    image: docker.elastic.co/kibana/kibana-oss:${ELASTIC_VERSION:-6.7.0}
    networks:
      - private
    ports:
      - target: 5601
        published: 5601
        protocol: tcp
        mode: host
    environment:
      - SERVER_NAME=kibana.${DOMAIN:-localhost}
      - ELASTICSEARCH_URL=${ELASTICSEARCH_HOST:-http://elasticsearch}:${ELASTICSEARCH_PORT:-9200}
      - ELASTICSEARCH_HOSTS=${ELASTICSEARCH_HOST:-http://elasticsearch}:${ELASTICSEARCH_PORT:-9200}
    deploy:
      mode: replicated
      replicas: 1

networks:
  private:
    external: true

configs:
  fluent-elasticsearch-conf.v1:
    file: ./fluentd/fluentd.conf

# source: https://github.com/bekkerstacks/elasticsearch-fluentd-kibana

To deploy our logging stack:

$ cd ..
$ docker stack deploy -c docker-compose.yml logging

Deploy an Application with Logging

Now that we have our fluentd service running, we can deploy a service and instruct it to use the fluentd log driver. The docker-compose.yml for our gitea service:

version: "3.7"

services:
  gitea:
    image: gitea/gitea:latest
    networks:
      - public
      - private
    deploy:
      placement:
        constraints:
          - node.role==manager
    logging:
      driver: fluentd
      options:
        tag: docker.ci.gitea
        fluentd-async-connect: "true"

networks:
  public:
    external: true
  private:
    external: true

Notice that we are using the tag, docker.ci.gitea, to enrich each log entry with the Docker stack name and service name. The fluentd configuration splits the tag on periods: the value after the first period is the stack name (ci) and the value after the second is the service name (gitea).
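The tag_parts lookup in the record_transformer filter can be mimicked in plain Python to make the mapping concrete (a rough sketch of what fluentd does internally, not the plugin's actual code):

```python
def enrich(tag: str) -> dict:
    """Split a log tag like 'docker.ci.gitea' the way the
    record_transformer filter references ${tag_parts[n]}:
    index 0 is the literal 'docker' prefix, index 1 the stack
    name and index 2 the service name."""
    parts = tag.split(".")
    return {"tag": tag, "stack_name": parts[1], "service_name": parts[2]}

print(enrich("docker.ci.gitea"))
# {'tag': 'docker.ci.gitea', 'stack_name': 'ci', 'service_name': 'gitea'}
```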

Now we deploy our gitea service, and its logs will be pushed to Elasticsearch via fluentd:

$ docker stack deploy -c docker-compose.yml gitea

Heading over to Kibana, we create the fluentd-* index pattern, after which we are able to view the logs for gitea:
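Since the enriched fields are indexed alongside each log line, you can also filter on them directly in Elasticsearch. A hedged sketch of a query body you could POST to fluentd-*/_search (the field names come from the fluentd filter above; the .keyword subfields assume Elasticsearch's default dynamic mapping for strings):

```python
import json

# Match log entries for the gitea service in the ci stack,
# using the fields added by the record_transformer filter.
query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"stack_name.keyword": "ci"}},
                {"term": {"service_name.keyword": "gitea"}},
            ]
        }
    },
    "size": 10,
}

print(json.dumps(query, indent=2))
```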

Viewing the log entry:

Resources

The source code can be found at https://github.com/bekkerstacks/elasticsearch-fluentd-kibana