Nowadays, the myriad of data processing possibilities, ranging from in-house solutions to open-source tools to enterprise-grade third-party services, can be overwhelming. The regulatory demands are no joke either. There are laws dictating what you must and mustn’t log, what has to be durably stored for years, and what must be discarded the next day or on a user’s demand. All of this makes up the daily work of operators, provided they find the logs in the first place.

If you find yourself in need of log management, a unified logging layer offers a better alternative to in-house solutions. In this post, we’ll give an overview of one tool that can help you achieve this: Fluentd. But first, a bit of background.

Good Old Logging

In the good old days of bare metal machines or pet virtual machines (VMs), logs of interest were accessed by the administrator via ssh and tail. This proved fairly sufficient, as the workloads were usually scheduled, run manually, and bound to a machine. Nowadays, this would be called a log pipeline made up of an app writing to a disk and sometimes passing through syslog or periodically being backed up – nothing fancy by modern standards. With time, business requirements grew. The logs were aggregated, crunched, and analyzed to provide valuable insights. Each step of the way was typically paved with a script, and those scripts were meticulously glued together into bigger systems.

Then came the age of containers, disposable VMs and PaaS environments with the promise of availability far exceeding that of a single machine. Currently, we may not know on which VM (let alone physical machine) a particular service is running. And that’s absolutely fine – we’ve got the software to take care of that. Yet, a problem has arisen – how do you access a log if you don’t know its location?

The complicated pipelines from the past were crying out for a more elegant solution. There is simply a limit to the pressure you can apply before the glue starts to wear out.

Enter Fluentd

Fluentd promises to help you “Build Your Unified Logging Layer” (as stated on its webpage), and it has good reason to do so. First of all, this is not some brand-new tool that has just entered beta. Fluentd has been around since 2011 and has been recommended by both Amazon Web Services and Google for use on their platforms. The latter even uses a modified version of Fluentd as a default logging agent!

But maturity is only one reason for choosing a particular solution over another. If the product is to be useful, it needs to integrate well with the rest of your system. And Fluentd’s integration capabilities are its strong suit. Thanks to its modular approach with the use of plugins, chances are you’re already covered by what the official distribution has to offer.

You can find plugins for data sources (such as Ruby applications, Docker containers, SNMP, or MQTT protocols), data outputs (like Elastic Stack, SQL database, Sentry, Datadog, or Slack), and several other kinds of filters and middleware. In case you’re still not satisfied because your custom-made network router is not supported, you can always write your own plugin in Ruby!
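To give a feel for what writing a plugin involves, here is a minimal sketch of a custom filter plugin. It assumes the fluentd gem is installed, and the plugin name add_hostname is made up for illustration; the class and registration pattern follow Fluentd’s v1 plugin API.

```ruby
require 'socket'
require 'fluent/plugin/filter'

module Fluent
  module Plugin
    # Hypothetical filter that stamps every event with the host it passed through.
    class AddHostnameFilter < Filter
      # Registers the plugin under the (made-up) name "add_hostname",
      # so it can be referenced as `@type add_hostname` in a <filter> block.
      Fluent::Plugin.register_filter('add_hostname', self)

      def filter(tag, time, record)
        record['hostname'] = Socket.gethostname
        record # returning the record keeps the event in the stream
      end
    end
  end
end
```

Dropped into a gem or Fluentd’s plugin directory, it would then be enabled with a `<filter>` block in the agent configuration.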

All the log parsing, filtering, and forwarding resides in an agent configuration file. The format looks similar to Apache or Nginx logs and should be thus familiar to the operators. Chances are, it looks much cleaner than most custom-made scripts glued together to form a pipeline.

Let’s Build a Pipeline

Whether you’re a fan of rsyslogd or you use application containers extensively, Fluentd has got you covered. Assuming you want to modernize your legacy solution and use Elasticsearch to store your rsyslog event logs, your example pipeline could look like the following:

<source>
  @type syslog
  port 32323
  tag rsyslog
</source>

<match rsyslog.**>
  @type copy
  <store>
    @type elasticsearch
    logstash_format true
    host elasticsearch.local
    port 9200
  </store>
</match>

Is that all? Almost! You still have to configure your rsyslogd to point to the Fluentd agent. And then you’re done.
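Pointing rsyslogd at the agent is a one-line change. Assuming Fluentd runs on the same host and listens on the UDP port 32323 configured above (Fluentd’s syslog input defaults to UDP), something like this in /etc/rsyslog.conf, or a file under /etc/rsyslog.d/, should do:

```
# Forward all logs over UDP to the local Fluentd agent
*.* @127.0.0.1:32323
```

A double `@@` would forward over TCP instead, if you configure the Fluentd side to match.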

The configuration file can have multiple sources as well as multiple outputs. If you’ve just introduced Docker, you can reuse the same Fluentd agent to process Docker logs as well. You only need to make two changes. The first is to run Docker with the Fluentd logging driver:

docker run --log-driver=fluentd --log-opt tag="docker.{{.ID}}" hello-world

And the second is to add the relevant sections to the Fluentd configuration:

<source>
  @type syslog
  port 32323
  tag rsyslog
</source>

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match **>
  @type copy
  <store>
    @type elasticsearch
    logstash_format true
    host elasticsearch.local
    port 9200
  </store>
</match>

And just as with multiple sources, it’s possible to configure multiple outputs. Each of them can be filtered by tags, of course! One thing to keep in mind is that Fluentd evaluates <match> blocks in order and an event is consumed by the first block it matches, so the more specific docker.** pattern has to come first. Since we want to forward both the rsyslogd and Docker logs to Elasticsearch while also printing the Docker logs to stdout for debugging purposes, we will go with the following configuration:

<source>
  @type syslog
  port 32323
  tag rsyslog
</source>

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match docker.**>
  @type copy
  <store>
    @type elasticsearch
    logstash_format true
    host elasticsearch.local
    port 9200
  </store>
  <store>
    @type stdout
  </store>
</match>

<match rsyslog.**>
  @type elasticsearch
  logstash_format true
  host elasticsearch.local
  port 9200
</match>
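Routing isn’t limited to tag patterns, either: a <filter> block can inspect record contents before any output sees them. As a hypothetical example, dropping noisy Docker health-check lines could look like this, assuming your records carry the message in a field named log:

```
<filter docker.**>
  @type grep
  <exclude>
    key log
    pattern /healthcheck/
  </exclude>
</filter>
```

The grep filter ships with Fluentd’s core plugins, so no extra installation is needed.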

Do you think it delivers on the promise of a cleaner configuration? We think it does!

Part of CNCF

Since this article is part of a series, we have to mention how Fluentd relates to the Cloud Native Computing Foundation (CNCF). Accepted by the CNCF in 2016, Fluentd went on to become, in 2019, the sixth project deemed mature enough to graduate. This means it has joined a league with Kubernetes, Prometheus, Envoy, CoreDNS, and containerd.

So how well does Fluentd play with its CNCF friends? We’ve already covered the integrations for data sources and outputs. Quite naturally, Fluentd also supports Prometheus monitoring. It’s the recommended method to monitor how Fluentd behaves. Other available methods are Datadog or REST API. For deployment to Kubernetes clusters, there’s an official stable Helm chart that you can use. And yes, the Helm chart also features Prometheus monitoring, so you can configure it all in a single step.
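As a sketch, exposing Fluentd’s internal metrics to Prometheus boils down to a couple of extra directives. This assumes the fluent-plugin-prometheus plugin is installed; port 24231 is that plugin’s conventional default:

```
# Serves a scrapeable endpoint at http://<host>:24231/metrics
<source>
  @type prometheus
  bind 0.0.0.0
  port 24231
</source>

# Exports Fluentd's internal metrics (buffer queue length, retry counts, ...)
<source>
  @type prometheus_monitor
</source>
```

Point a Prometheus scrape job at the endpoint, and the agent itself becomes just another monitored target.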

The Alternatives

Fluentd solves many of the problems related to logging in distributed systems. It can handle everything from the networking hardware to the operating system and orchestrator events, all the way through to application logic. It’s stable, mature, and recommended by the CNCF. It also integrates well both with various data sources and stores, as well as other CNCF products. Still, it’s not the only product in its niche.

You may have often heard Elastic Stack referred to as ELK Stack. The middle “L” stands for Logstash, which is similar to Fluentd in many regards. Like Fluentd, it supports many different sources, outputs, and filters. The configuration file looks a bit exotic, although that may simply be a matter of personal preference.
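For comparison, a Logstash pipeline roughly equivalent to the rsyslog example above might look like the following; the syslog port and Elasticsearch host are placeholders:

```
input {
  syslog { port => 5140 }
}
output {
  elasticsearch { hosts => ["elasticsearch.local:9200"] }
}
```

Functionally similar, but the curly-brace blocks are a different dialect from Fluentd’s XML-like directives.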

There’s also a new contender in the space: Vector, which promises great performance and memory efficiency. Unlike Logstash, which is written in JRuby, or Fluentd, written in Ruby, Vector is built in Rust, so it should incur less overhead and offer better stability. Vector also lets you write your filtering and transformation logic in Lua, which is handy if you don’t feel like writing an entire plugin just for that. The main downside? It’s still under heavy development, with no 1.0 release available at the time of writing.

Other alternatives worth considering are Filebeat, also part of the Elastic Stack, and SaaS solutions such as Epsagon. A hosted offering may also require less setup if you want to start right away.

Conclusion

If you’re looking for a solution that fits well with other CNCF projects that you use, Fluentd would appear to be the best way to go. For new projects and the ones that lack a logging layer, it is a sensible choice. If you’ve already invested in Logstash, the differences between the two are not that big, so it’s better to keep your current setup. If you are growing weary of your Logstash installation, it might be best to keep your eyes peeled until Vector becomes stable. In most cases, the unified solutions presented above are better than creating and maintaining custom pipelines.

Read more about log management:

Why You Can’t Ignore Changes to Monitoring and Logging for Serverless

Epsagon Launches Agentless Tracing and Why That’s Important

5 Ways to Understand Distributed System Logging and Monitoring

Instrumentation for Better Monitoring and Troubleshooting

AWS CloudWatch – Part 1/3: Logs and Insights