In part one of the OpenTracing blog series, we provided a good OpenTracing overview, explaining what OpenTracing is and does, how it works and what it aims to achieve.

One of the key aspects of OpenTracing is that it is vendor-neutral: OpenTracing is just a specification. To instrument an application via the OpenTracing API, you need an OpenTracing-compatible tracer correctly deployed and listening for incoming span requests. The job of the OpenTracing API is to hide the differences between distributed tracer implementations, so you can easily swap them out at any time without needing to change your instrumentation.

In this blog series, we’ll cover two popular open-source OpenTracing-compatible distributed tracers, Zipkin and Jaeger, starting with the older of the two – Zipkin.

If you prefer to read the whole blog series as a PDF, you can also download it as a free OpenTracing eBook. Alternatively, follow @sematext if you are into observability in general.

Zipkin as a distributed tracing system

Zipkin is a distributed tracing system implemented in Java, with an OpenTracing-compatible API. It is responsible for span ingestion and storage, providing a number of collectors (HTTP, Kafka, Scribe) as well as several storage engines (in-memory, MySQL, Cassandra, Elasticsearch).

The UI is also a self-contained web application (although it can be served separately) and is used to explore traces and their associated spans. Spans may be sent to collectors out-of-band, i.e., the data is reported asynchronously to Zipkin once the span is completed and trace/span identifiers don’t have to propagate downstream, or in-band, where context propagation is required and headers are used to transport the identifiers.
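For in-band reporting, Zipkin propagates the trace context downstream via the B3 HTTP headers. Here is a minimal sketch of what that looks like on the wire, assuming a hypothetical downstream service at http://downstream:8080:

```shell
# Generate random 64-bit trace and span IDs (16 lowercase hex chars each)
TRACE_ID=$(od -An -tx8 -N8 /dev/urandom | tr -d ' \n')
SPAN_ID=$(od -An -tx8 -N8 /dev/urandom | tr -d ' \n')

# The downstream call carries the B3 headers so the next service can
# join the same trace; http://downstream:8080 is a placeholder URL,
# so we let the call fail gracefully when nothing is listening
curl -s --max-time 2 http://downstream:8080/api/work \
  -H "X-B3-TraceId: ${TRACE_ID}" \
  -H "X-B3-SpanId: ${SPAN_ID}" \
  -H "X-B3-Sampled: 1" || true
```

In practice, your tracer client injects and extracts these headers for you; the snippet only illustrates what travels between services.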

The component responsible for transporting spans is called a reporter. Every instrumented application contains a reporter: it records timing data, associates metadata, and routes the span to the collector.

Zipkin architecture

Looking for an APM solution that supports fully distributed applications and distributed transaction tracing? Check out Sematext Tracing! Get end-to-end visibility into your distributed applications so you can find bottlenecks quickly and resolve production issues faster and with less effort. Spot performance bottlenecks, identify hotspots, and find the root cause of latency problems. Plus, Sematext Tracing is integrated with Log Management, Infrastructure and Real User Monitoring. Learn more

Zipkin Tutorial: Getting Started

To get started with Zipkin, download and run zipkin-server as a standalone jar (note that JRE 8 is required to bootstrap the Zipkin server):

$ export ZIPKIN_VERSION=2.7.3
$ curl -SL https://jcenter.bintray.com/io/zipkin/java/zipkin-server/$ZIPKIN_VERSION/zipkin-server-$ZIPKIN_VERSION-exec.jar > zipkin-server.jar
$ java -jar zipkin-server.jar

After server startup completes, you should see output like the one in the image below. Zipkin Server is a Spring Boot application that bootstraps the Zipkin collector as the default ingestion point for spans reported by tracer clients. It also stands up several endpoints, such as a health check, collector metrics, and the API spec accessible through Swagger UI. By default, Zipkin Server initializes an in-memory span store for trace storage.
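These endpoints are easy to poke from the command line once the server is up; the calls below assume the default port 9411 and fail gracefully when no server is running:

```shell
# Health check endpoint of a locally running Zipkin Server
curl -s --max-time 5 http://localhost:9411/health || true

# Collector metrics, e.g. counters for accepted and dropped spans
curl -s --max-time 5 http://localhost:9411/metrics || true
```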

Zipkin server started successfully

Running Zipkin in Docker

Alternatively, you can run the containerized Zipkin server from the Docker image:

$ docker run -d --name zipkin -p 9411:9411 openzipkin/zipkin

The command above will:

fetch the latest Zipkin image from the remote Docker repository

expose the port 9411 on the host machine so you can browse the UI on http://localhost:9411

run the container in detached mode

Run docker ps to make sure the container is running.

Docker command output

Main Zipkin UI

Span ingestion with Zipkin

Zipkin Collectors

The collectors are responsible for forwarding span requests to the storage layer. The HTTP collector is the default ingress point for the span stream. It accepts spans issued via POST requests to the api/v1/spans endpoint, or api/v2/spans for the new API specification.
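As a quick sketch, you can report a span by hand against a locally running server; the service name and identifiers below are made up for illustration:

```shell
# A minimal span in the v2 JSON format; timestamp is the span start
# in epoch microseconds and duration is in microseconds
SPAN='[{
  "traceId": "86154a4ba6e91385",
  "id": "86154a4ba6e91385",
  "name": "get-users",
  "timestamp": 1502364354905221,
  "duration": 1988,
  "localEndpoint": { "serviceName": "demo-service" }
}]'

# POST the span to the HTTP collector; the call fails gracefully
# when no Zipkin server is listening on localhost:9411
echo "$SPAN" | curl -s --max-time 5 \
  -X POST http://localhost:9411/api/v2/spans \
  -H 'Content-Type: application/json' \
  -d @- || true
```

After posting, the span shows up in the UI under the service name you reported.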

Besides the HTTP collector, Zipkin also offers Kafka, Scribe and RabbitMQ transports for span ingestion. When the Kafka transport is enabled, the collector acts as a consumer for spans routed to a Kafka topic. To enable the Kafka transport, provide the address of the Zookeeper ensemble via the KAFKA_ZOOKEEPER environment variable when starting Zipkin Server.

Similarly, the RabbitMQ transport is activated by pointing the RABBIT_ADDRESSES environment variable at the RabbitMQ broker(s). You can also override other settings, such as the parallelism level for RabbitMQ consumers or the queue name from which spans are pulled.
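Putting the two together, enabling either transport is just a matter of exporting the right environment variable before launching the server; the broker addresses below are placeholders for your own infrastructure:

```shell
# Kafka transport: point the collector at the Zookeeper ensemble
# (zookeeper1:2181 is a placeholder address)
$ KAFKA_ZOOKEEPER=zookeeper1:2181 java -jar zipkin-server.jar

# RabbitMQ transport: point the collector at the broker(s)
# (rabbitmq1:5672 is a placeholder address)
$ RABBIT_ADDRESSES=rabbitmq1:5672 java -jar zipkin-server.jar
```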

Storage with Zipkin

As mentioned above, Zipkin supports in-memory, MySQL, Cassandra and Elasticsearch storage engines. The in-memory store comes in handy for dev environments and POC scenarios where persistence is not required. The MySQL storage type is discouraged for production environments due to known performance issues.

For production workloads, Cassandra or Elasticsearch are more suitable options.

Using Zipkin with Elasticsearch

To enable Elasticsearch storage, export the STORAGE_TYPE and ES_HOSTS environment variables.

NOTE: if X-Pack is enabled (the default in the official Elastic Docker image), you’ll need to provide credentials for the REST API endpoint via the ES_USERNAME and ES_PASSWORD environment variables.
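A sketch of the full launch command, assuming an Elasticsearch node reachable at localhost:9200:

```shell
# Start Zipkin Server backed by Elasticsearch instead of the default
# in-memory store (localhost:9200 is an assumed local node)
$ STORAGE_TYPE=elasticsearch ES_HOSTS=http://localhost:9200 java -jar zipkin-server.jar

# With X-Pack security enabled, pass the credentials as well
$ STORAGE_TYPE=elasticsearch ES_HOSTS=http://localhost:9200 \
    ES_USERNAME=elastic ES_PASSWORD=changeme java -jar zipkin-server.jar
```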

The following mapping describes the structure of the spans:

"mappings" : {
  "servicespan" : {
    "_all" : { "enabled" : false },
    "properties" : {
      "serviceName" : { "type" : "keyword", "ignore_above" : 256 },
      "spanName" : { "type" : "keyword", "ignore_above" : 256 }
    }
  },
  "_default_" : {
    "_all" : { "enabled" : false }
  },
  "span" : {
    "_all" : { "enabled" : false },
    "properties" : {
      "annotations" : {
        "type" : "nested",
        "dynamic" : "false",
        "properties" : {
          "endpoint" : {
            "dynamic" : "false",
            "properties" : {
              "serviceName" : { "type" : "keyword", "ignore_above" : 256 }
            }
          },
          "value" : { "type" : "keyword", "ignore_above" : 256 }
        }
      },
      "binaryAnnotations" : {
        "type" : "nested",
        "dynamic" : "false",
        "properties" : {
          "endpoint" : {
            "dynamic" : "false",
            "properties" : {
              "serviceName" : { "type" : "keyword", "ignore_above" : 256 }
            }
          },
          "key" : { "type" : "keyword", "ignore_above" : 256 },
          "value" : { "type" : "keyword", "ignore_above" : 256 }
        }
      },
      "duration" : { "type" : "long" },
      "id" : {
        "type" : "text",
        "fields" : {
          "keyword" : { "type" : "keyword", "ignore_above" : 256 }
        }
      },
      "name" : { "type" : "keyword", "ignore_above" : 256 },
      "timestamp" : { "type" : "long" },
      "timestamp_millis" : { "type" : "date", "format" : "epoch_millis" },
      "traceId" : { "type" : "keyword", "ignore_above" : 256 }
    }
  },
  "dependencylink" : {
    "enabled" : false,
    "_all" : { "enabled" : false }
  }
}

These are some of the most relevant document fields:

traceId – a unique identifier for the trace

id – span identifier

name – name of the operation associated with the span

duration – duration of the span in microseconds

timestamp – span start time expressed in epoch microseconds

binaryAnnotations – an array of tags associated with the span

Here is an example of an indexed document produced by instrumentation of SQL statements:

{
  "timestamp_millis" : 1502364354905,
  "traceId" : "176dde0179621d08",
  "id" : "176dde0179621d08",
  "name" : "create-app",
  "timestamp" : 1502364354905221,
  "duration" : 1988,
  "binaryAnnotations" : [
    {
      "key" : "db.instance",
      "value" : "apps",
      "endpoint" : { "serviceName" : "opentracing-jdbc", "ipv4" : "192.168.1.23" }
    },
    {
      "key" : "db.statement",
      "value" : "INSERT INTO apps (name) VALUES (slack)",
      "endpoint" : { "serviceName" : "opentracing-jdbc", "ipv4" : "192.168.1.23" }
    },
    {
      "key" : "db.type",
      "value" : "sql",
      "endpoint" : { "serviceName" : "opentracing-jdbc", "ipv4" : "192.168.1.23" }
    }
  ]
}
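Rather than querying Elasticsearch directly, you can pull indexed spans back through the Zipkin API; for example, fetching the trace above by its ID, assuming a server on the default port:

```shell
# The trace ID from the example document above
TRACE_ID=176dde0179621d08

# Fetch the whole trace; fails gracefully when no Zipkin server
# is listening on localhost:9411
curl -s --max-time 5 "http://localhost:9411/api/v2/trace/${TRACE_ID}" || true

# List recent traces reported by a given service
curl -s --max-time 5 'http://localhost:9411/api/v2/traces?serviceName=opentracing-jdbc' || true
```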

Free OpenTracing eBook. Want useful how-to instructions and copy-paste code for tracer registration? We’ve prepared an OpenTracing eBook which puts all the key OpenTracing information at your fingertips: from introducing OpenTracing, explaining what it is and does and how it works, to covering Zipkin followed by Jaeger, both popular distributed tracers, and finally comparing Jaeger vs. Zipkin. Download yours.

Conclusion

Zipkin is a mature, tried and tested open-source distributed tracing solution. It pre-dates OpenTracing by several years but is keeping up with the times through OpenTracing-compatible tracers. It is fully open-source, comes with a built-in UI, and has pluggable backends.

In the next post, we’ll discuss Jaeger, another popular open-source distributed tracing system. Don’t miss the last post in our OpenTracing series, a head-to-head comparison of Zipkin vs. Jaeger.

If you are into tracing and observability in general, check out @sematext.


