
Sometimes people want to access raw time series data for various purposes: machine learning, neural network training, anomaly detection, statistical evaluation or correlation analysis. These tasks cannot be performed inside Prometheus, so it would be useful to export the raw time series data from Prometheus. Unfortunately, Prometheus doesn't support raw data export at the moment. But there are workarounds that allow exporting the needed time series data. Let's look at them.

Exporting raw data via the /api/v1/query API

Prometheus provides the /api/v1/query API. The API returns instant values for the given PromQL query at the given evaluation time. PromQL queries usually return interpolated or calculated values for the given time instead of the raw values stored in the Prometheus TSDB. But there is a trick: just query a range vector. In this case Prometheus returns the raw time series data for the time range given in square brackets. For example, the following query should return all the raw data for all the metrics during the last hour before the given time: {__name__!=""}[1h].

Try this query in your Prometheus. There is a high chance it will slow down or even crash your Prometheus if it scrapes a decent number of time series. Why? Because the /api/v1/query API isn't optimized for raw time series data export. But this approach is still viable for exporting a few metrics at a time with a small number of data points.
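To make this concrete, here is a minimal Python sketch of the approach. The helper names, the localhost:9090 address and the trimmed sample response are illustrative assumptions, not part of Prometheus itself; a range-vector query returns a matrix result whose values are [timestamp, value] pairs.

```python
import json
import urllib.parse

def build_query_url(base_url, query, evaluation_time):
    """Build a /api/v1/query URL for a range-vector query.

    base_url and evaluation_time are illustrative; in practice the URL
    would be fetched with any HTTP client.
    """
    params = urllib.parse.urlencode({"query": query, "time": evaluation_time})
    return f"{base_url}/api/v1/query?{params}"

def parse_matrix_response(body):
    """Yield (labels, [(timestamp, value), ...]) pairs from a
    Prometheus matrix response returned for a range-vector query."""
    data = json.loads(body)
    for series in data["data"]["result"]:
        yield series["metric"], [(float(t), float(v)) for t, v in series["values"]]

url = build_query_url("http://localhost:9090", '{__name__!=""}[1h]', 1549891500)
print(url)

# A trimmed sample of the matrix response format (illustrative data):
sample = '''{"status":"success","data":{"resultType":"matrix","result":[
  {"metric":{"__name__":"up","job":"prometheus"},
   "values":[[1549891461.511,"1"],[1549891476.511,"1"]]}]}}'''
for labels, points in parse_matrix_response(sample):
    print(labels["__name__"], points)
```

Note that values in the JSON response are encoded as strings, so the sketch converts them to floats while parsing.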

Exporting raw data via tsdb command-line tool

There is a tsdb command-line tool. Recently it gained support for raw data export from Prometheus TSDB files. But this approach has a few drawbacks:

It exports each data point in the Prometheus text format. While this format is OK from an interoperability point of view, it is too verbose for exporting big amounts of time series data. For instance, a dump of a billion typical data points with a few tags would take 100 GB or more. The same amount of data usually occupies a few GBs in the Prometheus TSDB.
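A rough back-of-the-envelope check of that figure, assuming an average of about 100 bytes per line in the text format (the exact average depends on metric names and tags, so this is an assumption, not a measurement):

```python
# Assumed average line length in the text format; a typical line like
#   up{job="node_exporter",instance="localhost:9100"} 0 1549891472010
# is in that ballpark.
BYTES_PER_LINE = 100
DATA_POINTS = 1_000_000_000  # one billion data points

dump_size_gb = BYTES_PER_LINE * DATA_POINTS / 1e9
print(f"estimated dump size: {dump_size_gb:.0f} GB")  # ~100 GB
```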

It is unclear whether the tsdb tool can export data from a running Prometheus or whether Prometheus must be stopped before exporting the data.

Exporting raw data via remote storage

Prometheus may be configured to write data to remote storage in parallel to local storage. Then the raw data may be queried from the remote storage. For instance, Prometheus may write data to VictoriaMetrics, so later the raw data may be queried via the /export API provided by VictoriaMetrics. The /export API is optimized for exporting huge amounts of time series data. It exports data in JSON streaming format: each time series is exported as a single JSON line containing the metric name, tags and all the data points for the given time range. Timestamps are exported in milliseconds. Example output:

{"metric":{"__name__":"up","job":"node_exporter","instance":"localhost:9100"},"values":[0,0,0],"timestamps":[1549891472010,1549891487724,1549891503438]}

{"metric":{"__name__":"up","job":"prometheus","instance":"localhost:9090"},"values":[1,1,1],"timestamps":[1549891461511,1549891476511,1549891491511]}
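A short Python sketch of consuming such a dump line by line; the function name is illustrative, and the two example lines above are reused as input. Since each line is an independent JSON object, only one line needs to be held in memory at a time:

```python
import json

def iter_export_lines(lines):
    """Parse JSON streaming export output one line at a time,
    yielding (labels, [(timestamp_ms, value), ...]) per time series."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        series = json.loads(line)
        yield series["metric"], list(zip(series["timestamps"], series["values"]))

# The two example lines from above:
dump = '''{"metric":{"__name__":"up","job":"node_exporter","instance":"localhost:9100"},"values":[0,0,0],"timestamps":[1549891472010,1549891487724,1549891503438]}
{"metric":{"__name__":"up","job":"prometheus","instance":"localhost:9090"},"values":[1,1,1],"timestamps":[1549891461511,1549891476511,1549891491511]}'''

for labels, points in iter_export_lines(dump.splitlines()):
    print(labels["job"], points[0])
```

In practice `lines` would be a file object or a streaming HTTP response body instead of an in-memory string.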

JSON streaming format has the following features:

It can be easily inspected by humans.

It can be easily grepped.

It can be easily parsed with any suitable JSON parser.

It can be easily parsed in streaming mode. There is no need to load a multi-TB dump into RAM.

It is more compact compared to the Prometheus text format, since the metric name and tags are mentioned only once per time series for all its data points.

It can be compressed well, since all the values and timestamps for a single time series are sorted by time, so they usually have many common substrings.
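For reference, enabling remote write takes a small prometheus.yml fragment like the following; the hostname is an illustrative assumption (8428 is the default VictoriaMetrics port):

```yaml
# prometheus.yml fragment; the victoriametrics hostname is illustrative.
remote_write:
  - url: "http://victoriametrics:8428/api/v1/write"
```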

Conclusion

While Prometheus doesn't support native raw data export, multiple approaches exist for this task. The best approach at the moment is to write data to remote storage and then export raw time series from the remote storage. Prometheus continues writing data to its local storage after remote storage is enabled, so all the local data remains accessible. Remote storage has additional benefits besides raw data export:

Long-term storage. By default Prometheus stores data in local storage for 15 days (see the default value of the --storage.tsdb.retention.time flag). Remote storage solutions are usually optimized for much longer retention periods.

Global querying view. Remote storage systems may accept data from multiple Prometheus instances. Later the data from all these instances may be queried at once. This is useful for building global dashboards for multi-datacenter setups.

Simplified operations. All the maintenance burden related to data persistence may be moved from Prometheus' local storage to remote storage. This greatly simplifies Prometheus operations, effectively turning it into a stateless service.

Scalability. Remote storage systems usually support horizontal scalability, i.e. they may transparently scale to multiple nodes.

Single-node VictoriaMetrics supports all these benefits except the last one, which is supported in the cluster version for clients with huge amounts of time series data and in the upcoming SaaS version. So don't hesitate to try the free single-node VictoriaMetrics :)

Update: VictoriaMetrics is open source now!