In this blog post, we explore slow logs in Elasticsearch, which are immensely helpful both in production and debugging environments. We show how slow logs generated by Elasticsearch can be a critical information provider regarding numerous events and issues happening in your Elasticsearch cluster.

Slow Logs

Logging is an integral part of any application. For a distributed solution like Elasticsearch, which has to process huge numbers of requests, logging becomes indispensable and its significance paramount.

Slow logs, as the name suggests, are used to log slow requests, whether a search or an index request. We can set a "slowness" threshold so that only requests that take longer than the threshold are logged.

In case of Elasticsearch, slow logs are important mainly because:

- they help determine whether the applications which are supposed to communicate with Elasticsearch are actually doing so.
- we can examine the queries that are hitting Elasticsearch and validate them.
- logging can provide crucial information about index/cluster health, and thus help maintain the cluster.

Now that you understand the theory, let's cover the main types of slow logs available, as well as their use cases.

Types of Slow Logs

There are two main kinds of slow logs in Elasticsearch: search slow logs and index slow logs. Let’s discuss them.

Search Slow Logs

Search slow logs are used to log searches that are slow. What counts as slow depends on the application and the details of its Elasticsearch implementation, so each application can set its own thresholds.
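In practice, rather than logging every search, you would typically set tiered thresholds, one per log level. Here is a sketch against a hypothetical existing index called "my-index"; the values are illustrative, not recommendations, and should be tuned to your own latency targets:

```shell
# Illustrative tiered thresholds: the slower the request,
# the higher the severity at which it is logged.
curl -XPUT 'localhost:9200/my-index/_settings' -H 'Content-Type: application/json' -d '{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.query.info": "5s",
  "index.search.slowlog.threshold.query.debug": "2s",
  "index.search.slowlog.threshold.query.trace": "500ms",
  "index.search.slowlog.threshold.fetch.warn": "1s",
  "index.search.slowlog.threshold.fetch.info": "800ms",
  "index.search.slowlog.threshold.fetch.debug": "500ms",
  "index.search.slowlog.threshold.fetch.trace": "200ms"
}'
```

With a setup like this, a query taking 3 seconds would appear at debug level, while one taking 12 seconds would appear at warn.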

Search in Elasticsearch happens in two phases:

Query phase – In the query phase, Elasticsearch collects the document ids of the relevant results. Upon completion of this phase, only the ids of the matching documents are returned; no other information, such as the fields or their values, is included.

Fetch phase – In the fetch phase, the document ids from the query phase are used to fetch the actual documents, and with this the search request is complete.

Search slow logs show the split times for both the query and the fetch phases with the query. Thus, we are able to get a complete picture of the time taken to complete the query and the fetch phases, and are able to inspect the entire query itself.

Index Slow Logs

Index slow logs are used to log the indexing process. When documents are indexed in Elasticsearch, index slow logs keep a record of the requests that took a long time to complete. Here, too, the thresholds can be adjusted in the index's configuration settings.

By default, when enabled, Elasticsearch logs the first 1000 characters of the document's _source to the log file. This can be changed to skip the source entirely or to log the entire document, depending on how we configure the settings.
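The truncation is controlled by the index.indexing.slowlog.source setting. As a sketch, again assuming a hypothetical index "my-index":

```shell
# Log the entire _source of each slow indexing request.
# Other accepted values: false (or 0) to omit the source entirely,
# or a number N to log only the first N characters (default 1000).
curl -XPUT 'localhost:9200/my-index/_settings' -H 'Content-Type: application/json' -d '{
  "index.indexing.slowlog.source": true
}'
```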

In the next sections, let's see how to configure the logs and examine the two types of slow logs discussed above.

Index Slow Logging Setup

First, create a test index for which the index slow logs are to be configured.

curl -XPUT 'localhost:9200/testindex-slowlogs'

Now configure index slow logs for the index "testindex-slowlogs" as shown below:

curl -XPUT 'localhost:9200/testindex-slowlogs/_settings' -H 'Content-Type: application/json' -d '{
  "index.indexing.slowlog.threshold.index.warn": "0ms",
  "index.indexing.slowlog.threshold.index.info": "0ms",
  "index.indexing.slowlog.threshold.index.debug": "0ms",
  "index.indexing.slowlog.threshold.index.trace": "0ms",
  "index.indexing.slowlog.level": "trace",
  "index.indexing.slowlog.source": "1000"
}'

The response of the above request is shown below:

{ "acknowledged": true }

In the above settings, all threshold time values are set at 0ms to log all index requests.
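Logging everything at 0ms is useful for experimentation but far too noisy for production. When you are done, you can turn a threshold off again by setting it to -1, along these lines:

```shell
# Disable all index slow log thresholds on the test index.
# A threshold of -1 means "never log at this level".
curl -XPUT 'localhost:9200/testindex-slowlogs/_settings' -H 'Content-Type: application/json' -d '{
  "index.indexing.slowlog.threshold.index.warn": "-1",
  "index.indexing.slowlog.threshold.index.info": "-1",
  "index.indexing.slowlog.threshold.index.debug": "-1",
  "index.indexing.slowlog.threshold.index.trace": "-1"
}'
```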

Next, index a sample document into this index, as in the example below (the type "product" and id "105" match the log entry shown later):

curl -XPUT 'localhost:9200/testindex-slowlogs/product/105' -H 'Content-Type: application/json' -d '{ "price": 9925, "name": "Nariko" }'

Once this operation completes, open the file "elasticsearch_index_indexing_slowlog.log" under the logs folder of Elasticsearch. A sample entry from this file is shown below:

[2017-09-10T12:07:26,683][WARN ][index.indexing.slowlog.index] [GOgO9TD] [testindex-slowlogs/yNbyYk1ARSW_hd0YRh6J0A] took[142.3micros], took_millis[0], type[product], id[105], routing[] , source[{"price":9925,"name":"Nariko"}]

These logs provide the following information:

- time stamp
- log level
- type of log
- node name
- index name
- time taken (in micro/milliseconds)
- index type
- document id
- "_source" field

Using this information, we can understand how the indexing operation went, and can detect/debug any anomalies if present.

Search Slow Logging

Similar to index slow logs, search slow logs are applied to a specific index. An example configuration that logs all search requests is given below:

curl -XPUT 'localhost:9200/testindex-slowlogs/_settings' -H 'Content-Type: application/json' -d '{
  "index.search.slowlog.threshold.query.warn": "0ms",
  "index.search.slowlog.threshold.query.info": "0ms",
  "index.search.slowlog.threshold.query.debug": "0ms",
  "index.search.slowlog.threshold.query.trace": "0ms",
  "index.search.slowlog.threshold.fetch.warn": "0ms",
  "index.search.slowlog.threshold.fetch.info": "0ms",
  "index.search.slowlog.threshold.fetch.debug": "0ms",
  "index.search.slowlog.threshold.fetch.trace": "0ms"
}'

Since the above settings use 0ms for all thresholds, every search request will be logged. The log file is located in the logs directory under the name "elasticsearch_index_search_slowlog.log".
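Before running any searches, it can be worth reading the settings back to confirm they were applied as intended:

```shell
# Show all settings currently applied to the index,
# including the slow log thresholds set above.
curl -XGET 'localhost:9200/testindex-slowlogs/_settings?pretty'
```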

Run the following query against the index (note that search uses GET, not PUT):

curl -XGET 'localhost:9200/testindex-slowlogs/_search?pretty' -H 'Content-Type: application/json' -d '{
  "sort": [ { "price": { "order": "desc" } } ],
  "query": { "match": { "name": "Nariko" } }
}'

Upon the successful execution of this query, open your search slow log file mentioned above. You will find logs which look something like this:

[2017-09-10T12:35:53,352][WARN ][index.search.slowlog.query] [GOgO9TD] [testindex-slowlogs][1] took[197.6micros], took_millis[0], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{"query":{"match":{"name":{"query":"Nariko","operator":"OR","prefix_length":0,"max_expansions":50,"fuzzy_transpositions":true,"lenient":false,"zero_terms_query":"NONE","boost":1.0}}},"sort":[{"price":{"order":"desc"}}]}]

[2017-09-10T12:35:53,352][WARN ][index.search.slowlog.query] [GOgO9TD] [testindex-slowlogs][3] took[279.3micros], took_millis[0], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{"query":{"match":{"name":{"query":"Nariko","operator":"OR","prefix_length":0,"max_expansions":50,"fuzzy_transpositions":true,"lenient":false,"zero_terms_query":"NONE","boost":1.0}}},"sort":[{"price":{"order":"desc"}}]}]

[2017-09-10T12:35:53,352][WARN ][index.search.slowlog.query] [GOgO9TD] [testindex-slowlogs][2] took[1ms], took_millis[1], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{"query":{"match":{"name":{"query":"Nariko","operator":"OR","prefix_length":0,"max_expansions":50,"fuzzy_transpositions":true,"lenient":false,"zero_terms_query":"NONE","boost":1.0}}},"sort":[{"price":{"order":"desc"}}]}]

[2017-09-10T12:35:53,352][WARN ][index.search.slowlog.query] [GOgO9TD] [testindex-slowlogs][4] took[370.8micros], took_millis[0], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{"query":{"match":{"name":{"query":"Nariko","operator":"OR","prefix_length":0,"max_expansions":50,"fuzzy_transpositions":true,"lenient":false,"zero_terms_query":"NONE","boost":1.0}}},"sort":[{"price":{"order":"desc"}}]}]

[2017-09-10T12:35:53,355][WARN ][index.search.slowlog.fetch] [GOgO9TD] [testindex-slowlogs][0] took[150.6micros], took_millis[0], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{"query":{"match":{"name":{"query":"Nariko","operator":"OR","prefix_length":0,"max_expansions":50,"fuzzy_transpositions":true,"lenient":false,"zero_terms_query":"NONE","boost":1.0}}},"sort":[{"price":{"order":"desc"}}]}]

As you can see, five log entries were generated for a single search. This is because the search is executed per shard and, by default, the index has 5 shards. When we executed the search, the request was sent to each of the five shards, and each produced a log entry.
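You can confirm the shard count, and which node each shard lives on, with the _cat API:

```shell
# List the shards of the test index; the "v" flag adds column headers.
curl -XGET 'localhost:9200/_cat/shards/testindex-slowlogs?v'
```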

In these logs, we can see details like the search type, the node name, and the shard number, along with the full query.

Conclusion

In this tutorial, we explored the importance of Elasticsearch slow logs. We covered both the index and search slow logs, along with configuration examples and log samples.


Give It a Whirl!

It’s easy to spin up a standard hosted Elasticsearch cluster on any of our 47 Rackspace, Softlayer, or Amazon data centers. And you can now provision your own AWS Credits on Qbox Private Hosted Elasticsearch.

Questions? Drop us a note, and we’ll get you a prompt response.

Not yet enjoying the benefits of a hosted ELK-stack enterprise search on Qbox? We invite you to create an account today and discover how easy it is to manage and scale your Elasticsearch environment in our cloud hosting service.