MySQL has a nice feature: the MySQL slow query log, which allows you to log all queries that exceed a predefined amount of time to execute. Peter Zaitsev first wrote about this back in 2006 – there have been a few other posts here on the MySQL Performance Blog since then (check this and this, too) but I wanted to revisit his original subject in today’s post.

Query optimization is essential for good database server performance, and DBAs usually need to ensure the best possible performance for all queries. In MySQL, the preferred approach is to generate a query log for all running queries within a specific time period and then run a query analysis tool to identify the bad queries. Percona Toolkit’s pt-query-digest is one of the most powerful tools for SQL analysis, because it can generate a very comprehensive report that spots problematic queries very efficiently. It works equally well with Oracle MySQL Server. This post will focus mainly on pt-query-digest.

MySQL slow query log

The MySQL slow query log is great at spotting really slow queries that are good candidates for optimization. Beginning with MySQL 5.1.21, the minimum value for long_query_time is 0, and the value can be specified with microsecond resolution. In Percona Server, additional statistics may be written to the slow query log; you can find the full details here. For our clients, we often need to identify the queries that impact an application the most. These are not always the slowest queries: queries that run more frequently with a lower execution time per call can put more load on a server than queries that run less frequently. We of course want to get rid of really slow queries, but to really optimize application throughput we also need to investigate the queries that generate most of the load. Further, if you enable the log_queries_not_using_indexes option, MySQL will also log queries doing full table scans. This doesn’t always mean the query is slow, because in some situations the query optimizer chooses a full table scan over an available index, for example when reading all records from a small table.
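To illustrate, enabling the slow query log at runtime might look like the snippet below. The threshold value is purely illustrative; pick one that suits your workload:

```sql
-- Illustrative sketch: enable the slow query log at runtime.
-- The 100ms threshold is an example value; since MySQL 5.1.21 the
-- long_query_time variable accepts microsecond resolution.
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 0.1;
-- Optionally also log queries doing full table scans
SET GLOBAL log_queries_not_using_indexes = 1;
```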

Our usual recommendation is to generate the MySQL slow query log with long_query_time=0. This records all the traffic, but it is I/O intensive and can eat up disk space very quickly depending on your workload, so run with long_query_time=0 only for a specific period of time and then revert to logging only very slow queries. Percona Server has a nice option to handle this, log_slow_rate_limit, which lets you limit the rate of logging. Filtering the slow query log is also very helpful in some cases: if we know the main performance issue is table scans, we can log only queries doing full table scans, or if we see I/O is the bottleneck, we can collect queries doing full scans and queries creating on-disk temporary tables. Again, this is only possible in Percona Server, with the log_slow_filter option. Alternatively, you may want to collect everything in the slow query log and then filter with pt-query-digest. Depending on I/O capacity you might prefer one way or the other, as collecting everything in the slow query log lets us investigate other queries too if needed. Finally, use pt-query-digest to generate an aggregate report over the slow query log, which highlights the problematic parts very efficiently. Since pt-query-digest itself can drive server load up, our usual recommendation is to move the slow query log to a staging/dev server and run pt-query-digest there to generate the report.
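As a sketch of the Percona Server options mentioned above (the values are illustrative; check your server version’s documentation for the exact accepted values):

```sql
-- Illustrative: log roughly 1 of every 100 sessions to bound I/O overhead
SET GLOBAL log_slow_rate_limit = 100;
-- Illustrative: log only queries doing full scans or creating
-- on-disk temporary tables
SET GLOBAL log_slow_filter = 'full_scan,tmp_table_on_disk';
```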

Note: changing the long_query_time parameter only affects newly created connections, which will log queries that exceed the long_query_time threshold. Percona Server has a feature that changes the variable scope to global instead of session: with slow_query_log_use_global_control enabled, already-connected sessions also start logging queries after the long_query_time threshold is changed. You can read more about this patch here.

I am not going to show a detailed pt-query-digest report and explain each part of it here, because that is already well covered by my colleague Ovais Tariq in this post. However, I will show you some other aspects of the pt-query-digest tool.

Let me show you code snippets that enable the slow query log for a specific time period, with long_query_time=0 and log_slow_verbosity set to ‘full’. log_slow_verbosity is a Percona Server variable that logs extra statistics such as information on the query cache, filesort, temporary tables, InnoDB statistics, etc. Once you are done collecting logs, revert long_query_time to its previous value, and finally run pt-query-digest on the log to generate a report. Note: run the code below in the same MySQL session.

-- Save previous settings
mysql> SELECT @@global.log_slow_verbosity INTO @__log_slow_verbosity;
mysql> SELECT @@global.long_query_time INTO @__long_query_time;
mysql> SELECT @@global.slow_query_log INTO @__slow_query_log;
mysql> SELECT @@global.log_slow_slave_statements INTO @__log_slow_slave_statements;
-- Keep this in a safe place, we'll need it to run pt-query-digest
mysql> SELECT NOW() AS "Time Since";
-- Set values to enable query collection
mysql> SET GLOBAL slow_query_log_use_global_control='log_slow_verbosity,long_query_time';
mysql> SET GLOBAL log_slow_verbosity='full';
mysql> SET GLOBAL slow_query_log=1;
mysql> SET GLOBAL long_query_time=0;
mysql> SET GLOBAL log_slow_slave_statements=1;
-- Verify settings are OK
mysql> SELECT @@global.long_query_time, @@global.slow_query_log, @@global.log_slow_verbosity;
-- wait for 30 - 60 minutes
-- Keep this one too, also for pt-query-digest
mysql> SELECT NOW() AS "Time Until";
-- Revert to previous values
mysql> SET GLOBAL slow_query_log=@__slow_query_log;
mysql> SET GLOBAL long_query_time=@__long_query_time;
mysql> SET GLOBAL log_slow_verbosity=@__log_slow_verbosity;
-- if Percona Server
mysql> SET GLOBAL log_slow_slave_statements=@__log_slow_slave_statements;
-- Verify settings are back to previous values
mysql> SELECT @@global.long_query_time, @@global.slow_query_log, @@global.log_slow_verbosity, @@global.slow_query_log_file;
-- Then run pt-query-digest like this (replace values for time-since, time-until and log name)
$ pt-query-digest --since='<time-since>' --until='<time-until>' --limit=100% /path/to/slow_query_log_file.log > /path/to/report.out
-- If you're not using Percona Server, remove all references to log_slow_verbosity,
-- slow_query_log_use_global_control and log_slow_slave_statements (prior to MySQL 5.6).

My colleague Bill Karwin wrote a bash script that does almost the same as the code above. You can find the script to collect slow logs here. This script doesn’t hold a database connection open while you wait for logs to accumulate, and it sets all the variables back to the state they were in before. For full documentation, view this.
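If the collected log is large, a rough pre-filter can also trim it by Query_time before you copy it to a staging/dev box. The awk sketch below is only an illustration: it assumes the standard slow-log header format (`# Time:` / `# Query_time:` lines) and uses an arbitrary 0.5s cutoff:

```shell
# Keep only slow-log entries whose Query_time is >= 0.5 seconds.
# Each entry begins with a "# Time:" header; we buffer an entry and
# print it only if its Query_time header passes the threshold.
awk -v min=0.5 '
  /^# Time:/       { if (keep) printf "%s", entry; entry = ""; keep = 0 }
  /^# Query_time:/ { if ($3 + 0 >= min) keep = 1 }
                   { entry = entry $0 ORS }
  END              { if (keep) printf "%s", entry }
' slow.log > slow-trimmed.log
```

The trimmed file can then be fed to pt-query-digest as usual; note that aggregate counts in the report will of course only reflect the entries that survived the cut.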

Further, you can also get EXPLAIN output into the report from the pt-query-digest tool. For that, use the --explain parameter as follows.

$ pt-query-digest --explain u=<user>,p=<password>,h=<hostname> /path/to/slow.log > /path/to/report.out

The EXPLAIN output in the query report gives you the full query execution plan and signals how that particular query is going to be executed. Note that if you run pt-query-digest over a slow query log on a server other than the one where the log originated (e.g. staging/dev, as mentioned above), you may see a different execution path for the query in the report, a lower number of rows to examine, etc., because staging/dev servers usually have a different data distribution, different MySQL versions, or different indexes. MySQL EXPLAIN adds overhead, as queries need to be prepared on the server to generate the intended query execution path. For this reason, you may want to run pt-query-digest with --explain on a production replica.

It’s worth mentioning that logging queries with log_slow_verbosity in Percona Server is really handy, as it shows lots of additional statistics, and it is especially helpful in situations when the EXPLAIN plan reports a different execution path than the one actually used when the query is executed. On that particular topic, you may want to check this nice post.

pt-query-digest also supports filters; you can read more about them here. Let me show you an example. The following command will discard everything apart from INSERT/UPDATE/DELETE queries in the pt-query-digest output report.

$ pt-query-digest --filter '$event->{arg} =~ m/^(insert|update|delete)/i' --since='<time-since>' --until='<time-until>' --limit=100% /path/to/slow_query_log_file.log > /path/to/report.out

If you’re looking for GUI tools for pt-query-digest, then I would recommend reading this nice blog post from my colleague Roman. Further, our CEO Peter Zaitsev also wrote a post recently showing a comparison between performance_schema and the slow query log. Check here for details.

In related news, Percona recently announced Percona Cloud Tools, the next generation of tools for MySQL. It runs a client-side agent (pt-agent) that runs pt-query-digest on the server at intervals and uploads the aggregated data to the Percona Cloud Tools API, which processes it further. Query Analytics is one of the Percona Cloud Tools that provides advanced query metrics; it is a nice visualization tool. You may be interested to learn more about it here, and it’s also worth viewing this related webinar about Percona Cloud Tools from our CTO Vadim Tkachenko.

Conclusion:

pt-query-digest from Percona Toolkit is a versatile (and free) tool for slow query log analysis. It provides good insight into every individual query, especially in Percona Server with log_slow_verbosity enabled, e.g. queries logged with microsecond precision and with extra information about each query’s execution. On top of that, Percona Cloud Tools includes Query Analytics, which gives you good visuals about query performance and also provides a view of historical data.

Hopefully, this blog gives insight into the MySQL slow query log.