Your question is quite general, so I will answer your concrete questions and then point you to where you can find more information.

The EXPLAIN output in the logs is OK, but you are not alone in finding it hard to read. Use the logs to identify your slow queries, and use EXPLAIN later (together with other tools) to debug what is going on. It is nice to have it in the log, but please also run it live on your database, formatted like this for better readability:
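
(A sketch with a hypothetical query; the \G terminator prints each column of the plan on its own line, and on recent versions FORMAT=JSON gives even more detail.)

EXPLAIN SELECT * FROM orders WHERE customer_id = 42\G
-- or, for additional detail on recent versions:
EXPLAIN FORMAT=JSON SELECT * FROM orders WHERE customer_id = 42;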

Answering your questions:

How do I know if an index is not being used?

The type (and key) columns will tell you. type: ALL means a full table scan is being used, and possible_keys/key will be NULL. That covers scanning; type: const, ref or range is normally preferred (there are more strategies). For sorting (and other issues), you will find the string Using filesort in the Extra column. That means a second pass to sort the results is needed; in some cases an index will help by returning the results already in order.
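
For example, a hypothetical query filtering and sorting on unindexed columns could produce a plan like this (the table and numbers are made up for illustration):

EXPLAIN SELECT * FROM orders WHERE status = 'pending' ORDER BY created_at\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: orders
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 998670
     filtered: 10.00
        Extra: Using where; Using filesort

Both warning signs appear here: type: ALL with key: NULL (full table scan) and Using filesort (an extra sorting pass).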

Here is an example of another query, this time using an index:
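
(Again a made-up example, assuming an index idx_customer_id exists on orders.customer_id.)

EXPLAIN SELECT * FROM orders WHERE customer_id = 42\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: orders
   partitions: NULL
         type: ref
possible_keys: idx_customer_id
          key: idx_customer_id
      key_len: 4
          ref: const
         rows: 12
     filtered: 100.00
        Extra: NULL

Here type: ref and key: idx_customer_id show that the index is being used, and the estimated rows drops from almost a million to a handful.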

This is a simplification, because there are many ways in which an index can be used to speed up results (ICP, covering index, max(), ...).
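
As one illustration, a covering index can answer a query from the index alone, without reading the table rows; you would see Using index in the Extra column. A sketch, with a hypothetical composite index:

ALTER TABLE orders ADD INDEX idx_customer_created (customer_id, created_at);
EXPLAIN SELECT customer_id, created_at FROM orders WHERE customer_id = 42\G
-- key: idx_customer_created, Extra: Using index  (no table rows are touched)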

There is not enough space here to also discuss JOINs and subqueries, where reordering and rewriting make better strategies possible.

How do I identify the slow parts of the query?

There are 2 options:

Profiling the query, which will give you the time spent on each stage of query execution. This can be done with SHOW PROFILE or by enabling the equivalent instrumentation in performance_schema for certain queries. Typical output:

SHOW PROFILE CPU FOR QUERY 5;
+----------------------+----------+----------+------------+
| Status               | Duration | CPU_user | CPU_system |
+----------------------+----------+----------+------------+
| starting             | 0.000042 | 0.000000 | 0.000000   |
| checking permissions | 0.000044 | 0.000000 | 0.000000   |
| creating table       | 0.244645 | 0.000000 | 0.000000   |
| After create         | 0.000013 | 0.000000 | 0.000000   |
| query end            | 0.000003 | 0.000000 | 0.000000   |
| freeing items        | 0.000016 | 0.000000 | 0.000000   |
| logging slow query   | 0.000003 | 0.000000 | 0.000000   |
| cleaning up          | 0.000003 | 0.000000 | 0.000000   |
+----------------------+----------+----------+------------+
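
In case the workflow is not obvious, a minimal (assumed) sequence with the classic SHOW PROFILE interface, which is deprecated in favor of performance_schema but still handy, looks like this:

SET profiling = 1;                            -- enable profiling for this session
SELECT * FROM orders WHERE customer_id = 42;  -- run the query you want to analyze
SHOW PROFILES;                                -- list recent queries with their ids
SHOW PROFILE CPU FOR QUERY 5;                 -- per-stage breakdown for query id 5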

Handler statistics, which will give you a time-independent metric of the scan strategy and the number of rows scanned by each kind of internal call:
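
A sketch of how to gather them for a single query (the query itself is just a placeholder); the Handler_% counters are cumulative for the session, so reset them first:

FLUSH STATUS;                          -- reset the session status counters
SELECT * FROM orders WHERE status = 'pending';
SHOW SESSION STATUS LIKE 'Handler%';   -- how many rows were read, and how

A large Handler_read_rnd_next value means rows were read sequentially (full scan), while Handler_read_key and Handler_read_next indicate index lookups and index-ordered reads.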

This last one may seem a bit unfriendly, but once you know what the internal engine calls are, you can spot index usage and full scans much more easily.

Is there a shortcut to quickly identify missing indexes?

Yes, if you have performance_schema enabled and have access to the sys database, SELECT * FROM sys.statement_analysis; will give you a column called full_scan that flags queries performing full scans (scans not using indexes). You can then order by rows_examined, rows_examined_avg, avg_latency, etc. to work through them in order of importance.
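
For example (the column names are taken from current sys versions; adjust the ordering to taste):

SELECT query, db, exec_count, rows_examined_avg, avg_latency
  FROM sys.statement_analysis
 WHERE full_scan = '*'
 ORDER BY rows_examined DESC
 LIMIT 10;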

If you don't want to or cannot use performance_schema, use the logs to get those numbers aggregated with pt-query-digest from percona-toolkit:
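
For example, assuming the slow query log lives at /var/log/mysql/mysql-slow.log (adjust the path to your configuration):

pt-query-digest /var/log/mysql/mysql-slow.log > slow_report.txt

The report ranks queries by total time and shows, among other statistics, the rows sent and rows examined for each query.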

If the number of rows examined is very large compared to the rows sent, a missing index is probably the cause.

In summary, the logs are OK for identifying slow queries: aggregate them with pt-query-digest, or use performance_schema instead. But once you have identified the worst offending queries, use the other tools above for debugging.

I talk at greater length about how to identify slow queries and the details of query optimization in my slides "Query Optimization with MySQL 8.0 and MariaDB 10.3". I do this for a living and query optimization is my passion, so I suggest you have a look at them (I am not selling you a book; the slides are free and under a Creative Commons license).