Today I received a question which was a good example of a systematic problem-solving approach. It was about getting a long-running query to run faster. The query took a long time because the correlated subquery in it was not unnested and was re-visited many times, causing the whole subquery subtree in the plan to be executed again and again. The main part of the question was this:

Is there a way to avoid “NOT IN” conversion to “NOT EXISTS” by optimizer … My sub query, I would like it to be fully instantiated as view and then execute it as a Hash Anti join.

The first part of the above question sounds like the type of question I hear quite frequently (e.g. how to force Oracle to use an index or adjust some feature of the execution plan – without any extra context information). These kinds of questions are often the result of the developer not being familiar with the data and hoping to optimize the query by adding some “silver bullet” hint.

But the second part of the above question (about instantiating the subquery as a view and executing it as a hash anti-join) clearly shows that the asker had done his homework and knew exactly what he was trying to achieve. In other words, the asker knew the data model, the data (amount and distribution), understood what the query was trying to achieve and knew roughly how many rows would be returned from the different tables for the given query predicates. Based on that knowledge and an understanding of how Oracle SQL plan execution works, he knew that it’s not a good idea to use the nested FILTER operator for correlated subquery lookups: if the parent query returns millions of rows, you’d potentially have to revisit the subquery branch of the execution plan tree millions of times too, reading the same blocks again and again and driving up (logical) I/Os, CPU usage, etc.

The asker also knew that Oracle can use a special variation of the hash join for performing anti-joins (NOT IN and NOT EXISTS style subqueries get unnested and are executed using a join mechanism instead of correlated lookups in a FILTER loop). He also knew that it’s much more efficient to join / compare millions of rows with a single hash join pass (visiting the source tables only once) than with millions of correlated lookups revisiting “random” index and table blocks again and again.

As a result, the asker formulated what he wanted to change in the execution plan and then thought about how to achieve it. This is the systematic way to tune a SQL statement, and it requires you to understand the query, the database’s capabilities and definitely the data the query is using. The problem was that regardless of the hints he tried, he couldn’t achieve the desired plan. (Note the phrase “desired plan” – you know in advance what you want and use tools like hints to achieve it, not the other way around.)

I came up with a small test case (as I often don’t remember things off the top of my head) to illustrate the correct hints for changing the plan the way he wanted:

SQL> CREATE TABLE t1 AS SELECT * FROM all_users;

Table created.

SQL> CREATE TABLE t2 AS SELECT * FROM all_users;

Table created.

SQL> CREATE TABLE t3 AS SELECT * FROM all_users;

Table created.

Now the minimal test query (it’s quite simplistic – and note that it’s not even a correlated subquery, as the subquery predicates do not reference the parent query’s tables):

SELECT
    *
FROM
    t1
WHERE
    t1.user_id NOT IN (
        SELECT t2.user_id
        FROM   t2, t3
        WHERE  t2.username = t3.username
    );
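For reference, one way to get an optimizer (CBO) trace like the one discussed below is the 10053 event – a sketch, assuming a test session where you are allowed to set events (the tracefile identifier is just a made-up tag to make the file easy to find):

```sql
-- Tag the tracefile so it's easy to find, then enable the CBO (10053) trace
ALTER SESSION SET tracefile_identifier = 'not_in_demo';
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- Force a hard parse of the statement of interest
-- (a changed comment in the SQL text is enough to avoid cursor sharing)
SELECT /* 10053 demo */ *
FROM   t1
WHERE  t1.user_id NOT IN (SELECT t2.user_id
                          FROM   t2, t3
                          WHERE  t2.username = t3.username);

-- Turn the trace off again
ALTER SESSION SET EVENTS '10053 trace name context off';
```

The trace is only written during a hard parse, which is why re-running an already-parsed cursor would produce nothing.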

The CBO trace showed the final form of the query after transformations – and the uncorrelated NOT IN subquery was converted to a correlated NOT EXISTS subquery:

******* UNPARSED QUERY IS *******

SELECT /*+ */ "SYS_ALIAS_1"."USERNAME" "USERNAME","SYS_ALIAS_1"."USER_ID" "USER_ID",
       "SYS_ALIAS_1"."CREATED" "CREATED"
FROM   "SYSTEM"."T1" "SYS_ALIAS_1"
WHERE  NOT EXISTS (SELECT /*+ */ 0
                   FROM   "SYSTEM"."T2" "T2","SYSTEM"."T3" "T3"
                   WHERE  "T2"."USER_ID"="SYS_ALIAS_1"."USER_ID"
                   AND    "T2"."USERNAME"="T3"."USERNAME")

The execution plan uses a correlated FILTER loop for probing the subquery, even though the nature of this query doesn’t really require correlated access. As a result, the HASH JOIN in line #3 below and the table access rowsources under it were visited 26 times – in this case once per row returned from the parent query’s row source (table T1 in line #2):

------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------
|*  1 |  FILTER              |      |      1 |        |            |      0 |00:00:00.01 |     159 |       |       |          |
|   2 |   TABLE ACCESS FULL  | T1   |      1 |     26 |     2   (0)|     26 |00:00:00.01 |       3 |       |       |          |
|*  3 |   HASH JOIN          |      |     26 |      1 |     5  (20)|     26 |00:00:00.01 |     156 |  1517K|  1517K|  301K (0)|
|*  4 |    TABLE ACCESS FULL | T2   |     26 |      1 |     2   (0)|     26 |00:00:00.01 |      78 |       |       |          |
|   5 |    TABLE ACCESS FULL | T3   |     26 |     26 |     2   (0)|    351 |00:00:00.01 |      78 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter( IS NULL)
   3 - access("T2"."USERNAME"="T3"."USERNAME")
   4 - filter("T2"."USER_ID"=:B1)

The pseudo-bind variable :B1 reported in the predicate section gives a further clue that a correlated lookup loop is used: with every loop iteration, the FILTER operation takes the next parent query row’s USER_ID value and passes it in to the filter predicate on line #4 above. The execution plan unparser (which generates the human-readable execution plan text) uses a naming convention whereby such FILTER loop variables are reported as bind variables – but the query doesn’t actually use any real (user-supplied) binds. It’s just how these query plan explainers report things…

Anyway, as I knew what kind of change we wanted to achieve, I recommended using the following hints to see whether they would give the desired result:

SELECT
    *
FROM
    t1
WHERE
    t1.user_id NOT IN (
        SELECT /*+ UNNEST HASH_AJ(t2) */ t2.user_id
        FROM   t2, t3
        WHERE  t2.username = t3.username
    );

And now the important part – this wasn’t a wild guess (like “let’s see, maybe it helps”). I deliberately recommended these hints because they control exactly the features we wanted to change.

The resulting unparsed query text after transformations was the following – the subquery was gone (no NOT IN or NOT EXISTS anymore), as it had been unnested and converted into an inline view (VW_NSO_1 below), joined to the parent query (using the WHERE condition at the bottom of the output below):

******* UNPARSED QUERY IS *******

SELECT "T1"."USERNAME" "USERNAME","T1"."USER_ID" "USER_ID","T1"."CREATED" "CREATED"
FROM   (SELECT /*+ UNNEST */ "T2"."USER_ID" "$nso_col_1"
        FROM   "SYSTEM"."T2" "T2","SYSTEM"."T3" "T3"
        WHERE  "T2"."USERNAME"="T3"."USERNAME") "VW_NSO_1",
       "SYSTEM"."T1" "T1"
WHERE  "T1"."USER_ID"="VW_NSO_1"."$nso_col_1"

A little detour:

Note that the “unparsed” query text in the CBO tracefile is FYI only. It is not the real query text actually executed! It doesn’t contain all the little details required for executing the query correctly and returning the right data. Unparsing means extracting information from the binary execution plan and converting it into human- (or DBA-) readable form, but some of the information gets lost in the process. If you ran the above unparsed query, you would get the result set of a regular join, not the NOT IN anti-join originally requested. This is why you should never assume that the unparsed query is the correct text to use in your application. It’s just FYI, for debugging.
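You can see this for yourself by running something close to the unparsed text against the test tables. Note that this is a hypothetical hand-edited rewrite, not the real internal query – the internal "$nso_col_1" column has been renamed to nso_col_1 here just so the text parses as a regular statement:

```sql
-- Approximation of the unparsed query text, runnable as a regular statement.
-- It behaves as a plain (matching-rows) join, returning the users that ARE
-- in the subquery result -- the opposite of the original NOT IN anti-join.
SELECT t1.username, t1.user_id, t1.created
FROM   (SELECT t2.user_id AS nso_col_1    -- "$nso_col_1" renamed to parse
        FROM   t2, t3
        WHERE  t2.username = t3.username) vw_nso_1,
       t1
WHERE  t1.user_id = vw_nso_1.nso_col_1;
```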

One of the pieces present in the binary execution plan, but not in the unparsed query, is visible in the plan below. See the HASH JOIN ANTI – the ANTI in the binary plan says: don’t do a regular join (returning matching rows) but do exactly the opposite (return the non-matching rows).


-----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name     | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------
|*  1 |  HASH JOIN ANTI       |          |      1 |      1 |     7  (15)|      0 |00:00:00.02 |       9 |  1023K|  1023K| 1176K (0)|
|   2 |   TABLE ACCESS FULL   | T1       |      1 |     26 |     2   (0)|     26 |00:00:00.01 |       3 |       |       |          |
|   3 |   VIEW                | VW_NSO_1 |      1 |     26 |     5  (20)|     26 |00:00:00.01 |       6 |       |       |          |
|*  4 |    HASH JOIN          |          |      1 |     26 |     5  (20)|     26 |00:00:00.01 |       6 |  1179K|  1179K| 1180K (0)|
|   5 |     TABLE ACCESS FULL | T2       |      1 |     26 |     2   (0)|     26 |00:00:00.01 |       3 |       |       |          |
|   6 |     TABLE ACCESS FULL | T3       |      1 |     26 |     2   (0)|     26 |00:00:00.01 |       3 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."USER_ID"="$nso_col_1")
   4 - access("T2"."USERNAME"="T3"."USERNAME")

When you look at the Starts column in the above plan, you see that now all the tables were scanned only once (that’s what hash joins do). So, no more revisiting the same blocks again and again for each row returned from the parent query’s table(s). Also, note the new VIEW row source which Oracle has injected between the parent query and the subquery tables – this is to make sure that the subquery tables don’t get merged into the parent query and joined in the wrong order, possibly with the ANTI-join done in the wrong place. In stored outline hints, such an injection shows up as a NO_ACCESS hint (telling the optimizer that the parent query block may not access the query block under it for merging its contents).
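If you want to see those outline hints (including NO_ACCESS) for yourself, one way is DBMS_XPLAN’s OUTLINE format option – a sketch, run right after executing the statement of interest in the same session:

```sql
-- Show the runtime statistics plus the outline hints for the last statement
-- executed in this session. The ALLSTATS part is populated only if the
-- statement ran with STATISTICS_LEVEL=ALL or the gather_plan_statistics hint.
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST +OUTLINE'));
```

The “Outline Data” section of the output lists the full hint set that would reproduce this plan, which is also a handy way to learn which hints control which transformations.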

In any case, my little test query now visited far fewer buffers, did less work and did not repeatedly scan through the same tables. Of course, this was just a synthetic test query – but nevertheless, applying the same hints to the real problem query made it run in 2 minutes instead of multiple hours.

Now, there are some outstanding questions here, like whether it’s a good idea to tune a query with hints, whether the optimizer statistics were representative of reality, why the optimizer didn’t come up with the best plan by itself, whether this was due to a bug, and so on.

But all this isn’t the point of my blog post. My point is that whenever you want to systematically fix or improve something, you first have to know what you are trying to achieve, then understand why this should actually work, and only then pick the best tools for achieving that goal (the how). As the guy who asked the question had already done most of the work – figuring out how the plan should be executed for best efficiency – finding a way to achieve that was easy.

When manually optimizing SQL, this does require that you care about your work enough to take the time to understand the capabilities of your database engine, the query and the data. Of course, you have to be smart about where you optimize manually and where you should try to get Oracle to do the right thing by itself. Phew, I’ve got to write about that stuff some other day :-)

Update: Thanks to Jonathan Lewis’s comment, I drilled further into this example and it turns out that in this case just using the UNNEST hint would be enough (HASH_AJ is not needed), as once the unnesting is forced, the CBO figures out the best (anti)join method from there. Note that this post wasn’t about hints – it was about emphasizing that if you want to systematically tune a SQL statement’s execution plan, you should first figure out what it should be doing (which join order, join methods, access paths, etc.) and then find a way to achieve that. In my case it was done with hints.
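In other words, per the update above, the hinted test query can be reduced to this (same query as before, with HASH_AJ dropped):

```sql
SELECT
    *
FROM
    t1
WHERE
    t1.user_id NOT IN (
        SELECT /*+ UNNEST */ t2.user_id
        FROM   t2, t3
        WHERE  t2.username = t3.username
    );
```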

P.S. I will schedule my Advanced Oracle SQL Tuning online course in September or October – stay tuned ;-)