Published by Dev Kordeš on 06/12/2019

How to deal with an inherited Laravel project that has numerous n + 1 query problems spread throughout it.

What is n + 1?

Laravel provides Eloquent relationships. They allow us to access related models from another model.

But if we're not careful, it's easy to introduce an n + 1 problem - which essentially means that we execute an additional query every time we access a relationship that was not eager loaded.

Let's say that we have multiple $posts and each post has an author that we can access with $post->author. If we did not eager load the author, Laravel will still let us access it, but it will execute a query to get it. Now imagine we didn't eager load it and we're doing something like this:

```php
$posts = Post::all();

foreach ($posts as $post) {
    dump($post->author);
}
```

What happens here is that every time $post->author is accessed, Laravel executes an additional query to get the author. Something like this, once for every post:

```sql
select * from authors where id = ?
```

This assumes the posts table has an author_id column, by which the author is queried.
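For completeness, such a relationship is typically defined on the Post model roughly like this (a sketch; the Author model name is assumed from the example):

```php
class Post extends Model
{
    // Eloquent derives the author_id foreign key from the method name.
    public function author()
    {
        return $this->belongsTo(Author::class);
    }
}
```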

If we have 10 posts, this will execute 1 query to get all the posts and 10 additional queries to get their authors. Obviously, this is bad! We want to get all the authors with a single query, and Laravel solves this with eager loading (using with('relationship')):

```php
$posts = Post::with('author')->get();

foreach ($posts as $post) {
    dump($post->author);
}
```

In the example above Laravel will execute exactly 2 queries, regardless of the number of posts. The first query will still fetch all posts, and the 2nd query will fetch all their related authors.

Now, instead, the authors query will be something like this:

```sql
select * from authors where id in (?, ?, ?, ?...)
```

And Laravel will match the authors to their respective posts.
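The matching step is conceptually just keying the authors by id and attaching each one to its post. A framework-free sketch in plain PHP (the arrays stand in for the rows the two queries return):

```php
<?php

// Simulated results of the two queries Laravel runs when eager loading.
$posts = [
    ['id' => 1, 'title' => 'First',  'author_id' => 10],
    ['id' => 2, 'title' => 'Second', 'author_id' => 20],
    ['id' => 3, 'title' => 'Third',  'author_id' => 10],
];
$authors = [
    ['id' => 10, 'name' => 'Alice'],
    ['id' => 20, 'name' => 'Bob'],
];

// Key the authors by id so each post can find its author without a query.
$authorsById = [];
foreach ($authors as $author) {
    $authorsById[$author['id']] = $author;
}

// Attach the matching author to every post.
foreach ($posts as &$post) {
    $post['author'] = $authorsById[$post['author_id']] ?? null;
}
unset($post);

echo $posts[2]['author']['name'], "\n"; // prints "Alice"
```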

Existing n + 1 problems

We all agree that this should not happen. But anyone who has ever worked in the real world knows this does happen. A project is large, developers come and go, some senior, some less senior. Some are aware of this, some aren't, sometimes this gets caught in PR review, sometimes it doesn't. 3 years later there are way too many API calls that execute these bad boys.

A recent example I came across - which caused over 100 additional queries per API call - was something like this:

```php
class CarResource
{
    public function toArray()
    {
        return [
            'url' => $this->getUrl(),
        ];
    }
}
```

And if we peek into the Car model, we see a method like this:

```php
public function getUrl()
{
    return url("/cars/{$this->company->name}/{$this->name}");
}
```

As we can see, getUrl() relies on the parent company's name! And some of our API calls returned over 100 cars without eager loading their company, resulting in over 100 additional queries.

The solution itself seems simple: we add eager loading to the API and we're good. Except there might be dozens of API calls that also return the CarResource, and it can be extremely time consuming to find them all.

My solution

I wasn't going to spend the week finding all the bad actors, especially since the product has been live for a long time and, despite the inefficiencies, obviously works. Instead, my solution is to log them and kill them one by one as they arise.

Make a custom exception

First, I created a new exception:

```php
namespace App\Exceptions;

use Exception;
use Sentry;

class LazyLoadingException extends Exception
{
    public function report()
    {
        Sentry::captureException($this, [
            'level' => 'warning',
            'message' => $this->getMessage(),
        ]);
    }
}
```

The exception implements Laravel's report() method. Laravel automatically calls this method whenever the exception is reported - which lets us define how (or if) to report that the exception was thrown.

In the above example, I just send it to Sentry, a dedicated error reporting service. We could also just do something like:

```php
public function report()
{
    Log::warning($this->getMessage());
}
```

Reporting the exception without throwing

Making the exception is a good start, but we also need to make use of it. And we don't actually want to throw the exception; we still want the code to keep executing.

So let's revisit our bad offender and see how we can do so using Laravel's relationLoaded($relation) method and the report() helper:

```php
public function getUrl()
{
    if (! $this->relationLoaded('company')) {
        report(
            new LazyLoadingException("'company' relation was not eager loaded")
        );
    }

    return url("/cars/{$this->company->name}/{$this->name}");
}
```

First, we check whether the relationship was eager loaded using the relationLoaded('company') method on the model. If we detect it wasn't, we simply report the exception to Sentry and continue executing the code below.

BAM! Easy and relatively elegant! We now have a custom exception we can use wherever we want to log potential offenders.

But Sentry will hate us

There is, however, one problem. As I said earlier, this caused some API calls to call the method over 100 times. If we go with this solution, we will report the exception 100 times per API call! a) We'll hit Sentry's API limits in no time, and b) we don't actually want hundreds of reports for the same problem.

So what I did was cache the fact that the report was sent, and prevent it from being sent again while the cache entry exists. But what to use as the cache key? Well, considering that we want to log every API call where this happens, and knowing that this method was not used in non-request-based environments (CLI, queued jobs, ...), I simply used the current URL as the cache key.
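Framework aside, the deduplication is just a keyed guard with a TTL. A minimal plain-PHP sketch of that logic (the ThrottledReporter class and its in-memory store are made up for illustration; the real exception below uses Laravel's Cache facade instead):

```php
<?php

// In-memory stand-in for a cache with TTL semantics.
class ThrottledReporter
{
    private array $sentAt = [];

    public function __construct(private int $ttlSeconds) {}

    // Returns true if the report was actually sent, false if suppressed.
    public function report(string $url, callable $send): bool
    {
        $key = 'lazy-exception-' . $url;
        $now = time();

        if (isset($this->sentAt[$key]) && $now - $this->sentAt[$key] < $this->ttlSeconds) {
            return false; // already reported within the TTL window
        }

        $send();
        $this->sentAt[$key] = $now;
        return true;
    }
}

$reporter = new ThrottledReporter(24 * 3600);
$sendCount = 0;
$send = function () use (&$sendCount) { $sendCount++; };

$reporter->report('/api/cars', $send);    // sent
$reporter->report('/api/cars', $send);    // suppressed: same URL, within TTL
$reporter->report('/api/garages', $send); // sent: different URL

echo $sendCount, "\n"; // prints 2
```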

Let's revisit the Exception:

```php
public function report()
{
    $cacheKey = 'lazy-exception-' . request()->url();

    if (Cache::has($cacheKey)) {
        return;
    }

    Sentry::captureException($this, [
        'level' => 'warning',
        'message' => $this->getMessage(),
    ]);

    Cache::put($cacheKey, 1, now()->addHours(24));
}
```

BAM! The exception now only gets reported once every 24 hours per unique URL! This means we're not littering our error reporting service or logs, but we'll still get a report for every URL where we forgot to eager load the company relation!