Last Friday marked the release of Unity 5.3.5p7, a minor patch that passed by without notice or fanfare. In this patch, however, there is a bullet point nestled amongst the multitudes of other fixes that deserves much more attention.

That’s right. Optimizing your compile time is back!* After upgrading, my compile time went from approximately 17 seconds to 6.3 seconds.

*(After Unity 5.2.4 most versions have a compiler bug which negates the most effective compile time optimization strategy)

But I’m not just here to describe how amazing it is to wait 10 seconds less every time you make a code change; I’m here to ask: why aren’t more people excited to see this bug fixed? Does the thought of reduced compile times not incite joy in others?

Pictured: Not enough joy.

My only conclusion is that most people don’t realize they can optimize compile time. There’s no official documentation and relevant forum posts are out-of-date, so getting the big picture is not easy. But once you have that, reducing compile time is simple and the benefits are enormous.

So, here’s my personal strategy on battling compile time:

1. Get the Facts

First you need to find out how long your code takes to compile. For my own one-person codebase, it was hovering around 17 seconds; for larger teams, a fair estimate would be around 30 seconds. To put that in perspective: I make around 20–30 code changes an hour, so waiting 30 seconds instead of 6.3 seconds would add roughly 10 extra minutes of waiting to every hour of my schedule.

Imagine how much that adds up through the day. Throughout the week. Each month. Then multiply that number by the number of people on the team. And it’s not just about the wasted time: A longer compile time means you’re much more likely to get distracted and start checking your email or Facebook.

So first you need to know your compile time. And you want to keep tracking it over time, so you don’t check in anything that drastically increases it. To make both the measuring and the tracking painless, I’ve written an editor extension that you can download from my GitHub here.
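For reference, here is a minimal sketch of how such a timer can work. This is not the actual extension’s source, and the pref key name is my own invention. The wrinkle is that the post-compile domain reload wipes all static state, so the start time has to survive somewhere persistent, like EditorPrefs:

```csharp
using System;
using UnityEditor;
using UnityEngine;

[InitializeOnLoad]
public static class CompileTimeLogger
{
    const string StartKey = "CompileTimeLogger.start"; // hypothetical key name

    static CompileTimeLogger()
    {
        // The static constructor runs again after the post-compile domain
        // reload, so a stored start time means a compile just finished.
        string stored = EditorPrefs.GetString(StartKey, "");
        if (stored != "")
        {
            EditorPrefs.DeleteKey(StartKey);
            long startTicks = long.Parse(stored);
            double seconds = (DateTime.UtcNow - new DateTime(startTicks)).TotalSeconds;
            Debug.Log("Compile + reload took " + seconds.ToString("F1") + "s");
        }
        EditorApplication.update += OnEditorUpdate;
    }

    static void OnEditorUpdate()
    {
        // Record the moment compilation starts; EditorPrefs survives the reload.
        if (EditorApplication.isCompiling && !EditorPrefs.HasKey(StartKey))
            EditorPrefs.SetString(StartKey, DateTime.UtcNow.Ticks.ToString());
    }
}
```

Note that this measures compile time plus the domain reload (including any InitializeOnLoad work), which is arguably the number you actually care about: the total time you spend waiting.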

2. Optimize!

Before we optimize, let me give you some background on how this all works. The moment you make a code change, Unity recompiles your code into a .dll (a compiled code library) called Assembly-CSharp.dll (or Assembly-UnityScript.dll, etc., if you aren’t writing in C#).

To reduce the amount of time it takes to recompile, you can use special folders whose contents Unity compiles into a separate .dll called Assembly-CSharp-firstpass.dll. The key part is that code in this .dll won’t recompile as long as you don’t change it. These special folders are named Plugins and Standard Assets and must sit at the top level of the Assets directory.

Things to note:

Due to the order these .dlls are compiled in, code inside this firstpass.dll won’t be able to reference code outside of it.

Changing code inside the special folders will recompile both .dlls and negate any optimization gains.
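To make the compile-order restriction concrete, here’s a minimal sketch (the class names and file paths are invented for illustration, not from my project):

```csharp
using UnityEngine;

// Lives in Assets/Plugins/PluginHelper.cs, so it compiles into
// Assembly-CSharp-firstpass.dll, which is built first.
public class PluginHelper : MonoBehaviour
{
    // GameManager lives outside Plugins (in Assembly-CSharp.dll), which
    // doesn't exist yet when firstpass compiles, so this would fail:
    // GameManager manager;   // error CS0246: type or namespace not found

    public void Ping() { Debug.Log("pong"); }
}

// Lives in Assets/Scripts/GameManager.cs, so it compiles into
// Assembly-CSharp.dll, which references firstpass.
public class GameManager : MonoBehaviour
{
    // The other direction is fine: firstpass is already built by now.
    PluginHelper helper;
}
```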

With that in mind, the best candidates for optimization are sections of code that aren’t dependent on other code and are unlikely to change. Third-party assets are perfect. Other parts of your codebase might make sense to optimize as well.

However, I did find a problem with simply dragging and dropping all of my third-party assets into the Plugins folder. Some of those assets expect their resources to reside at their original import paths and break horribly when they don’t. So to save myself time I bought Mad Compile Time Optimizer ($15)***, which leaves the non-code files where they are and also comes with a nice Revert feature.

*** I am not associated with Mad Compile Time Optimizer in any way.

3. Look at Other Suspects

Now that your third-party assets have been optimized, you should see a huge reduction in compile time. This is normally where people stop, but there are still a few places we can check for performance gains.

The [InitializeOnLoad] attribute is one such place. For those who are unfamiliar, InitializeOnLoad is a Unity attribute that can be added to any class so that its static constructor is called on editor launch (or when compilation finishes). Any inefficient code there can add a non-trivial amount to your compile time.

After some experimentation, I’ve found that, yes, code that runs during InitializeOnLoad counts toward overall compile time.

To analyze instances of InitializeOnLoad, I logged the time taken from the start to the end of any static constructor with suspicious-looking code. Most instances of InitializeOnLoad were harmless (0.0–0.2s), but I did come across one class that was abusing it to load and cache resources. I changed the offending piece of code to cache lazily, when needed, and moved on.
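As a sketch of both the measurement and the fix (the class name and resource path are invented, not the actual offender):

```csharp
using System.Diagnostics;
using UnityEditor;
using UnityEngine;
using Debug = UnityEngine.Debug; // disambiguate from System.Diagnostics.Debug

[InitializeOnLoad]
public static class IconCache
{
    static Texture2D[] icons;

    static IconCache()
    {
        // Measuring: wrap the suspicious constructor body in a Stopwatch.
        var watch = Stopwatch.StartNew();

        // Before the fix, the offending load ran here, on every compile:
        // icons = Resources.LoadAll<Texture2D>("EditorIcons");

        watch.Stop();
        Debug.Log("IconCache static ctor: " + watch.ElapsedMilliseconds + "ms");
    }

    // After the fix: load on first access instead, so compiles stay fast.
    public static Texture2D[] Icons
    {
        get
        {
            if (icons == null)
                icons = Resources.LoadAll<Texture2D>("EditorIcons");
            return icons;
        }
    }
}
```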

The last thing to check is your own code. Are there classes that aren’t being used? You probably don’t need them taking up time in your project. I haven’t figured out a good way to log compile time per class file, but as a rule of thumb, file size is a reasonable proxy for how much a file costs to compile.

A visualization is the easiest way to prioritize what files are worth taking a look at. I used GrandPerspective (Mac) because it can filter results by custom rules. I ended up making a filter that matches names ending with .cs and another to remove files with paths that contained ‘Plugins’ or ‘Standard Assets’.
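If you’d rather generate that ranking from inside the editor instead of a separate tool, a small utility along these lines would work (the class name and menu path are my own choices, not an existing API):

```csharp
using System.IO;
using System.Linq;
using UnityEditor;
using UnityEngine;

public static class LargestScriptsReport
{
    // Hypothetical menu entry: Tools > Report Largest Scripts
    [MenuItem("Tools/Report Largest Scripts")]
    static void Report()
    {
        // Rank .cs files by size, skipping the already-optimized special folders.
        var files = Directory.GetFiles("Assets", "*.cs", SearchOption.AllDirectories)
            .Where(p => !p.Contains("Plugins") && !p.Contains("Standard Assets"))
            .OrderByDescending(p => new FileInfo(p).Length)
            .Take(20);

        foreach (var path in files)
            Debug.Log((new FileInfo(path).Length / 1024) + " KB  " + path);
    }
}
```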

This is what my project’s codebase looked like at the end of optimization. Note that my personal code is the only code compiling after a change!

Yes, this is all code. Some of those TextMeshPro classes are massive.

TL;DR: