At Snagajob we have regular tech talks, and in a recent one there was some discussion on the merits of dependency injection. In the Java world the pattern has been the standard for many years now, but this is not true for all programming languages. It simply isn’t a fit for some languages, and in others, the community has a different narrative that leads it to weigh the pattern’s merits differently.

I think that DI is a good pattern for C# (the language we currently use most at Snagajob, though we also have projects written in Python and Scala, for instance), but I have a hunch that the pattern isn’t as big a deal in this language as it is in Java. In this post, I’d like to get into what I think might be the historical reason for this, set out what the pattern exactly is, discuss whether you need a framework for it (spoiler: you probably do, but only if it is a good one) and look at what the alternatives are.

TL;DR

Often in discussions about dependency injection, people mix the pattern with how it is supported by containers. There is plenty to criticize when it comes to many of these containers, but the core pattern is easy to appreciate:

Instead of letting objects assemble their own dependencies,

classes state what dependencies they require from clients in order to be constructed and used.

While the pattern is very simple, it can have quite a large effect on your code organization. If done well, a positive one, in my humble opinion.
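As a minimal sketch of that difference (in Java, with hypothetical WindowWasher/SoapBottle names in the spirit of the examples later in this post):

```java
class SoapBottle {
    String squirt() { return "soap"; }
}

// Without DI: the class assembles its own dependency and hard-wires the choice.
class SelfServiceWasher {
    private final SoapBottle soapBottle = new SoapBottle();
}

// With DI: the class states what it needs; the client constructs and provides it.
class WindowWasher {
    private final SoapBottle soapBottle;

    WindowWasher(SoapBottle soapBottle) {
        this.soapBottle = soapBottle;
    }

    String wash() { return soapBottle.squirt() + " + water"; }
}
```

The client then writes new WindowWasher(new SoapBottle()), and that is all the “injection” there is to it.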

The Java Enterprise Edition DI rebellion

Until just a few years ago, a substantial share of the (corporate) Java community was pushing Java as the best choice for developing ‘Enterprise’ applications. Vendors such as IBM, Oracle, Sun (now Oracle), BEA (now also Oracle) and JBoss (RedHat) sold application servers to host software coded to the Java Enterprise Edition (JEE) specs and interfaces, providing value through packing these servers with a plethora of features: live fine-grained configuration, monitoring, application-server-specific goodies (like clustering and exclusive libraries), and limitless scalability while they were at it. And in order to make Java attractive to coders with a wide range of skills (or should I say limitations), they made sure the specifications were IDE friendly, so that you could replace code with drag-and-drop interfaces and wizards. The vision of where software engineering was headed back then consisted of 4th-generation languages and the commoditization of high-level software components (e.g. IBM’s San Francisco project) that would eventually replace programmers with business analysts.

Unfortunately, writing software to these standards was just terrible. You’d need approximately 1,244 interfaces, 2,488 implementation classes and 4,302 configuration files to implement Hello World, and even that was probably considered cutting corners and not quite up to Enterprise standards back then. Code then needed to be triple wrapped in zip files with fancy extensions and metadata files packaged alongside, and deployed via a pipeline that, besides a lot of clicking at clunky web interfaces, involved jumping through flaming hoops and having your packages stamped and rewrapped by the International Committee of Astronaut Architects. It’s a wonder anything got built with Java at all during these years.

And then along came the Spring framework. Spring was a clever rebellion against JEE, providing an alternative container model that enabled its users to work with plain objects rather than EJBs, and in general a much simpler programming model. It started as companion code to a book about JEE design, and kept enough silly things around (like setting up your object graph with XML and requiring interfaces for absolutely everything) to fly under the radar of the “but does it scale for the enterprise?” crowd. But it was actually focused on empowering developers: giving them options instead of limitations, and getting out of the way where developers didn’t need the hand holding. Once developers figured that out, the Spring framework rocketed to the dominant position it still holds today.

While the most important selling point of Spring was probably that it wasn’t JEE as usual, Dependency Injection was a core element of it. Spring basically put DI on the map, and as Spring has been the most popular Java framework ever since, people in Java land are simply used to dependency injection as the default way of wiring objects together.

So, coming from my Java bubble, I was somewhat shocked to hear several people express their distaste for DI. They may have good reasons for that, but I believe DI has a few neat properties as well, which I’d like to go into next.

Hollywood and grades of magic

Before dependency injection, there was inversion of control (IoC). If you read up on these terms now, IoC is presented as the umbrella term that DI falls under, but I remember DI as basically a rebranding of IoC. Regardless, early articles about IoC/DI would mention “the Hollywood principle”, or: “don’t call us, we’ll call you”. The idea is that if you write your classes according to a particular interface, you should be able to rely on their clients to properly construct and initialize instances of them. This means these classes need very little knowledge about the environment they operate in, which neatly aligns with aims like the single responsibility principle and encapsulation.

And that’s all there is to it, really. Dependency Injection simply means that when you follow an agreed upon pattern when writing your classes, you can expect clients (or environment) of these classes to provide you with (inject) the dependencies you require.

But how?

There are two big flavors when it comes to how DI is typically implemented: property (setter or member) injection and constructor injection.

Simple property based DI looks like:

public class WindowWasher
{
    public SoapBottle SoapBottle { private get; set; }
}

Here the assumption is that all relevant dependencies are set after construction and before using the class. Problems with this pattern are (without advanced container support):

it isn’t clear which properties are required to be set before use;

there is no guarantee that clients will satisfy dependencies before trying to use objects;

if dependencies rely on each other, initializing an object may be even more tricky.
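To make the first two problems concrete, here is a sketch in Java (names hypothetical) where a client forgets to call the setter and only finds out when the object is actually used:

```java
class SoapBottle {
    String squirt() { return "soap"; }
}

class WindowWasher {
    private SoapBottle soapBottle;  // nothing marks this as required

    public void setSoapBottle(SoapBottle soapBottle) {
        this.soapBottle = soapBottle;
    }

    public String wash() { return soapBottle.squirt() + " + water"; }
}

class Demo {
    public static void main(String[] args) {
        WindowWasher washer = new WindowWasher();
        // washer.setSoapBottle(new SoapBottle());  // easy to forget...
        try {
            washer.wash();                          // ...and it only blows up here
        } catch (NullPointerException e) {
            System.out.println("dependency was never injected");
        }
    }
}
```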

Interface injection tries to improve on the lack of clarity of this pattern:

public interface IWindowWasher
{
    SoapBottle SoapBottle { set; }
}

public class WindowWasher : IWindowWasher
{
    public SoapBottle SoapBottle { private get; set; }
}

The assumption here is that clients satisfy the dependencies stated in the interface. The class can have many more properties; only the ones in the interface should be set before using the object.

Containers like Spring nowadays offer better support for setter/member injection, which makes the interface injection pattern moot. Realize though, that even when a container gives you a tighter contract, there is no guarantee that clients will actually use that particular container. That may or may not be an issue for you.

And finally, constructor injection:

public class WindowWasher
{
    readonly SoapBottle soapBottle;

    public WindowWasher(SoapBottle soapBottle)
    {
        this.soapBottle = soapBottle;
    }
}

In this case, all required dependencies are stated as constructor arguments, which the client has to provide in order to construct objects of the class. With constructor injection, there is no doubt about what a class needs in order to be constructed for use, and since the dependencies are passed in together before the object can be used, they may rely on each other without magic or a more complicated contract. And the constructor implementation can enforce required vs. optional dependencies.
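That enforcement can be as simple as a null check in the constructor, with optional dependencies handled by an overload. A Java sketch (names hypothetical):

```java
import java.util.Objects;

class SoapBottle {}

class WindowWasher {
    private final SoapBottle soapBottle;  // required
    private final String slogan;          // optional

    WindowWasher(SoapBottle soapBottle) {
        this(soapBottle, "squeaky clean"); // optional dependency gets a default
    }

    WindowWasher(SoapBottle soapBottle, String slogan) {
        // a missing required dependency fails fast, at construction time
        this.soapBottle = Objects.requireNonNull(soapBottle, "soapBottle is required");
        this.slogan = slogan;
    }

    String describe() { return slogan; }
}
```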

In my humble opinion, only constructor injection is worth using. In a few cases it may be convenient to have support for property/interface injection as a workaround for when constructor injection doesn’t cut it, and most containers support all the flavors. So, imho, you should aim for constructor injection and use the alternatives only as a last resort.

When it comes to container support, there are also several flavors to discuss.

No container

Yep, that’s right, it is perfectly acceptable to not use any container at all! This can mean quite a bit of hand coding, but the code will be easy to navigate and analyze.

For instance:

Scent scent = new Scent("flav", "flov");
bool isSoft = true;
Soap soap = new Soap(scent, isSoft);
SoapBottle soapBottle = new SoapBottle(soap);
WindowWasher washer = new WindowWasher(soapBottle);

washer.WipeItGood();

When you have many dependencies though, every time a class changes its constructor signature (assuming constructor injection), all its clients, and their clients, and so forth, will have to be changed. Good IDEs do this without much trouble, but in the absence of one, it can be quite the hassle (and it also results in larger change logs that can obscure which changes are relevant between commits).

DI containers are responsible for satisfying dependencies and can help you cut down on the code required to wire objects together. Some containers do a better job at that than others, and some may make your coding life such a hassle that it is arguably better to do without.

Requiring explicit configuration

The earliest Spring versions required you to configure your dependencies in XML files. I’d avoid this at all cost, as it hardly saves you any code and, without additional tooling, makes it harder to figure out how your object graph is put together.

There are also plenty of frameworks that allow you to bind interfaces to implementations and types to instances via code. This is then required to be done as part of the bootstrapping code and is hence easy to find. Having to declare everything you’ll potentially want injected elsewhere isn’t fantastic either though; it scales poorly for larger projects and makes it harder to modularize parts of your code.
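The shape of that explicit, code-based binding style can be mimicked with a toy registry in Java (this is a simplification for illustration, not any real framework’s API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A toy container: every binding must be registered up front,
// which is exactly the part that scales poorly in larger projects.
class ToyContainer {
    private final Map<Class<?>, Supplier<?>> bindings = new HashMap<>();

    <T> void bind(Class<T> type, Supplier<? extends T> factory) {
        bindings.put(type, factory);
    }

    <T> T resolve(Class<T> type) {
        Supplier<?> factory = bindings.get(type);
        if (factory == null) {
            throw new IllegalStateException("no binding for " + type.getName());
        }
        return type.cast(factory.get());
    }
}

interface IWindowWasher {}
class WindowWasher implements IWindowWasher {}
```

Bootstrapping then consists of one bind call per type, e.g. container.bind(IWindowWasher.class, WindowWasher::new); every new class means another registration line.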

Convention / meta data based

Later versions of Spring loosened the requirement to configure everything explicitly, and it now supports bindings via classpath scanning: classes that should be picked up by the framework can be annotated, and Spring follows some simple rules, like: when a single implementation is found for an interface, that must be the intended binding for that interface.
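In Spring terms that looks roughly like the following configuration sketch (the annotations are Spring’s; the classes are hypothetical, and this requires Spring on the classpath):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component                      // picked up by classpath scanning
class SoapBottle {}

@Component
class WindowWasher {
    private final SoapBottle soapBottle;

    @Autowired                  // Spring injects the single SoapBottle bean
    WindowWasher(SoapBottle soapBottle) {
        this.soapBottle = soapBottle;
    }
}
```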

My personal favorite framework in Java land is Guice, which provides a nice mix of explicit configuration in code and some heuristics. It will try to resolve instances lazily (for instance, when a class is referred to, it’ll try to instantiate it, picking either the no-args constructor or the single constructor annotated with @Inject), even when that class is never explicitly configured. Another great feature imho (though not universally loved) is that you can annotate interfaces to refer to their default implementation, and then override that on a case-by-case basis via explicit configuration if you need to.

C# code comparable to what you’d write with Guice would look like:

[ImplementedBy(typeof(WindowWasher))]
public interface IWindowWasher
{
    void WipeItGood();
}

public class WindowWasher : IWindowWasher
{
    readonly SoapBottle soapBottle;

    [Inject]
    public WindowWasher(SoapBottle soapBottle)
    {
        this.soapBottle = soapBottle;
    }

    public void WipeItGood() { ... }
}

The [ImplementedBy] annotation covers the case where you need to inject an interface rather than the class directly. The above code would require no explicit configuration.

What’s not to like?

So what is it exactly that makes people say they don’t like DI? I believe it has to do with the frameworks they used rather than the pattern itself. Because while the differences are minimal on the surface, they can have a pretty huge impact on your day-to-day experience. In a nutshell, I think some frameworks either:

Use too much magic. Ok, it is neat when the framework tries to help you out. But when it does, stack traces should remain small, resolution rules should be simple and predictable, and boot time should not increase by more than just a few seconds. And when it fails, error reporting should make it easy to quickly figure out where things went wrong.

Use too little magic. So instead of writing some code here and there to instantiate your classes when needed, you’re now stuck with a gigantic blob of code ‘registering’ all possible instantiations up front? Or worse, you have to declare all of that in a gigantic XML file? “No thanks” is what many developers will say, and I don’t blame them!

When the balance is struck just right, as is the case with Guice, I think that DI is definitely a boon. Even without a good framework the pattern is useful, just much less convincing.

Not convinced?

Let’s briefly go over the alternatives to dependency injection.

1. Just construct dependencies yourself.

public class WindowWasher
{
    SoapBottle soapBottle;

    public WindowWasher()
    {
        Scent scent = new Scent("flav", "flov");
        bool isSoft = true;
        Soap soap = new Soap(scent, isSoft);
        soapBottle = new SoapBottle(soap);
    }

    ...
}

No-one should do this except in the simplest of cases. In the above example, if you have another class that needs a soap bottle, you’ll have to go through the same motions again, and you’ll end up with lots of code duplication.

2. A more reasonable implementation is to use static factories.

public class SoapBottleFactory
{
    public static SoapBottle SoapBottle { get; set; }
}

The idea then is that you can get a soap bottle instance via the factory directly. But:

There is no guarantee that outside code doesn’t change the instance on the factory while the system is running, which opens up a bunch of potential problems.

You’ll still need bootstrapping code to set up dependencies much like when using dependency injection, but without any guarantees. It may sound convenient to be able to use code without all dependencies being satisfied up-front, but it’s probably going to cause a few bugs at some point.

Basically, using the static factory pattern can work fine, but is dependent on the programmers using it being disciplined. And hell is other programmers :-)

3. Finally, there is the Service Locator.

What this exactly means seems to depend on who you ask, but basically, it is an indirection for getting your dependencies. Instead of going directly to a factory or stating the dependencies you require, you ask the service locator for the instances you need.

If you get a reference to the locator statically, this isn’t much different from the static factory pattern, with all the potential problems that come with it. This seems to be the common understanding of the pattern, and it is generally discouraged.
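A minimal static locator in Java (a sketch, with hypothetical names) shows why it is hard to distinguish from a global, mutable variable:

```java
import java.util.HashMap;
import java.util.Map;

// A bare-bones service locator: effectively a global, mutable map.
class ServiceLocator {
    private static final Map<Class<?>, Object> services = new HashMap<>();

    static <T> void register(Class<T> type, T instance) {
        services.put(type, instance);
    }

    static <T> T locate(Class<T> type) {
        return type.cast(services.get(type));
    }
}

class SoapBottle {}

class WindowWasher {
    // The dependency is hidden inside the method body instead of the
    // constructor signature, and anyone can swap it out at runtime.
    String wash() {
        SoapBottle bottle = ServiceLocator.locate(SoapBottle.class);
        return bottle == null ? "no soap" : "clean";
    }
}
```

Nothing forces the SoapBottle to be registered before WindowWasher is used, and nothing stops other code from re-registering a different one mid-run.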

Concluding

Dependency Injection is a useful pattern for languages like Java and C#. Other languages have different solutions for service assembly that may resemble DI somewhat or be entirely different, but what they’ll (probably) have in common is that good implementations are explicit about what they expect from their environment in order to function, and keep assumptions about the environment the code runs in to a minimum.

When you decide to use DI, selecting a container that supports the pattern can help you cut down on plumbing code, and it may be able to do a few neat tricks for you (e.g. AOP). That said, DI without a container is just as viable, and even when you use DI, you shouldn’t make it your religion (using ‘new’ to create instances is not a crime).