D has a lot of features which I like very much. And it has a few design decisions that completely kill it for me.

In D I don’t like making my objects structs, and I don’t like making my objects classes. I would like to have more control over how my type behaves.

The garbage collector has too much of an impact on the core language and the standard library.

Everything has shared ownership by default. C++11 introduced features that make it easier to clearly indicate who owns what. D went in the opposite direction.

The issue with all of these is that they are so fundamental to the language that you can not ignore them. In C++ I always have the option to ignore a feature. In D I do not.

The first one is a fairly small annoyance: I don’t have full control over what a class or a struct is in D. There are downsides to both structs (no inheritance, no default constructor, undefined garbage collection behavior) and classes (everything is virtual, more stuff in them than I want, shared ownership between all references). My usual object is a mix of the two: it has value type semantics, but it often needs a default constructor or needs to implement an interface. You can make a class more struct-like by making it a scope class (allocated on the stack, no shared ownership) or a final class (no inheritance), but you can not make a struct more class-like, which is usually what I want.

A bigger issue is that the garbage collector is a complete mess. The worst part is that you can not really disable it: if you want to, you pretty much have to implement replacements for the entire standard library and much of the core language, and then never use some core features again. But then you’d be using the language wrong. So the result is that you give up and use deterministic memory management where possible, and the GC where avoiding it would be too much work.

But it turns out that a garbage collector will often not do what you want, and you have to work against it. For example std.signals works hard to escape the GC because it doesn’t want the signal to keep all connected slots alive. I wrote my own signals/slots implementation (because the standard one only accepts class member functions as slots) and ran into unpredictable bugs because I was escaping the GC incorrectly: I was storing closures inside memory that the GC didn’t know about, so the GC went ahead and collected everything those closures were referencing. The presence of the garbage collector makes memory management code even more difficult to write.

The alternative to working around the GC is doing reference management: when you want an object to be destroyed, you go through everything that references it and remove the reference. I am specifically thinking of things like an update or render loop, or signals that you are connected to. You obviously also have to disconnect from those in C++, but in C++ you can do it in the destructor. And there is a real difference in code quality between mostly relying on the default destructor, which calls your member and base class destructors for you, and having to walk through all of that yourself and call a method.

The garbage collector is a ridiculously leaky abstraction: beyond small programs, almost everyone runs into its limitations. Yes, reference management is an easier problem than memory management, but I am convinced that you will never find a large garbage collected program that doesn’t also do manual memory management, whether by re-using instances with a free-list or by calling functions like assumeSafeAppend() on a D slice. Which could be fine, but don’t design your language as if you didn’t need to care about memory. If you work with the D language and the D standard library, you will design your interfaces as if you didn’t care about memory. And then you have a problem when your program grows. I think that C++11 has shown much better ways to solve this than garbage collectors have.

And I would have still been fine with a language that has all that. I wouldn’t use that language with a big team, but it seemed fine for my own projects. I had written enough code that would mostly allow me to ignore the GC. But the big killer is that then everything is shared by default.

In D anyone who has a reference to something owns it. And since most things are references you always end up with a lot of people owning a lot of objects.

Read this article about slices in D and tell me that you are not surprised by the design decisions. In D there is no reliable way to tell the difference between an array and a slice of an array. This makes slicing enormously convenient and efficient, but makes appending slow by default and unpredictable in general. And I don’t just mean unpredictable performance, but also unpredictable behavior.

The slice article has this example:

    char[] fillAs(char[] buf, size_t num)
    {
        if(buf.length < num)
            buf.length = num; // increase buffer length to be able to hold the A's
        buf[0..num] = 'A';    // assign A to all the elements
        return buf[0..num];   // return the result
    }

If you pass in a slice that is shorter than num, this function will either modify the memory behind the original slice, or allocate and return a new one, depending on how big the internal capacity is. The solution is either to pass the original in by reference and make it explicit that it will be modified, or to make a copy explicitly and make it clear that you return new memory. But the default behavior being unpredictable is a mess.

This would have been easy to prevent by making a slice a different type from an array, where the array owns the memory and the slice does not. In fact I will probably adopt exactly that solution in C++ in the future because slices are such a good idea. But that solution would go against D’s philosophy of not having to worry about memory.

The problem looks similar for classes: all pointers share ownership, and you have to rely on convention to determine who may modify what, and when. Since C++ introduced std::unique_ptr I never want to go back to such a system. It reminds me too much of naked pointers and the lack of compile-time enforcement when you deal with them. Yes, the C++ approach of “in case of doubt, copy everything” has its own problems, but in my experience they tend to be easier to fix than sharing everything by default. Storing a std::vector by reference or pointer stands out more than storing it by copy, and I think that is good. In D you can’t tell the difference.

The designers of D have realized that sharing is a problem across threads. So by default objects can not be accessed by other threads, unless you say that they are shared. Somehow they have failed to realize that the same problem exists within a single thread once you get past neat and clean examples. I think the only thing that explains this oversight is that the designers thought “without memory management you don’t care about ownership any more.” But clear ownership gives you many benefits beyond memory management.

Unfortunately I don’t think that these things can be fixed. They are too fundamental to D. You could write an alternative core runtime and standard library to fix many of the issues well enough that I’d like the language again, but repeating that debacle would be terrible for the D community.

I have other fundamental issues (for example with D’s const system and the pure and nothrow keywords) but those are more fixable and this article is already too long. As it is I don’t know who D is for. In the areas where I still find D appealing (small scale projects that don’t care about low level performance) I would just use Python instead and save myself some friction. But for big projects I’ll continue to use C++.