Object-Oriented Programming

“Java is the most distressing thing to happen to computing since MS-DOS.” - Alan Kay, the inventor of Object-Oriented Programming

Photo by Vanessa Bucceri on Unsplash

Object-Oriented Programming is a popular programming paradigm used for code organization. This section discusses the limitations of mainstream OOP as used in Java, C#, JavaScript, TypeScript, and other OOP languages. I’m not criticizing proper OOP (e.g. Smalltalk).

This section is completely optional, and if you think that using OOP is a must when developing software, then feel free to skip this section. Thanks.

Good programmers vs bad programmers

Good programmers write good code and bad programmers write bad code, no matter the programming paradigm. However, a programming paradigm should constrain bad programmers from doing too much damage. Of course, this is not you, since you are already reading this article and putting in the effort to learn. Bad programmers never have the time to learn; they just keep hammering away at the keyboard. Whether you like it or not, you will be working with bad programmers, some of whom will be really, really bad. And, unfortunately, OOP does not have enough constraints in place to prevent bad programmers from doing too much damage.

Why was OOP invented in the first place? It was intended to help with the organization of procedural codebases. The irony is that OOP was supposed to reduce complexity; however, the tools it offers only seem to increase it.

OOP non-determinism

OOP code is prone to non-determinism — it heavily relies on mutable state. Functional programming guarantees that we will always get the same output, given the same input. OOP cannot guarantee much, which makes reasoning about the code even harder.

As I said earlier, in non-deterministic programs the output of 2 + 2 or calculator.Add(2, 2) is usually equal to four, but sometimes it might turn out to be three, five, or even 1004. The dependencies of the Calculator object might change the result of the computation in subtle but profound ways. Such issues become even more apparent when concurrency is involved.
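To make this concrete, here is a minimal JavaScript sketch. The Calculator and RoundingConfig names are hypothetical, purely illustrative, but they show how a mutable dependency can quietly change the result of the same call:

```javascript
// Hypothetical sketch: a Calculator whose result depends on hidden mutable state.
class RoundingConfig {
  constructor() {
    this.offset = 0; // any other module holding this object may mutate it
  }
}

class Calculator {
  constructor(config) {
    this.config = config; // shared by reference, not copied
  }
  add(a, b) {
    return a + b + this.config.offset;
  }
}

const config = new RoundingConfig();
const calculator = new Calculator(config);
console.log(calculator.add(2, 2)); // 4, for now

// Elsewhere, far away in a large codebase, someone mutates the shared dependency:
config.offset = 1000;
console.log(calculator.add(2, 2)); // 1004: same call, different result
```

Nothing about the call site changed, yet the output did. That is the non-determinism in question.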

Shared mutable state

“I think that large object-oriented programs struggle with increasing complexity as you build this large object graph of mutable objects. You know, trying to understand and keep in your mind what will happen when you call a method and what will the side effects be.” — Rich Hickey, creator of Clojure

Mutable state is hard. Unfortunately, OOP further exacerbates the problem by sharing that mutable state by reference (rather than by value). This means that pretty much anything can change the state of a given object. The developer has to keep in mind the state of every object that the current object interacts with. This quickly hits the limitations of the human brain, since we can hold only about five items of information in our working memory at any given time. Reasoning about such a complex graph of mutable objects is an impossible task; it uses up precious and limited cognitive resources and will inevitably result in a multitude of defects.
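Here is a small JavaScript sketch of state changing “at a distance.” The Order class and applyDiscount helper are hypothetical names, but the pattern is everywhere in OOP codebases:

```javascript
// Sketch: two unrelated pieces of code share one mutable object by reference.
class Order {
  constructor(items) {
    this.items = items; // keeps a reference to the caller's array
  }
  total() {
    return this.items.reduce((sum, item) => sum + item.price, 0);
  }
}

const items = [{ name: "book", price: 20 }];
const order = new Order(items);
console.log(order.total()); // 20

// A "helper" somewhere far away receives the same reference...
function applyDiscount(list) {
  list.forEach((item) => {
    item.price = item.price / 2; // ...and mutates it in place
  });
}

applyDiscount(items);
console.log(order.total()); // 10: the order changed without being touched directly
```

No method on the order was ever called, yet its total changed. To reason about the order, you now have to reason about every piece of code that ever saw the items array.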

Yes, sharing references to mutable objects is a tradeoff made to increase efficiency, and it might have mattered a few decades ago. Hardware has advanced tremendously since then, and we should now worry more about developer efficiency than code efficiency. Even then, with modern tooling, immutability barely has any impact on performance.
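For the record, the immutable alternative needs no special library in modern JavaScript. A minimal sketch (the account shape and deposit function are my own illustrative names):

```javascript
// Sketch: updating state immutably with plain JavaScript.
const account = Object.freeze({ owner: "alice", balance: 100 });

// Instead of mutating, produce a new object describing the new state:
function deposit(acct, amount) {
  return Object.freeze({ ...acct, balance: acct.balance + amount });
}

const updated = deposit(account, 50);
console.log(updated.balance); // 150
console.log(account.balance); // 100: the original is untouched
```

Anyone holding a reference to the old account can trust that it will never change out from under them, which is exactly the guarantee shared mutable objects cannot give.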

OOP preaches that global state is the root of all evil. However, the irony is that OOP programs are mostly one large blob of global state (since everything is mutable and is shared by reference).

The Law of Demeter is not very useful — shared mutable state is still shared mutable state, no matter how you access or mutate that state. It simply sweeps the problem under the rug. Domain-Driven Design? That is a useful design methodology, and it helps a bit with the complexity. However, it still does nothing to address the fundamental issue of non-determinism.

Signal-to-noise ratio

Many people in the past have been concerned with the complexity introduced by the non-determinism of OOP programs. They’ve come up with a multitude of design patterns in an attempt to address such issues. Unfortunately, this only further sweeps the fundamental problem under the rug and introduces even more unwarranted complexity.

As I said earlier, the code itself is the biggest source of complexity; less code is always better than more code. OOP programs typically carry around a large amount of boilerplate code and “band-aids” in the form of design patterns, which adversely affect the signal-to-noise ratio. The code becomes more verbose, and the original intent of the program becomes even harder to see. This has the unfortunate consequence of making the codebase significantly more complex, which, in turn, makes it less reliable.

I’m not going to dive too deep into the drawbacks of using Object-Oriented Programming in this article. Even though there probably are millions of people who swear by it, I’m a strong believer that modern OOP is one of the biggest sources of complexity in software. Yes, there are successful projects built with OOP, however, this doesn’t mean that such projects do not suffer from unwarranted complexity.

OOP in JavaScript is an especially bad idea, since the language lacks things like static type checking, generics, and interfaces. The this keyword in JavaScript is also rather unreliable.

The complexity of OOP surely is a nice exercise for the brain. However, if our goal is to write reliable software, then we should strive to reduce complexity, which ideally means avoiding OOP. If you’re interested in learning more, make sure to check out my other article Object-Oriented Programming — The Trillion Dollar Disaster.