Let's talk about Object-Oriented Programming

Things you should know about OOP before you make a move

Recently I came across the “Goodbye, Object Oriented Programming” post, in which the author expresses his disappointment with OOP. As a Ruby and JS programmer who loves working in both OO and FP styles, I was really excited to learn something new.

Unfortunately, as soon as I started reading the article, I realised it contained quite a few mistakes. A lot of the responses did address them, but they felt a bit defensive.

My intention here is not to defend OOP, and definitely not to start a religious war. I won’t even compare OOP with FP. The goal of this post is simply to show that (almost) all of the arguments in the original post stem from misconceptions or an incomplete understanding of OO principles.

NOTE: If you have not read the original article, PLEASE read it first, as this post won’t make much sense without it. This post is written as a response to it and follows the same structure: each section here responds to the corresponding section of the original article. Quotes like the following

are direct quotes from the original article by its author

Most sections contain these quotes to anchor a response.

NOTE: To the author of the original post (Charles Scalfani): I would like to apologise in advance if the tone of this post feels a bit harsh. I do hope you will read the whole post and provide your valuable feedback. I consider this a discussion that allows all of us to become better software writers.

Let's start:

Inheritance:

The problem with inheritance is that it’s the most misunderstood concept in OOP. All of the problems discussed in this section of the original article come from an incomplete understanding of inheritance. So first, let’s understand it a bit.

Inheritance was invented for Simula in 1967. The Wikipedia page for inheritance gives this definition:

“It is a mechanism for code reuse and to allow independent extensions of the original software via public classes and interfaces.”

The key phrase in there is “code reuse”. That seems to be the primary motivation for using inheritance. Unfortunately, in a lot of languages today, it’s the wrong motivation. WHY? Keep reading…

There is a concept similar to, but distinct from, inheritance known as “subtyping”:

Subtyping enables a given type to be substituted for another type or abstraction, and is said to establish an is-a relationship between the subtype and some existing abstraction.

The keyword here is “substitutability”. This ability to substitute one type for another is very important, as we will see later. The important fact about subtyping is that we should follow behavioural subtyping, which is captured by the Liskov Substitution Principle (LSP), introduced by Turing Award winner Barbara Liskov.

In a lot of languages, inheritance is implemented as a subtyping mechanism (e.g. Java and C#), but in some languages it’s not; those languages decouple inheritance from subtyping (e.g. Go).

In languages where inheritance couples code reuse and subtyping (Java/C#), it is important to favour subtyping, rather than code reuse, as the motivation for inheriting.

So, with this newfound knowledge of inheritance, let’s dissect the arguments in the article…

Banana Monkey Jungle Problem:

“A new project came along and I thought back to that Class that I was so fond of in my last project. No problem. Reuse to the rescue. All I gotta do is simply grab that Class from the other project and use it”

Let’s get one thing clear: the example of reuse stated here is not how the principle of reusability works. The way the author is performing code reuse can be labelled “forked reuse” (best case) or “copy-and-paste programming” (worst case). Both result in duplication, and both have their disadvantages.

Now, in order to perform code reuse, the author is extracting one single class and expecting it to work. This will never work, because that’s what a dependency is. I could say the same thing about (impure) functional languages. If I extracted a single function out of the jQuery library and expected it to work, that would be foolish of me, because that function might depend on other functions (unless it’s a pure function). The other classes/functions are dependencies, because they are part of a whole.

And inheritance does create a static, compile-time dependency, which is one of its disadvantages, and a significant consequence of using inheritance that not many programmers know about.

When OO says we can do code reuse, it does not mean we can use one class anywhere without changes. What it means by code reuse is:

A group of objects that are part of a whole and perform a service/operation as a whole, usually termed as components or modules, can be reused in other systems.

That’s not to say that the “Banana Monkey Jungle” problem is not real. What Joe Armstrong meant by that quote is:

If all we have is state free programs (pure functions), everything becomes reusable.

But do we really want to reuse every single function/class from our programs? That does not sound practical to me. And it’s not as if we are unable to perform reuse in OO. I use multiple open-source Ruby libraries that are used by many other people as well. If that’s not reuse, what is?

Classes that we do want to reuse can be refactored into a better abstraction, allowing us to reuse them. A great example comes to mind: the creation of Rails. Rails was extracted from an existing application (Basecamp) to be reused by programmers around the world who have no idea about the code in Basecamp. Built in an OO language (Ruby), this was possible because of good OO design (and the awesomeness of Ruby).

The basic principle for making classes/modules/components reusable is: “Depend on abstractions, not concrete implementations.” In Java/C#, this means using interfaces and abstract classes to reference dependencies. In languages like Ruby, which allow duck typing, everything depends on an abstraction anyway.

In OO, if we manage dependencies properly, our ability to reuse increases as well. This requires knowledge of object design principles. So, with correct OO design, we can mitigate the “Banana Monkey Jungle” problem.

Diamond Problem:

The primary motivation for inheriting from two classes is usually not substitutability but code reuse. As we established before, code reuse should not be our primary motivation for inheritance.

Also, inheriting from two classes means our class has two responsibilities, which violates the Single Responsibility Principle (SRP).

“Most OO languages do not support this, even though this seems to make logical sense. What’s so difficult about supporting this in OO languages?”

The primary reason for not supporting it is that it usually results in bad design. Instead, some languages provide a form of multiple inheritance via mixins (e.g. Ruby). A mixin is defined by the c2 wiki as:

“A mixin class is a parent class that is inherited from — but not as a means of specialisation. Typically, the mixin will export services to a child class, but no semantics will be implied about the child “being a kind of” the parent.”

Can you see the “being a kind of” part in the definition? That’s talking about subtyping (substitutability). Mixins are not coupled with subtyping, and that is why having multiple mixins in a single class is allowed. This decoupling from subtyping also allows mixins to be used with code reuse as the primary motivation.

As the Scanner, Printer, Copier example in the article uses multiple inheritance with code reuse as the motivation, this inheritance hierarchy is a bad design (as many people in the comments have pointed out).

Even the contain-and-delegate implementation is not a very good one. Instead, a better design would be to extract the responsibility of performing the operation (scanning, printing) out of Scanner and Printer into something like “ScanOperation” and “PrintOperation”. Now Scanner, Printer and Copier can hold their operations via composition.
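A sketch of that design in Java (the method bodies and exact signatures here are my own illustration, not from the original article):

```java
// Hypothetical sketch of the composition-based design described above;
// the operation classes own the behaviour, and devices are composed of them.
class ScanOperation {
    String scan(String document) { return "scanned:" + document; }
}

class PrintOperation {
    String print(String document) { return "printed:" + document; }
}

class Scanner {
    private final ScanOperation scanOp = new ScanOperation();
    String scan(String document) { return scanOp.scan(document); }
}

class Printer {
    private final PrintOperation printOp = new PrintOperation();
    String print(String document) { return printOp.print(document); }
}

// Copier reuses both operations through composition: no diamond,
// no multiple inheritance.
class Copier {
    private final ScanOperation scanOp = new ScanOperation();
    private final PrintOperation printOp = new PrintOperation();
    String copy(String document) {
        return printOp.print(scanOp.scan(document));
    }
}
```

Copier gets both behaviours without inheriting from anything, so the diamond never appears.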

Now, the above design is nowhere near perfect, but it drives home the idea of using composition instead of inheritance for code reuse.

NOTE: I don’t mean that multiple inheritance (MI) is evil and should never be used. A lot of programmers have used MI successfully. Most of the languages I have used professionally do not allow MI, so I have always used an alternative approach. But there is a consensus in the community that the use of MI should be treated as design feedback: a prompt to check whether there is a better solution.

Fragile Base Class Problem:

This is indeed an issue, but, again, it can be solved by better design. Let’s use the Decorator Pattern.
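A minimal sketch of such a decorator in Java (the class name CountingList and its methods are my own, hypothetical illustration):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Hypothetical sketch: rather than subclassing ArrayList (fragile, because
// we would need to know whether its addAll() calls add() internally), we
// decorate a List we are handed and keep the element count ourselves.
// A complete decorator would implement java.util.List and forward every
// method; only the methods needed for this example are shown.
class CountingList<E> {
    private final List<E> inner;  // the decorated list
    private int addCount = 0;

    CountingList(List<E> inner) { this.inner = inner; }

    public void add(E element) {
        addCount++;
        inner.add(element);
    }

    public void addAll(Collection<? extends E> elements) {
        addCount += elements.size();
        inner.addAll(elements);
    }

    public int getAddCount() { return addCount; }
    public int size() { return inner.size(); }
}
```

Because the wrapper delegates rather than inherits, a change in how the wrapped list implements addAll internally can never make it silently double-count.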

Here, again, we use composition instead of inheritance to provide additional functionality (an element count) around Array.

Once more, this is a design problem that arises due to incorrect use of inheritance.

The Hierarchy Problem:

“The Object Oriented Paradigm was predicated upon the real world, one filled with Objects.”

The quote above, and that whole section of the article, is about the real-world aspect of the OO paradigm. Now there is a problem with that view. I would like to quote one of my favourite OO design books, “Object Design: Roles, Responsibilities, and Collaborations”, written in 2002:

“Early object design books, including Designing Object-Oriented Software, speak of finding objects by identifying things​ (noun phrases) written about in a design specification. In hindsight, this approach seems naïve​. Today, we don’t advocate underlining nouns and simplistically modelling things in the real world. ​It’s much more complicated than that.​” — Rebecca Wirfs-Brock

The point here is that creating good programs involves much more than just modelling the real world.

Categorical Hierarchy vs Containment Hierarchy:

A categorical hierarchy is about classes; a containment hierarchy is about objects. One of the biggest problems programmers have when using OOP is that they think in terms of classes. Well, it’s not Class-Oriented Programming. Smalltalk is a perfect example of an OO implementation: it focuses on objects and the messages passed between them. That’s what OO design is about. Once we shift our focus away from inheritance and code reuse, and start focusing on how objects interact with one another to perform an operation, and on managing those interactions, we will arrive at better OO designs.

Inheritance Summary:

“Inheritance was supposed to be a huge win for Reuse.”

Indeed it was, but many languages merged it with subtyping.

This imposed new rules on inheritance, which unfortunately were never explained to (or learned by) many programmers using OOP.

So, if your language’s implementation of inheritance coincides with subtyping, don’t use inheritance for code reuse. Instead, turn to alternative mechanisms for reuse (composition, mixins).

For an even better example of how inheritance can seem to make sense but turn out to be a design mistake, please check out Sandi Metz’s talk “Nothing is Something”.

To summarise: the problems the author attributes to the Inheritance pillar are just bad designs arising from incorrect use of inheritance. Designing good programs is always a hard endeavour, whether we use OO or functional languages.

Moving on to the second pillar…

Encapsulation:

The encapsulation argument revolves around this statement:

But the passed Object is NOT safe!

This statement is true, sort of. A quote from another favourite book of mine, Growing Object-Oriented Software, Guided by Tests (GOOS) by Steve Freeman and Nat Pryce:

Objects can break encapsulation by sharing references to mutable objects, an effect known as Aliasing. Aliasing is essential for conventional OO systems (otherwise no two objects would be able to communicate), but accidental aliasing can couple unrelated parts of a system so it behaves mysteriously and is inflexible to change.
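To make the aliasing quote concrete, here is a minimal sketch (the Order class and its fields are my own, hypothetical illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Accidental aliasing: two parties share a reference to the same
// mutable list, so a change through one is visible to the other.
class Order {
    private final List<String> items;
    Order(List<String> items) { this.items = items; } // stores the shared reference
    int itemCount() { return items.size(); }
}

public class AliasingDemo {
    public static void main(String[] args) {
        List<String> items = new ArrayList<>();
        items.add("book");
        Order order = new Order(items);

        // The caller still holds the reference, so it can mutate the
        // "private" state without Order's knowledge.
        items.add("pen");
        System.out.println(order.itemCount()); // 2, not 1
    }
}
```

This is exactly the scenario the book warns about, and it only works because the shared list is mutable.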

Aliasing can only happen when the passed object is mutable. So, by using immutable objects (GOOS calls them values), we can avoid breaking encapsulation, something a lot of people pointed out in the comments on the original article. The author countered these comments with the following statement:

“If an object via Dependency Injection passes an object by reference to a constructor and that constructor puts the passed object into a private variable, the calling function can break Encapsulation since it still holds a reference to that object. This means that the calling function can mutate the now “private” object without the permission of the container class and therefore without its knowledge.”

NO IT CANNOT! (Assuming that the object is immutable)

Let me demonstrate:
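Something along these lines (the exact fields, figures and method names are my own illustration):

```java
// Employee is immutable: all fields are final and there are no setters.
final class Employee {
    private final String name;
    private final int salary;

    Employee(String name, int salary) {
        this.name = name;
        this.salary = salary;
    }

    String getName() { return name; }
    int getSalary() { return salary; }
}

class SalaryCalculator {
    private final Employee employee; // the "private" reference to the passed object

    SalaryCalculator(Employee employee) { this.employee = employee; }

    int annualSalary() { return employee.getSalary() * 12; }
}

public class Main {
    public static void main(String[] args) {
        Employee jane = new Employee("Jane", 1000);
        SalaryCalculator calculator = new SalaryCalculator(jane);

        // Reassigning jane only changes which object the local variable
        // points to; calculator still holds the original Employee.
        jane = new Employee("Jane", 9999);

        System.out.println(calculator.annualSalary()); // still 12000
    }
}
```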

In our example here, Employee is immutable, so no one can mutate the state of an Employee instance: neither the calling function (main) nor SalaryCalculator. And if we point the jane variable at a new instance, it does not mutate the private employee variable of calculator in any way.

WHY? Because Java does not pass employee BY REFERENCE; it passes THE VALUE OF THE REFERENCE to employee.

So the author’s statement:

Objects are passed to functions NOT by their value but by reference.

IS JUST WRONG. Java does not even have “pass by reference”, and if my code example above is not demonstration enough, just google “java pass by reference”.

And even if it did, that would not be an encapsulation issue, because that is a language-implementation detail. OOP cannot be held responsible for all the mistakes Java/C#/Ruby have made in their implementations, the same way FP is not responsible for mistakes in JavaScript.

Encapsulation Summary:

As we have learned, sharing references to a mutable object can break encapsulation, which is not necessarily a problem in many cases. Even when it is, it can be fixed by using immutable objects.

Moving on to the final pillar…

Polymorphism:

The problem with polymorphism is that it’s an overloaded term. Since we are talking about OO, I will assume we mean “subtype polymorphism”.

“It’s not that Polymorphism isn’t great, it’s just that you don’t need an Object Oriented language to get this.”

This is a very true statement. Heck, we can even do polymorphism in C using vtables, manually implementing the whole concept ourselves.

“So without much ado, we say goodbye to OO Polymorphism and hello to interface-based Polymorphism.”

I have to be honest here: I don’t exactly understand what the author means by interface-based polymorphism. I assume it is one of two things:

1. Duck-typed polymorphism, which depends only on an object’s interface and not on a type hierarchy
2. Java interfaces, instead of classes

both of which are language implementation details.

As I have not properly understood the argument here, I will just try to explain some points about polymorphism.

Subtype polymorphism depends on the concept of subtyping (not inheritance). So we can perform polymorphism with both classes and interfaces in Java. There is also a very important concept called the Dependency Inversion Principle (DIP); don’t confuse it with Dependency Injection. It states:

“Abstractions should not depend on details. Details should depend on abstractions.”

DIP allows us to decouple modules, and polymorphism plays a very important part here. Let’s understand it with an example:
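A minimal sketch of what I mean, in Java (the draw/render method bodies are my own illustration):

```java
import java.util.List;

// Graphics depends only on the IShape abstraction (DIP): new shapes can
// be added without modifying Graphics at all.
interface IShape {
    String draw();
}

class Rectangle implements IShape {
    public String draw() { return "rectangle"; }
}

class Triangle implements IShape {
    public String draw() { return "triangle"; }
}

class Graphics {
    // At runtime, shape refers to some concrete implementation of IShape.
    String render(List<IShape> shapes) {
        StringBuilder out = new StringBuilder();
        for (IShape shape : shapes) {
            out.append(shape.draw()).append(" ");
        }
        return out.toString().trim();
    }
}
```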

We can deploy Graphics, IShape, Rectangle and Triangle as a package (a .jar), and anybody can create new shapes (Circle, Rhombus, etc.) that the Graphics program will still render without requiring any code modification. This power comes from the fact that Graphics depends on an abstraction (IShape) instead of any specific concrete implementation of IShape; the shape variable will refer to concrete implementations of IShape (Rectangle, Triangle, Circle, etc.) at runtime.

That was Java. In Ruby, variables don’t have types, which allows duck typing. This means that, in concept, all method calls are polymorphic. The point here is that polymorphism implementations vary across OO languages.

Polymorphism Summary:

Polymorphism is a very important tool in OOP (definitely much more important than inheritance). And polymorphism can be implemented in any language, irrespective of paradigm.