You can read this article on my own blog, if you prefer that to Medium.

Like many people, I have a sort of pet programming paradigm, which I enjoy adapting to the various codebases and languages I work with.

I wanted to standardize it a bit and share it with the world. It was going to be called something along the lines of “explicitness-focused minimal-shared-state programming” (I’m not good at names). I was thinking about how to prove its worth, both to the world and to myself, when the applied statistician in me kicked in and thought: “Hey, I could try running a small experiment”.

So I did: I wrote two pieces of code doing the same thing, a rather simple task. One was functional, the other was written in this pet paradigm of mine. I made a website that would serve participants one or the other piece of code and ask them to explain (in a few paragraphs) what the code did.

The code was simple enough that anyone who can read code would have been able to figure out what it does after a bit of head scratching. Indeed, after manually checking about 20% of the answers (randomly selected), none of them were wrong.

But the answers themselves were not the data I actually cared about. Instead, what I wanted to measure was how long people took to come up with an answer and how many gave up after reading the task and wrote nothing.
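For the curious, the mechanics were roughly along these lines (a simplified Flask sketch with placeholder snippet contents, route names and in-memory storage, not the code that actually ran the study):

```python
# Toy sketch of the experiment site: randomly assign a participant to one
# of two code snippets, then record how long they take to submit an
# explanation. All names and the snippet contents are placeholders.
import random
import time
from flask import Flask, request, session

app = Flask(__name__)
app.secret_key = "replace-me"  # needed for per-participant sessions

SNIPPETS = {
    "functional": "<the functional version of the code>",
    "pet_paradigm": "<the same code, written in the pet paradigm>",
}
results = []  # a real study would persist this to a database


@app.route("/task")
def task():
    # Randomly assign the participant to one of the two snippets
    # and remember when they started reading it.
    variant = random.choice(list(SNIPPETS))
    session["variant"] = variant
    session["started_at"] = time.time()
    return SNIPPETS[variant]


@app.route("/answer", methods=["POST"])
def answer():
    explanation = request.form.get("explanation", "")
    results.append({
        "variant": session.get("variant"),
        "seconds_taken": time.time() - session.get("started_at", time.time()),
        "gave_up": explanation.strip() == "",
        "explanation": explanation,
    })
    return "Thanks!"
```

The time between serving the snippet and receiving the explanation is the measurement; an empty explanation counts as a give-up.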

My thinking went along the lines of: “If a programming style is truly easy to read, a programmer should have an easier time translating it into abstract concepts inside their brain and subsequently into words”. So, you can partially assess the difficulty of a programming style by seeing how quickly someone can understand a “simple” piece of code, rather than by seeing whether they can understand a large codebase (which comes with a host of issues).

To my surprise and disappointment, the study failed to support my hypothesis. Across a sample of a few hundred answers, there was no significant difference in how quickly people explained the code written in my pet paradigm versus the code written in a purely functional style (after removing some questionable data and z-score > 5 outliers).
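The comparison itself boils down to something like the sketch below. The numbers are made-up placeholders and the choice of Welch’s t-test is just one reasonable option; the point is only to show the shape of the analysis (z-score filtering, then a two-sample test):

```python
# Minimal sketch of the analysis: drop extreme outliers by z-score, then
# compare mean completion times between the two groups. The timing values
# below are hypothetical examples, not the study's data.
import numpy as np
from scipy import stats


def drop_outliers(times, z_cutoff=5.0):
    """Remove observations whose absolute z-score exceeds the cutoff."""
    times = np.asarray(times, dtype=float)
    z = np.abs((times - times.mean()) / times.std(ddof=1))
    return times[z < z_cutoff]


# Hypothetical completion times (in seconds) for the two snippets.
pet_paradigm_times = drop_outliers([142.0, 98.5, 210.3, 175.0, 120.9])
functional_times = drop_outliers([133.2, 104.7, 199.8, 181.4, 126.5])

# Welch's t-test: does mean comprehension time differ between the groups?
t_stat, p_value = stats.ttest_ind(pet_paradigm_times, functional_times,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Since timing data tends to be skewed, a non-parametric test such as Mann-Whitney U (scipy.stats.mannwhitneyu) would be an equally reasonable choice here.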

So my plans for fleshing out and popularizing this new paradigm were foiled by my own poor attempt at statistical psychology.

However, this got me thinking about how we “validate” the programming paradigms we use.

In searching for reasons why and where certain styles are preferable to others, one turns up a nearly endless supply of blog posts, Reddit arguments, whitepapers, talks, anecdotes, etc.

Yet, there are no actual studies to back any of this up.

A lot of what we do to back up a certain paradigm is waxing philosophical about it.

Instead, what we should be doing is looking at its benefits through the critical lens of “science” (read: experiments).

A (thought) experiment

Front-end development is an in-demand field right now. Let’s assume the two “leading” paradigms in terms of tooling, style and syntax are Angular with code written in TypeScript, and React + Redux with code written in ES6.

I’m not saying they are the two leading styles, since I’m not that up to date with browser technologies. But, for the sake of argument, please assume they are.

There’s no quantifiable difference between these two stacks of tools. Of course, I can already hear a chorus of people screaming that there’s all the difference in the world:

Blah result in better isolation which can improve modularity.

Bleh leads to cleaner interfaces, which makes the code easier to read.

Blax helps us get more uniform performance across browsers leading to a more predictable UX.

Blerg means that state is immutable from the perspective of X, which makes modifications to Y easier.

Blif will help new people learn “our way” of doing things faster than they’d learn the “other way” of doing things.

… etc, etc, etc.

As I said before, you can wax lyrical to no end about why your paradigm is better and the opposing one is the root of all evil.