Bitcoin is no magic. It sacrifices all manner of efficiency, going against our intuition and “best practices”, in order to give us something special.

Bitcoin is specifically inefficient in two dimensions:

It mandates that the rate of block production be slow

It uses broadcast communication
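The first constraint is not accidental: Bitcoin's consensus rules actively fight speed. Below is a minimal Python sketch of the difficulty retarget, under simplifying assumptions (the real rules operate on the compact "bits" target encoding, but the clamped proportional adjustment is the same idea):

```python
# Sketch of Bitcoin's difficulty retargeting (simplified; the real rules
# work on the compact "bits" target encoding, but the idea is the same).
TARGET_BLOCK_TIME = 600      # seconds: one block every ~10 minutes
RETARGET_INTERVAL = 2016     # blocks between adjustments (~2 weeks)

def retarget(old_difficulty, actual_timespan):
    """Scale difficulty so the next 2016 blocks take ~2 weeks again."""
    expected = TARGET_BLOCK_TIME * RETARGET_INTERVAL
    # Bitcoin clamps the adjustment to a factor of 4 either way,
    # limiting how fast difficulty can swing per retarget.
    actual = max(expected // 4, min(actual_timespan, expected * 4))
    return old_difficulty * expected / actual

# If miners got faster and produced 2016 blocks in only one week,
# difficulty doubles to slow them back down to ~10 minutes per block:
print(retarget(1.0, 7 * 24 * 3600))  # → 2.0
```

No matter how much hashpower joins the network, blocks stay slow by design.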

To highlight how counter-intuitive this is, ask yourself: how often do you:

Purposely make a job slower, even if you figured out a way to make it faster?

Tell everyone you know about every single thing that you did, every minute of the day?

To do this in a network setting is even more insane. Not only are you slow, everyone else must be slow too. Not only do you scream at everybody, everybody screams at everybody else.

Furthermore, this network has hundreds of thousands of members. If you have a giant insane asylum in mind, you have the right mental image.

Doing things Bitcoin’s way is literally insane, in most contexts.

It turns out that being maximally inefficient has its advantages. By intentionally forcing things to be slow, Bitcoin makes it costly to cheat. By using broadcast communication, it minimizes the need to trust individual members (or maximizes fault tolerance, in computer-science terms).
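The "slow makes cheating costly" point rests on the asymmetry of proof-of-work: finding a valid nonce takes many hashes, while checking one takes a single hash. A minimal sketch (the header bytes and difficulty here are hypothetical, not real consensus parameters):

```python
import hashlib

# Minimal proof-of-work sketch: finding a nonce is expensive, but
# verifying it is a single hash. That asymmetry is what makes
# cheating (rewriting history) costly: an attacker must redo the work.
def mine(header: bytes, difficulty_bits: int) -> int:
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

# Mining: on average ~2**16 hash attempts for 16 difficulty bits.
nonce = mine(b"block-header", difficulty_bits=16)

# Verification: one hash, done by every node in the network.
h = hashlib.sha256(b"block-header" + nonce.to_bytes(8, "big")).digest()
assert int.from_bytes(h, "big") < 2 ** (256 - 16)
```

Raising the difficulty raises the cost of producing blocks, honest or dishonest alike, while verification stays cheap for everyone.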

By doing both, slow blocks and broadcast communication, Bitcoin solves the Byzantine Generals’ Problem, a huge breakthrough in computer science.

But doing things Bitcoin’s way comes at a heavy cost. It walks a thin line between brilliance and uselessness. Blockchain systems work well only as long as the data flowing through them grows at a manageable rate.

A data growth rate that is anything but linear is unsustainable, and a certain death sentence. Non-linear data growth will quickly kill the individual nodes one by one, and inevitably revert the system to a less trust-minimized model.
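To put a number on "manageable": with a hard cap on block size, worst-case chain growth is a known constant. A back-of-envelope sketch (the 1 MB cap and 144 blocks/day are simplifications for illustration, not exact consensus rules):

```python
# Back-of-envelope: a hard block-size cap makes chain growth linear
# and predictable. Illustrative figures, not exact consensus rules.
MAX_BLOCK_BYTES = 1_000_000   # ~1 MB cap (pre-SegWit figure, simplified)
BLOCKS_PER_DAY = 144          # one block per ~10 minutes

worst_case_gb_per_year = MAX_BLOCK_BYTES * BLOCKS_PER_DAY * 365 / 1e9
print(f"{worst_case_gb_per_year:.2f} GB/year")  # → 52.56 GB/year
```

A node operator can budget disk for decades in advance. Per-account state that grows with usage offers no such bound.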

As blockchain systems are already maximally inefficient, there is little to fall back on if the data grows too quickly. Blockchain systems, as they are, tread on very thin ice.

So when it comes to blockchain data, you need to be ruthlessly efficient. This is to compensate for being maximally inefficient in areas mentioned above.

This is precisely why Ethereum’s architecture of “rich statefulness” is such a bad idea. Ethereum states are needed purely for computation purposes, but they grow at an unmanageable rate.

Ethereum’s design decisions are even more questionable when the reasons for embracing rich states at the core layer are vague and dubious.

To simulate Turing-completeness? There can be no real Turing-completeness on a blockchain, as all programs must somehow be halted. So “Turing-complete” is a total gimmick. Vitalik himself admitted to this.
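The forced halting works through gas metering: every step of execution burns gas, so even an "infinite" loop terminates. A toy sketch (all names and numbers here are made up for illustration; the real EVM charges different gas per opcode):

```python
# Toy illustration: "Turing-complete" contracts must still halt,
# because every execution step burns gas. Everything here is a
# made-up sketch, not the real EVM cost schedule.
class OutOfGas(Exception):
    pass

def run_with_gas(steps_needed, gas_limit, gas_per_step=1):
    gas = gas_limit
    for step in range(steps_needed):
        gas -= gas_per_step
        if gas < 0:
            raise OutOfGas(f"halted at step {step}")
    return "finished"

print(run_with_gas(10, gas_limit=100))     # a small program finishes
try:
    run_with_gas(10**9, gas_limit=100)     # a runaway loop is forcibly halted
except OutOfGas as e:
    print(e)
```

A machine that is guaranteed to halt is, by definition, not Turing-complete; it is a bounded computer wearing the label.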

To make programmable smart contracts easier to write? Ease of use is the least of your worries when it comes to blockchain engineering. Backward priorities. Remember, with blockchains you’re already treading on thin ice, even without adding rich states.

So why? To support computations otherwise not possible with Bitcoin-style scripting alone? Not really. Any computations that can be done with Ethereum smart contracts can be done on Bitcoin, just at a higher layer.

And this is the crux of the problem. Ethereum is solving problems at the wrong layer and by doing so, needlessly bloating its core design.

Kicking the can down the road and hoping difficult problems resolve themselves is not a solution either. Sharding is not the solution: it implies scaling down broadcast communication, which in the context of blockchains is a feature, not a bug!

Pinning all hope on sharding as the magical cure-all typifies Ethereum’s attitude towards engineering: Hopium.

Ethereum’s problems are even more serious when you consider that Bitcoin, despite being extremely conservative about the type and growth of the data it handles, still has a very real chance of failure. IMHO, Bitcoin is very much still an experiment.

If you’ve read my recent article on Bitcoin’s incentive scheme, you’ll notice that I left a few questions open-ended, questions I still don’t know the answers to. I’m optimistic about Bitcoin, but cautiously so.

To recap: Bitcoin is already stretching things to the limit to get something useful, and despite this its success is not guaranteed. Ethereum stretches things much further, without good justification. Ethereum’s architecture is flawed from the get-go for this reason.

A few more words on the engineering side of things. The story of Ethereum is actually not uncommon. We’ve seen this movie before:

RISC vs. CISC in the 80s

Linux vs. Windows in the 90s

In these episodes we learned that hardware and software work best when built as modular, simple layers, exemplified by RISC and Linux (another example is TCP/IP). Such systems tend to be more flexible, more elegant, and better able to adapt to changing environments and use cases.

This insight is immortalized in the Unix design philosophy: “Flexibility, simplicity, and freedom are the foremost considerations”.

The Unix design philosophy also gets a vote of confidence from nature. Ant colonies exhibit emergent intelligence even though individually the ants are dumb and highly specialized. Similarly, our brain is composed of simple neurons that individually perform simple tasks.

Ethereum’s kitchen-sink approach at the core layer resembles the idea of a complex instruction set, or of building software out of big, complex components instead of smaller, more specialized ones. Complexity for complexity’s sake, not through emergence.

In summary, Ethereum makes questionable design decisions without strong justifications for them. We’ve seen engineering mistakes like Ethereum’s before. My guess is that Ethereum will eventually become another what-not-to-do lesson in the history books.

Notes

I use “blockchain systems” to refer to Bitcoin-like blockchains that are based on Proof-of-Work.

Additional reading: Gregory Maxwell explained verification-not-computation here. Below is a snippet.

Bob McElrath also described the problem here.