You can read this article on my own blog, if you prefer that to Medium.

If I can point to a singular idea that kills more products than any other, it’s future proofing.

Future proofing software comes in many flavors; however, most cases follow the same pattern.

We need {X}, despite {Y} being a much easier alternative, because when {Z}, it will make our lives easier.

Where {Z} is an event that might happen sometime in the far future.

Here are some examples:

- We need to use a Kubernetes & Docker based solution for our infrastructure, despite a single large server being a much easier alternative, because when we need to scale it up to 11, it will make our lives easier.

- We need a distributed design for our data processing, despite a centralized solution being a much easier alternative, because when a customer demands five 9s of uptime in the SLA, it will make our lives easier.

- We need to hire a team of developers and build in-house software, despite WordPress and Shopify being much easier alternatives, because when our customer base grows to 100 times what it is now, it will make our lives easier.

- We need to use an inheritance based design for our types, despite composition being a much easier alternative, because after 5 years of codebase growth, it will make our lives easier.

- We need to write this code in C++ and have a materialized views based caching layer, despite a Python script that queries Postgres directly being a much easier alternative, because when our data volume increases by a lot, it will make our lives easier.
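The inheritance-versus-composition case lends itself to a concrete sketch. Here is a minimal, hypothetical Python illustration (none of these classes come from a real codebase): with composition, the variable behavior is just a value you pass in, so it is easy to swap when the future actually arrives, rather than a hierarchy built up front for subclasses that may never materialize.

```python
# Hypothetical illustration only; the class names are made up for this sketch.

# Inheritance-based "future proofing": an abstract base created in advance
# for subclasses that may never be needed.
class Exporter:
    def export(self, data):
        raise NotImplementedError

class CsvExporter(Exporter):
    def export(self, data):
        return ",".join(str(x) for x in data)

# Composition: Report doesn't care where export() comes from, so the
# behavior can be replaced without touching any class hierarchy.
class Report:
    def __init__(self, exporter):
        self.exporter = exporter  # any object with an export(data) method

    def save(self, data):
        return self.exporter.export(data)

report = Report(CsvExporter())
print(report.save([1, 2, 3]))  # prints "1,2,3"
```

The composed version stays flexible precisely because it commits to less: changing the export format later means passing a different object, not redesigning a type tree.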

A while ago I wrote an article about imaginary problems: things that people solve to keep themselves entertained, rather than to add value.

Future proofing usually falls into this category. I’d go as far as to say that future proofing is the favorite imaginary problem of most small companies.

But it’s worth further discussion, because future proofing can actually help a product and it can be done correctly. Yet most people are doing it wrong and harming their work in the process.

Achieving success is harder than living with it

As outrageous as it may sound, I think celebrity culture is one of the leading causes of bad future proofing.

People have this strange obsession with trying to place themselves in the shoes of someone much more successful. They then make plans from that perspective, rather than thinking within their own means.

Everyone fantasizes about what they would do with some sort of amazing power they don’t have. Be it the power to rule a nation, be a billionaire, be famous, be a virtuoso, or fly around punching people with super-human strength.

The problem with software developers is that they get to act out those fantasies too much. Software is a very egalitarian medium. You don’t need to be Facebook to build a social-media platform that can support “Facebook scale”… but you’d be wasting your time building it. Facebook’s magic was being able to acquire billions of users; scaling the system was the easy part.

The problem here is twofold:

a) Achieving growth is much harder than supporting it

b) Most exceptional and popular engineers work on products that have to scale

Point a) is kind of obvious when you think about it. Out of all the software-centric companies that reached revenue in the billions or a userbase in the millions, how many have failed? Maybe 80%… if we are to define “failure” in a very strict way.

Yet, out of all the software-centric companies ever created, maybe 0.05% ever reached that kind of scale.

The problem with future proofing is that it’s usually meant to help in the scenarios a company or product won’t get to. Be that scenario having 1,000 team members, 10,000,000 users or 10 big budget clients with draconian requirements and an eye for perfection.

And saying NO to future proofing is hard, since it breaks everyone’s fantasy of success. It disrupts people from imagining taking on Amazon, it brings them back to thinking about the present. But presently you have 50 customers and 30 of those are family and friends, which is a rather discouraging state of affairs to think about.

Point b) doesn’t help with this delusion. It’s only natural that the best software engineers will work top jobs in top companies. Either because they helped create them or because they are getting paid millions to maintain them.

The Pareto principle works against us here, since it’s the top software engineers that are writing most of the books, giving most of the talks and writing most of the whitepapers.

Everyone hears this constant chatter about running services distributed on thousands of machines, handling petabytes of data, fighting for every single decimal of performance.

But most of us won’t have to think on the kind of scale or perfection that Facebook (the social media website, not the company) or Google (the search engine, not the evil cabal) require.

So if closing your eyes and imagining your company making it big 5 years from now isn’t helping, should we just not future proof?

No, of course not, thinking about the future is important. Designing for the future is important, but we have to do it in a better way.

Design for flexibility, create the imperfect

When it comes to thinking about the future, less is often more.

Whilst a select few products actually go on to fulfill the exact needs they were envisioned for, most of them have to adjust on their way to success.

There’s hardly ever a match made in Heaven, where you provide A and 90% of your customers need A. Usually, you will provide A and 90% of your customers will need Z. But A is the closest alternative to Z, and nobody is providing Z… so some customers decide to settle.

The nice thing about having customers settle for your product is that you can then modify it to suit their exact needs. Essentially, your customers help you spot a gap in the market. Once you get better at filling that gap, you will experience growth.

This is a productive paradigm of thinking, because it encourages a “less is more” approach to future proofing. Preparing for the future doesn’t involve tacking on complexity, but rather removing as much of it as possible. Making yourself adaptable.

The simpler your codebase is, the easier it is to adjust to fulfill a different purpose.

“I hate code, and I want as little of it as possible in our product.” — Jack Diederich

If you’ve designed something to work perfectly, you’ve made some sacrifices along the way. Those sacrifices are usually around flexibility.

Often enough, it’s the imperfect software that goes on to solve the world’s problems, since the imperfect is more flexible. Being imperfect, by definition, leaves some room for improvement.

Design optimistically, the future may pleasantly surprise you

Another important thing to remember is that the world around your project is not static.

The challenges that may pop up next year will have next year’s technology to solve them.

A lot of people aren’t just designing without thinking about future tools; they’re designing around tools that are decades old, limiting themselves with constraints that are long gone.

Let me harp on a particular issue here to better explain this point: designing distributed software in order to have enough computing power at your disposal.

One of the common reasons people give for designing distributed software is that one machine won’t be able to scale up to the specs they want.

Whilst that is true in some situations, I find it hard to believe for most projects, especially at startups that barely have any clients.

I think part of the reason is that most people building software in 2018 still think of the servers of 2005.

Computers are greatly improving every year, and there are plenty of providers out there selling cheap dedicated servers.

Let me describe a low-end large server to you:

- Two Xeon E5-2680 v4 CPUs (28 cores & 56 threads in total, clocking at 2.4GHz to 3.3GHz)

- 512 gigabytes of DDR4-2400 RAM

- Two NVMe SSDs of 1.2TB each (~3GB/s read and ~1.5GB/s write each)

I would bet that much of the world’s distributed computing software has workload requirements that, tallied up, amount to less than half of what that amateurish server can pull off.

The kind of server I described above costs ~$800 to $1,300/month depending on location. You could literally get 10 of these for the wage of an experienced DevOps engineer in London.
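As a quick back-of-envelope check of that comparison, here is the annual cost of ten such machines at the upper end of the quoted price range (the London salary figure is the article’s own claim and isn’t computed here):

```python
# Annual cost of running 10 of the servers described above,
# at the upper end of the quoted ~$800-$1,300/month range.
server_monthly_usd = 1_300
annual_cost_10_servers = 10 * server_monthly_usd * 12
print(annual_cost_10_servers)  # prints 156000
```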

The amazing thing about this server is that it will only be half as expensive in 2 or 3 years.

The wonderful thing with computers is that they keep getting better and they will continue to do so in a linear way until the late 2020s. At that point, who knows what new inventions will come to the scene. We may even see the open source hardware revolution take place by then.

Yet people keep designing software for the hardware specifications and pricing of the early 2000s, when really you should be designing your 2018 software for the machines of 2019.

But this applies to more than servers. If you want to think about the future, think about all the peripherals that will come along. I’m pretty sure the guys who built a voice frontend for their product in 2016 are quite happy in 2018.

What is the peripheral to design for in 2018? Fuck if I know. I do, however, know that it’s one that’s yet to become popular. One that will help you have a monopoly when it gets big, because you future proofed your software for it.

And this goes beyond hardware; the advances in software are absolutely amazing. Web browsers are becoming universal VMs due to the advent of WASM. In 2 years’ time, you will be able to build a high-performance app by compiling it for exactly one target: WebAssembly.

But, in spite of that, people still design for the home computers of 2012. They use Babel, despite the fact that 99%+ of their users have ES6-capable browsers.

New languages are popping up everywhere, and some are actually very good. In just the last 8 years we’ve had Go, Rust, Scala and D come along, completely changing the playing field for systems programming. In the next 2 years I predict that Julia might lead a similar revolution in scientific computing… And those are only the fields that I personally care about; the sum total of upcoming amazing stuff is incredible.

But I digress…

It’s easy to get excited about the future. But, quite frankly, nobody knows what will come in 1 or 2 or 5 years from now. There are some collective and personal ideas, but they are naturally not perfect.

Still, if you actually want to “future proof” software, you first need to understand what the present can offer. Furthermore, it usually helps to make some conservative, data-based estimates about what the future will hold in store.

Future proofing software for 2020, whilst in the zeitgeist of the early 2000s, will not help you even in the best of scenarios.

So, don’t stop future proofing your software

Just start doing it the right way.

Design not only with the future of your product in mind, but with the future of all its surrounding ecosystems.

Design for flexibility, not for perfection. Flexibility is what ultimately helps you adapt your software for the future. It allows you to more easily take on the real challenges that come up, rather than protecting you from imaginary ones.