Software is like fruit. It tastes great when it’s fresh, but goes bad very quickly. In fact, it is ridiculous how quickly software rots.

How many times have you written a piece of software which works perfectly fine for months, only for it to mysteriously break one day? The reasons can be even more confounding. Perhaps a dependency was updated, and now you need to rewrite half your code. Or perhaps an API was changed in some irritating way which requires you to completely re-architect your code. More often than not, however, it is something obscure, like an environment default that was changed by something your package manager installed. I don’t know about you, but in some instances the bug will be in a bit of code that I didn’t even change, and after I fix it, I am left wondering how it ever worked in the first place!

Part of the problem is that a typical piece of software is dependent on so many other things. Dozens of libraries. A particular bugfix release of your chosen interpreter or compiler. The presence of specific versions of shared libraries. Kernel extensions. Specific versions of specific operating systems, and specific hardware.

This situation is so bad that I am surprised when something written a year ago still works today, let alone something written five years ago, or even older. If you wanted to run the software I was writing in the 90s, you’d need access to an old Macintosh Performa (or a decent VM) and a copy of Mac OS 8. And even then, you’d need to find a copy of Netscape Navigator 4.0 or Internet Explorer 4.0 for Macintosh. In fact, I found a folder of some of my old DHTML experiments from around 1998, and I tried to run them in a modern browser. Of course, nothing worked.

This is called digital obsolescence, and it’s a big problem. Most of it is due to the horrifyingly complex stack of technology that keeps being reshuffled and added to. But some of it is purposefully baked in. That practice is known as planned obsolescence, and it’s been going on since the 50s. Businesses realised that if the things they were making eventually broke, they could sell more of them. So that’s exactly what they do now: they purposefully design technology to break after a few years, so that you have to buy more of it.

But in the world of open source, the biggest source of obsolescence is maintainer atrophy. Without active maintenance, an open source project is going to rot and become useless. In fact, this is such a big concern for me that when I am evaluating new software, I judge it on two criteria. Firstly, how actively maintained is it? And secondly, how well does it do what I need?

If a project does what I need today, but has been abandoned, I cannot be sure that it will continue to do what I need. Perhaps what I need will change tomorrow. Or perhaps the software will break in some way.

But if I find a project that seems close enough, and is actively maintained, there is a good chance it is going to attract more users, more contributions, and more documentation. This means over time I can expect more support, fewer bugs, and more features. Perhaps it doesn’t do exactly what I need today, but maybe it will tomorrow? With an active community of people around a project, anything is possible.

So I would say this: if you’re working on an open source project, think carefully about where you want to invest your time. Adding features, or adding people?

You can add a bunch of features to a project, but unless you are prepared to maintain them, they are going to be useless in twelve months, or perhaps six. And that’s a waste of effort. But if you work to add an additional person to your project, you now have two people working on it, which means more will get done, and there is a slightly higher chance your software will still work in another year or two. What’s more, your project stands a chance of outliving your contributions to it.

What is more important for your project: features or a future? Pick one, and prioritise your efforts accordingly.