In "Stop Preventing the Future!", I promised to talk about sane deprecation policies. I want to digress for this entry to give one more explanation of why keeping up with the future is important.

I believe that software under active maintenance should get easier to work with over time.

I realize that that statement contradicts the direct experience of many, if not most, software projects. We use the term "legacy software" to imply something old, crufty, broken, and difficult to maintain. (When I use the term "legacy software", I mean "software without a future" or "software not receiving maintenance". The difference is subtle. Perhaps I should post about that, too.)

One of the persistent problems of project management is predicting the costs of change. One of the persistent temptations of Big Planning Up Front design methods comes from the idea that change is expensive, and it gets more expensive over the lifetime of a project.

While I agree that this rule holds for many projects, I believe it's a symptom of other problems, not their source. (I also have trouble taking seriously any project without systematic automated testing and regular refactoring. Do you people even care about your source code?)

I can't count how many bugs lurk in Perl 5 programs because the return value of system() is a C-style status -- zero on success -- not a Perlish true-on-success value. Should someone have designed that API correctly from the start? Probably -- but that didn't happen. If someone had changed it in Perl 5.6 in 2000, that change could have prevented a decade of those bugs. Would it have been painful? Probably.
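To see the trap concretely, here's a sketch using the shell's true command as a stand-in for any external command (this assumes a Unix-like system where true exists and exits with status zero):

```perl
use strict;
use warnings;

# BUGGY: system() returns the exit status, which is 0 on success,
# so this branch runs only when the command *fails* -- backwards
# from what a reader expecting a Perlish boolean assumes.
if ( system( 'true' ) ) {
    warn "this runs only when the command fails\n";
}

# Correct: compare the status against zero explicitly.
my $status = system( 'true' );
if ( $status == 0 ) {
    print "command succeeded\n";
}
else {
    warn "command failed with status $status\n";
}
```

The C convention makes sense in C, where zero means "no error", but it inverts the truthiness every other Perl builtin trains you to expect.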

Is the pain of a single, well-informed change greater than the pain of uncountable multitudes of bugs? I doubt it.

Consider a more positive example. Perl 5.10 changed a diagnostic message such that when you use an undefined value in a concatenation or interpolation, Perl reports the name of the variable containing undef. This is a tremendous benefit to debugging -- but it changes the format of a warning on which existing code may have relied.
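A minimal demonstration of the difference (the exact file and line information in the warning will vary):

```perl
use strict;
use warnings;

my $name;    # declared but never assigned, so it holds undef

# Perl 5.8 warns:  Use of uninitialized value in concatenation (.) or string
# Perl 5.10 warns: Use of uninitialized value $name in concatenation (.) or string
my $greeting = "Hello, $name!";
```

With a dozen variables in scope, the older message leaves you hunting; the newer one points straight at the culprit.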

In this case, Perl 5.10 is easier to work with, because a common warning is much, much easier to debug. It's a small change, but it's the kind of small change you quickly grow to rely on, similar to the strict pragma telling you that you've made a typo in a variable name. Sure, it only helps prevent silly little bugs, but the less time I spend chasing silly little bugs, the more time I have to solve real problems.
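The strict comparison holds up in miniature -- here's the kind of typo it catches at compile time (the variable names are invented for illustration):

```perl
use strict;

my $total = 10;

# Typo: digit 1 instead of letter l. Under strict, compilation
# aborts with: Global symbol "$tota1" requires explicit package name
# Without strict, Perl would silently use a fresh, empty variable.
print "Total: $tota1\n";
```

Without strict, this program prints "Total: " and nothing else, and you get to discover why at runtime, somewhere else entirely.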

No one knows how much DarkPAN code parsed the text of the old warning. Maybe none. Maybe thousands of programs. Changing all of them may be a daunting task. Maybe it's worth it. Maybe it's not. Saving a few seconds of debugging time for a million Perl programmers is definitely an improvement.

That's what confuses me about the reluctance to make other, larger improvements. The design choices of Perl 5.000 (released on October 17, 1994) are sunk costs. We can't go back in time and fix them. We can only fix them in modern versions of Perl, such as 5.10.1 and newer. The question is whether we should.

In my mind, that's not even a question. If Perl isn't getting easier to use or generally better over time, why bother to release new versions? Why maintain it?

Yes, change can be painful... but keeping up with modest changes on a predictable schedule makes it tolerable. The number of changes you can make to a project you maintain is limited by the amount of work you can do in a day anyway. Why not keep up with the present, and stop preventing the future?