Corrections and useful facts for you. Windows NT was not a rewrite of Windows 95; in fact it was released in 1993, well before Windows 95. It is unfair to say that Python 3 has adoption problems: the rate of adoption is actually slightly ahead of what was initially expected. There are a lot more successful rewrites you can add to the list. For instance, Perl 5 is a rewrite of Perl 4, vim is a rewrite of vi, and less is a rewrite of more.

Re Windows 95: thanks for the correction, I'll update the root node. It seems that Windows NT was a rewrite of Windows 3 and that Windows 95 was derived from the Windows 3 code base. Update: after two objections, I've now expunged all references to Windows NT in the root node. :)

Re Python 3: I only claimed it was meeting "substantial resistance". Maybe that's unfair, depending on your interpretation of "substantial", but it's certainly meeting some resistance, based on random web chatter on the subject. Well, I'm a Python user and I'm resisting it. ;-) My personal opinion is that breaking backward compatibility was unwarranted for a release with relatively modest improvements. Many businesses with large investments in Python 2.x code will resist Python 3 indefinitely because upgrading will prove too risky and/or too expensive.

Thanks for the successful rewrite examples; I knew there must be a lot more but didn't spend enough time researching them.

"It seems that Windows NT was a rewrite of Windows 3" Absolutely not. Windows NT was an entirely separate, new development of 32-bit code, i.e. the Win32 API. Win32s was a thunked Win32 emulation retrofitted to the 16-bit Windows 3. Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error. "Science is about questioning the status quo. Questioning authority". In the absence of evidence, opinion is indistinguishable from prejudice.


I've been hesitant to respond because I know you've put a lot of work and thought into this essay, but there is something here that just isn't ringing true for me. I'm struggling to put it into words, but I think it might be this. The entire essay relies very heavily on three assumptions:

1. There is a clearly recognizable distinction between refactoring and rewriting that is so obvious that it doesn't need to be explained. Rewriting is bad and loses information. Refactoring is good, but only if done continually and with the help of tests.

2. Satisfying the customer is good and over design is bad. The difference between "design what the customer needs and no more" and over design is clear and also doesn't need explaining.

3. Human beings, and particularly crack programmers, are largely incentivized by external rewards: customer satisfaction, money, BMWs, etc.

ELISHEVA: Rewriting vs. Refactoring If you start from a blank source file, I'd consider that rewriting, even if you start transplanting subroutines from the original back into the new file. I feel that taking a working system and transforming it without breaking it is refactoring, while building a new thing (even when borrowing heavily from the original) is rewriting. Of course, you can do both in the same project, as you may refactor some bits and rewrite other bits. The problem I generally see with rewriting is that there's quite a bit of knowledge that's encoded into the system that's not immediately obvious. Things like:

Order dependencies: Systems consuming the output expect a particular sequence of records or operations and the order isn't documented.





Workarounds for problems in other systems: Sometimes bugs become part of the standard interface, and the documentation isn't updated, so writing fresh from the spec causes you to rediscover those issues.

Refactoring addresses these (somewhat) by making one change at a time. By writing tests for all the changes, you'll hopefully capture some of this hidden knowledge in your tests. It won't necessarily be documented any better, but when you put it into production and other systems break, you should be able to trace back to the original test and document accordingly.

Of course, refactoring has its own issues. If you can imagine a cleaner structure for your program and try to refactor towards it, you may find that "you can't get there from here", or that you have to first go to Timbuktu before you can get back home. In my experience (and I have quite a bit), the less you know about the domain or the application, the more you should lean toward refactoring. Similarly, the closer you are to being a domain expert, the more sense rewriting can make. The problem is that it's often difficult to objectively judge just how much knowledge you have about the domain. My approach is normally to find some dividing lines in the system where I can break it apart with the fewest changes possible. Then I can choose to refactor some chunks and rewrite others.

Overdesign and customer needs

You've pretty much hit the nail on the head: if I'm paraphrasing your arguments correctly, whether or not you're overengineering is largely a matter of communication. If you're going off and doing anything beyond what you've discussed with the customer, you're overengineering things. If you think the system needs to do something specific, or that the architecture needs to go in a certain direction, you need to have a talk with the customer about anticipated future changes so you can shape things correctly. So if you discuss things with the customer and get buy-in, then you're doing your job correctly.
If you have discussed things with the customer, and they're adamant about a particular direction, then you need to do what they want, or you're throwing their money away. (Gasp: they have a business need for a framework architecture?) I've heard it mentioned that "functionality is an asset, while code is a liability". Too often, programmers know they need some functionality but build their own rather than buying it. (I, unfortunately, succumb to this temptation a bit too often myself.) I've been trying to periodically take a break from design and/or coding so I can sit back, review the requirements, and stop myself from going down rabbit holes. ...roboticus When your only tool is a hammer, all problems look like your thumb.
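The idea of capturing hidden knowledge in tests before you touch the code is often called a characterization (or "golden master") test: you assert on whatever the legacy code currently does, including undocumented quirks like record order. A minimal sketch, with all names invented for illustration (`generate_report` stands in for some legacy routine):

```python
# Hypothetical characterization test: before refactoring a legacy report
# generator, pin down its *current* output -- including the undocumented
# record ordering that downstream systems may silently depend on.

def generate_report(transactions):
    # Legacy behavior we must preserve: summary rows come out in
    # first-seen account order, not sorted order.
    totals = {}
    for account, amount in transactions:
        totals[account] = totals.get(account, 0) + amount
    return [f"{account}:{total}" for account, total in totals.items()]

def test_report_preserves_first_seen_order():
    transactions = [("B", 10), ("A", 5), ("B", 2)]
    # Asserting the observed order documents the hidden dependency;
    # a "cleaner" sorted rewrite would break this test immediately.
    assert generate_report(transactions) == ["B:12", "A:5"]
```

If a rewrite later emits `["A:5", "B:12"]`, this test fails and points you straight at the order dependency, which is exactly the trace-back-and-document workflow described above.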

To be quite frank, I believe that the essential difference between “refactoring” and “rewriting” is that one term is politically expedient, while the other term is not. In both cases, you are doing the exact same thing in terms of the code: you’re replacing the existing code with something altogether new, which renders the code inoperable (uncompilable) for an extended period of time and which must, in the end, be re-validated to verify that the new code works the same as the old. The term “refactoring” is currently sexy and implies improvement ... “making an already-good thing better” ... whereas “rewriting” (wrongly...) implies previous failure. I do admit to the reality that, sometimes, in order to get approval and funding to do what badly needs to be done, you are obliged to resort to “necessary euphemisms.”

Like it or not, computer software is very fragile (and therefore, costly) stuff, simply because it is riddled with functional and data dependencies. It is, so to speak, “a house of cards,” which can only stand up to a very limited amount of “remodeling.” I simply think that this is the essential and unavoidable nature of the beast. It obligates us to try to do the best that we can, knowing that there are serious limits to that. I submit that there is no silver-bullet language or technique to be had. (He would rightly be a gadzillionaire who discovered it.)

With regard to the point of “overdesign and customer needs,” there is the consideration that (a) the customer does not always know just where his business will take him; and (b) in any case, he is not a professional software builder and does not profess to be. Sometimes you do need to “go beyond what you discussed with the customer,” because in your professional judgment as a software engineer, those additional elements (for example...) create the foundations for future characteristics of the system that are reasonably foreseeable as well as engineering-practical. But you need to be sure that you get every point of what the customer requests, of what you have in turn decided to do, and of every single subsequent change to the foregoing, in writing, signed off, and filed away for all eternity.

Part of the (successful) argument for “frameworks” is that the cost of developing and maintaining them can be cost-amortized (or simply “unpaid-effort amortized”) among many projects that employ them ... thus allowing all of those projects to enjoy the full benefits without incurring the full costs. The use of frameworks imposes a certain specific “world view” upon the project, however ... namely, the world view of that particular framework’s designers, quirks and oddities and all. Choose your project’s spouse very carefully. The project’s entire future direction is necessarily molded around that of the framework, and in a very rigid way, except to the extent that the project’s actual implementation might be, by deliberate choice, architecturally divided into (framework-based) “client” and (non-framework-based) “server” portions. The cost/benefit analysis of using frameworks usually prevails in spite of this consideration, because so much of the constituent code in so many projects isn’t unique at all.

"IBM who saved itself by making the transition from type-writers to computers and business IT design in the late 80's and 90's" Oh dear! "Truth be told, a large part of NT is actually borrowing from *nix" Oh dear, oh dear. If you are going to write authoritatively about history, it would really be better if you actually knew something about it.

Actually, IBM restructured itself (and thereby, I guess, saved itself) by moving from a hardware+OS vendor (OS/360) to a consulting company (see also its purchase of PwC), so that part isn't that far-fetched.


The comment about NT and *nix - that was based on my memory of press reports at the time it was being developed. If I recall correctly they originally wanted to do a green field system and then found that they had to borrow certain parts of the *nix architecture - what exactly I don't remember. I know many of the developers came from DEC, but the few things I'm finding on the web focus on the VMS influence. Business press reports on technology often get it wrong, so I might be remembering someone reporting the DEC hirings and just assuming it was DEC UNIX rather than VMS that ended up in NT.


Are all people driven by this velocity you talk about? By the opportunity to have a BMW in the right spot in the parking lot? Some are, but others aren't. If not, how does that affect the way you manage a project? Will focusing so much on velocity promote incentive or unintentionally kill it? I think it depends very much on the team. There is no formula and no way around tuning management practices to the individuals involved in the work. That is what makes good management hard work. *applause*. I couldn't agree more. I get intensely irritated when I see people being rewarded for writing unclean code at high velocity, being promoted to a new job, while I'm left to clean up their mess. If I can find the motivation (no one is offering me a BMW ;-), I'll discuss intrinsic versus extrinsic motivators and other management issues at length in a future installment of this series.

I think 'scrum' has its place, but not as a model for all project management everywhere and anywhere. I agree. I hope this whole series of articles (especially the first one) has made that clear. If it's appropriate and the team wants to use it, knock yourself out with Scrum, but do not impose it on the team from outside. For the record, while I generally support agile and lean principles, I prefer to think for myself rather than blindly follow a "branded" methodology. If forced to choose a "branded" methodology, I'd choose Kanban.

This is partly due to it being open source and partly due to its small footprint and stability. Code bases come and go, but the core architecture hasn't changed in literally decades. I guess that depends how you define "core architecture". :) I'd say there are at least three competing core architectures for implementing Unix:

- Monolithic kernel
- Microkernel
- Hybrid kernel

The infamous 1992 Linux is obsolete debate between old-hand, respected operating system researcher Andrew Tanenbaum and young upstart Linus Torvalds makes interesting reading. I guess it shows that "theoretical (academic) superiority" does not necessarily translate to success in the marketplace. I think it's fair to say that monolithic kernels still dominate the Unix arena, though Tru64 UNIX is built on top of the Mach microkernel and Mac OS X is built on the XNU Mach/BSD-based hybrid kernel. Though I'd like to see the microkernel-based GNU Hurd succeed, sadly that now looks doubtful after more than twenty years of development -- yet another example of the perils associated with "writing new systems from scratch".

It is worth noting that I personally know the original GNU architect, and he claims that he thought at the time that the easiest way forward was to build on top of BSD. But RMS chose Mach, in part because academia was very fond of microkernels at that point. He says that RMS has acknowledged that this decision was a mistake.

The difference between "design what the customer needs and no more" and over design is clear and also doesn't need explaining. To clarify, I'm not an unthinking follower of "design what the customer needs and no more"; I feel that's a dangerous over-simplification. While writing code that is never used is certainly waste, and one has to beware of over-engineering, I don't view this as a black and white issue. I touched on this in the "Agile Design" section where I stated: Software design is an art requiring experience, talent, good taste, and deep domain, computer science and software usability knowledge. I feel there's a bit more to it than the four simple rules above. and then continued on to present my twenty tortuous rules. :)

This post is proof-positive that, when you see More... at the bottom of what seems to be a very short posting, it pays to click on it. Too bad I can only vote it up “once.”

A Mythical Perl-based Commercial Company For fun, and to better understand why this sort of thing happens, let's consider what might transpire if Perl 5 or Perl 6 formed the crucial core software of a commercial closed-source company writing customer-facing software in cross-functional Scrum teams. Such a company exists. It's the same company that hosts the official repository of the Perl sources, donated a large amount of money to TPF a few years ago, and has been sponsoring YAPCs in both North America and Europe. The company is called Booking.com.

Out of curiosity, do Booking.com staff members actively work on the Perl 5 C sources? Or do they just fund Perl development? To further clarify, the main point of my scenario was to ponder whether employees of such a mythical company, working in cross-functional Scrum teams with a goal of producing "customer value", would be eager to work on the Perl 5 C code, or whether they would try to avoid doing that and instead focus on writing new Perl 5 systems to provide "better customer value at a higher velocity" (and so get to drive a new BMW and park in one of the fancy spots :-). In this mythical scenario, the customer does not know or care about Perl; they just want their systems delivered on time and satisfying their needs. Perl is mimicking the closed-source "infrastructure or core component" that caused so many headaches for Schwaber when implementing Scrum in cross-functional teams that are meant to be self-sufficient; that is, each team is meant to be capable of maintaining the Perl C sources.

A cynical person could say that Booking.com is actively hindering Perl5 development by hiring so many (ex-)pumpkings and other people knowledgeable in Perl :-) As far as I'm aware, demerphq, BooK, Abigail and Rafaël (and likely many others whose names I just currently don't have in mind) work there.

I find myself pondering, more and more and more these days, precisely how much “really new” code there now needs to be in this world... and how long we are going to continue to run that code on “our” machines. I am beginning to suspect that we might well see core business functionality becoming a “software service” that is “hosted in the cloud,” such that the role of traditional software development – scrum or otherwise or what-have-you – just might change quite radically. We might soon find ourselves being referred to as “assemblers,” tho’ not in the traditional computer sense at all. Having built more-or-less the exact same things so many times, we ought to be getting very good at being able to buy them, instead. We say that we build applications. But is that definition changing before our eyes? If the only thing that you need to do anything is a web browser . . .

"We say that we build applications. But is that definition changing before our eyes? If the only thing that you need to do anything is a web browser . . ." Well, speaking as someone who for a living writes things that happen in a web browser, the definition sure seems about the same. I have a pre-written cross-platform UI toolkit with a weak but usually sufficient set of control primitives... but the hard part of most applications isn't assembling the UI anyway. I tend to believe that if the set of pre-written inter-pluggable primitives ever actually becomes rich enough to do all the stuff we "program" to achieve, all we'll have really done is just made a new programming language; it's not qualitatively different, and it's still going to require people with the same skill set as "paleoprogramming".

"I tend to believe that if the set of pre-written inter-pluggable primitives ever actually becomes rich enough to do all the stuff we "program" to achieve, all we'll have really done is just made a new programming language;" I think that was one of the lessons of the 4GL (fourth-generation language) movement. Any toolkit sufficiently expressive to cover all of the client's business cases inevitably needed the full set of control-flow statements. It quickly ceased being something just anyone could use and turned into something that required a programmer. What makes programming programming is not the units we work with - bits and bytes vs. complex objects. Rather, it is the logic that binds them together into something useful. Once that logic begins to include conditionals, loops, and the need to organize collections of data and functionality into discrete, loosely coupled sub-systems or objects, it requires, as you say, "the same skill set as 'paleoprogramming'".

But as we progress in our development and add shiny new things to the top of Perl’s tower, we’re making the bottom more unwieldy. One of these days, at least some part - if not all - of the tower is going to collapse.



This is why we need Perl 6. We now know what our tower should look like, and we need to build it from that design right from the start. Those are the last two paragraphs from "The Tower of Perl" article you link to, published August 9, 2001 ... looks like the tower still did not collapse, and the rewrite is taking ... quite a lot of time. Jenda

Enoch was right!

Enjoy the last years of Rome.

Your thoughts about rewrites seem unorthodox to me. Let me clarify what I think of new projects, rewrites, and refactoring.

Subversion, git, and Mercurial are not rewrites of CVS. They are new projects with similar goals. If they had the exact same feature sets they'd be called clones. There's no rewriting at all. There's just a fresh writing.

A total rewrite is when you start writing the same project over from scratch. You throw your existing code base in the bin and plan to eventually ship a new version that started from different empty files. It probably won't pass the same external tests, and unit tests likely won't resemble the old ones. It likely uses an improved framework or a completely different one based on different concepts.

A partial rewrite is when you rewrite some portion -- a module, a source file, a few functions -- over from scratch. Most of the external tests will still work so long as you don't change too many features at the same time. Unit tests for the rewritten portions will likely need to change unless you carefully stick to the same API and internal interfaces as before.

Refactoring is when you clean up existing code and don't remove any code until you've got the replacement ready so it passes the same unit tests. You don't violate separation of concerns at all while refactoring. You just clean up what's there between change orders. The APIs between modules don't change. The internal interfaces stay the same except among very closely related functions or methods, and you end up with basically the same program. All external tests of the program pass without change. Most unit tests don't change, and the very few that do are just minor tweaks. The implementation is just clearer and maybe the execution path is shorter for the most common cases. Bugs probably don't even get fixed, although they are likely to be easier to notice by reasoning about the code. You're just cleaning the code, and you can generate a new ticket for the newly found bugs.

A change order is executed from any feature requests or bug tickets. This is when functionality changes without a rewrite. Let's talk about bugs first. Generally just enough lines are changed to fix behavior for a bug, and the code around it is only cleaned up at this point if necessary to make the bug fix manageable. The test changes for the bug are to test the fixed behavior and to test for the buggy behavior as well to see if it returns. This often means boundary checking or a little fuzzing. The feature request might be to add, change, or remove a feature. The amount of code change can vary. The only tests that should need to change are those relating to the feature itself in the external tests. The unit tests should change for any new or removed APIs and internal interfaces adjusted for the feature.

What I like to do with a project is to take all the bug-fix change orders and implement them. Then I validate against my tests. Then I refactor the whole program. Then I take the feature requests and apply those. Then I refactor the whole program again. Then, if necessary, I optimize. Then, if I can refactor the optimized code without killing the performance, I refactor again. Then the process starts over with new change orders. Does it always happen this way? Of course not. I'd like that, though. If I took the project over from another team, I'd try to refactor it all up front before making any changes in functionality. Then I'd start with the above process.

This seems quite a bit different from the terminology you're using. I understand not throwing away an important code base. Saying that's what someone writing a new alternative to an unrelated project is doing doesn't seem quite accurate to me, though. Git and Subversion are based on different ideas for accomplishing different but similar tasks compared to CVS, for example. People wanting to rewrite CVS would be trying to end up with something that is CVS but with none of the original code. The other change tracking systems were written with something better than CVS in mind and didn't have any code already bug-fixed and tested for their something better.

Thank you for a well thought out response. While the newer word "refactoring" seems to be pretty well-defined, I feel that the older word "rewriting" is not.

From Martin Fowler's original Refactoring book:

"Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure. ... In essence when you refactor you are improving the design of the code after it has been written."

From refactoring.com:

"Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior. Its heart is a series of small behavior preserving transformations. Each transformation (called a 'refactoring') does little, but a sequence of transformations can produce a significant restructuring. Since each refactoring is small, it's less likely to go wrong. The system is also kept fully working after each small refactoring, reducing the chances that a system can get seriously broken during the restructuring."

Hopefully, most folks will agree with those definitions. Now it gets much harder. For example, your opinion that Subversion, git, and Mercurial are not rewrites of CVS does not agree with mine. My personal view is that Subversion was a "rewrite" of CVS, while the other two were not. I don't feel strongly though. I may well be "unorthodox", as you claim, yet I was pleasantly surprised to discover that many others, including Joel Spolsky, share my opinion.

From Joel Spolsky:

"You may also want to look into Subversion, a ground-up rewrite of CVS with many advantages."

From Open Source Software Development (wikipedia):

"A good example of a complete rewrite was the Subversion version control system, whose developers started from scratch: they believed the codebase of CVS (an older attempt at creating a version control system) was useless and needed to be completely scrapped."

From Concurrent Versions System (c2.com):

"SubVersion is a project to rewrite CVS from scratch, in a more flexible and extendible way - and then to extend it."

Finally, a probing (and relevant to this thread) question from Shlomi Fish interviews Ben Collins-Sussman:

"Subversion was a re-write from the grounds up done by many of the original CVS workers. Do you think it could have been faster to replace CVS (or CVSNT) component by component, thus yielding Subversion?"

To take another example, while I view Perl 6 as a "rewrite" of Perl 5, I suspect many monks would disagree with that view; a couple of them have already made that plain in this thread. Note however that Larry Wall at least seems to view Perl 6 as a "rewrite" of Perl:

"Perl 5 was my rewrite of Perl. I want Perl 6 to be the community's rewrite of Perl and of the community."

Admittedly, that quote was taken from State of the Onion, TPC4, and the direction of Perl 6 has changed a bit since then. I'd be interested to know if Larry still views Perl 6 as a "rewrite" of Perl 5.

Open Source Software Development (wikipedia) neatly summarizes the available rewrite/refactor options:

"Often open source developers feel that their code requires a revamp. This can be either because the code was written or maintained without proper refactoring (as is often the case if the code was inherited from a previous developer), or because a proposed enhancement or extension of it cannot be cleanly implemented with the existing codebase. A final reason for wishing to revamp the code is that the code "smells bad" (to quote Martin Fowler's Refactoring book) and does not meet the developer's standards. There are several kinds of revamps:

- Refactoring implies that the code is moved from one place to another, methods, functions or classes are extracted, duplicate code is eliminated and so forth - all while maintaining an integrity of the code. Such refactoring can be done in small amounts (so-called "continuous refactoring") to justify a certain change, or one can decide on large amounts of refactoring to an existing code that last for several days or weeks.

- "Partial rewrites" involve rewriting a certain part of the code from scratch, while keeping the rest of the code. Such partial rewrites have been common in the Linux kernel development, where several subsystems were rewritten or re-implemented from scratch, while keeping the rest of the code intact.

- Complete rewrites involve starting the project from scratch, while possibly still making use of some old code. A good example of a complete rewrite was the Subversion version control system, whose developers started from scratch: they believed the codebase of CVS (an older attempt at creating a version control system) was useless and needed to be completely scrapped. Another good example of such a rewrite was the Apache web server, which was almost completely re-written between version 1.3.x and version 2.0.x."

Apart from arguing over semantics, the interesting strategic decision we face is whether to extend an existing legacy code base or throw it away and start from scratch. There is no one "right" answer to that question: it depends on the project, the team, the quality of the existing code base, and many other factors. Perhaps the most important thing is striving to prevent legacy code from degenerating into a tangled mess in the first place.

"To take another example, while I view Perl 6 as a "rewrite" of Perl 5, I suspect many monks would disagree with that view; a couple of them have already made that plain in this thread. Note however that Larry Wall at least seems to view Perl 6 as a "rewrite" of Perl: Perl 5 was my rewrite of Perl. I want Perl 6 to be the community's rewrite of Perl and of the community."

Sorry to be pedantic--it's not usually my thing--but I think you are subtly reinterpreting Mr Wall's words in support of your argument. The man himself will set me straight if it is of interest to him, but I think that "Perl 6 to be the ... rewrite of Perl" is considerably different from "Perl 6 as a 'rewrite' of Perl 5". 'Perl', unadorned by a version number, is neither an implementation that can be rewritten, nor a design evolution that can be reimplemented. It is only--and precisely, completely--a concept; an ethos; an idea. As such, Perl 5 wasn't a rewrite of the Perl 4 implementation, but rather a rewrite of the Perl design that was then implemented as Perl 5. Ditto for Perl 6 relative to Perl 5.

One (but a good one) definition of 'rewrite' in the context of software is: "A rewrite in computer programming is the act or result of re-implementing a large portion of existing functionality without re-use of its source code. When the rewrite is not using existing code at all, it is common to speak of a rewrite from scratch."

On the basis of both that definition and my limited experience of both, calling the feature-rich Subversion a rewrite of CVS is like calling the Ford Focus a rewrite of the Ford Model T. They serve a similar niche and target audience, but the way they go about achieving it is utterly different. The goal of re-implementing the same basic functionality is present, but the provision of so much additional functionality makes the term 'rewrite' an inadequate description of the reality.

My intent was not to start an argument over semantics nor over anything else. I merely intended to clarify where I think some imprecision and unnecessary disagreement has entered the thread. If we keep using words we define differently as a basis, then we at least need to know how those words are being used by each party. Otherwise we'll talk past one another and nobody really knows where we would agree and disagree no matter how civil or friendly the discussion. I also think it helps to remember that intentions toward a project can change over time. What one thinks will be a straightforward rewrite from the beginning can change in focus and gain features before the rewrite is done (or even really started). The new design can be a totally different sort of beast from the old, but since it's still in the same lineage the distinction is blurred. In fact, I suspect that the svn folks intended to rewrite CVS but looking back would only loosely use that term for what they finally did. I think Larry would say Perl 5 is a rewrite of Perl 4 from the point of view of both the language and the perl tool. I would probably say that, anyway. I think he intended originally for Perl 6 to be a rewrite of some sort, but the language is the only thing being rewritten IMO. I think Rakudo and Parrot are definitely not rewrites of perl 5.6 or 5.8 although the language implemented is still in the Perl family. How Larry actually does view things of course would be for Larry to say no matter what I think he might say.

Over the course of a great many years, I’ve noticed that the software industry seems to be locked in an ersatz science-fiction movie. The scene opens in a graveyard, beside an open grave, with shovels and picks all around ... and the grave is surrounded by cribs, and in each crib there is a happy young baby. Everyone in the scene wants desperately to grab a shovel and fill in the grave, but no one can do so, because the life-force that is still sustaining everything is within the erstwhile “corpse” that has been consigned to the grave. (In fact, a baleful-looking old man is standing upright in that grave, and he ... the oh-by-the-way source of all that business-sustaining life-force ... is far busier than all the rest of them combined, with nary a shovelful of dirt upon his head.)

As the science-fiction movie progresses, an amazing thing happens. The bouncing, happy babies almost instantly turn into old men, and graves are promptly dug for them, in which they calmly stand, busily doing the jobs for which they were intended, even as a brand-new set of bouncing babies appears. (The engineers promptly turn their attention to the new set of babies, as a new crop of clever young publishers write and sell a new crop of books.)

Perhaps... we should give more serious consideration to the fact that none of the “crappy, old” legacy code in any of our shops ever started out that way, and also to the fact that every bit of the “new and improved,” “Agile™, Scrum™, insert-silver-bullet-buzzword-here™” code that we are now writing will soon turn out that way. Let me say it again: the new-and-improved systems that we are writing today will become the legacy code of next week, regardless of what we do. (Sux to be the bearer of bad news, but this old phart doth stand his ground, and who among ye will stand with me? Who shall stand to show me wrong?) If our methods (“It will be so much better this time!
I promise!!”) were actually new and improved better, then “the legacy code problem” would cease to exist altogether, would it not? ... Perhaps... we should stop trying so hard to bury Caesar, and spend a lot more time figuring out how to give the old boy a facelift and a shave.

The “convoluted, incomprehensible” logic of a legacy system consists of two three four (unfortunately, inseparable) parts:

- The code that is specific to the exact representations of code-and-data that were chosen at some particular time (the “Y2K Problem™” being the most obvious example of this) ...
- The (representation-independent) business logic that is buried in all of that rigid concrete ... but which actually represents the business, as it actually is.
- It is effing huge, consisting of not one but perhaps hundreds or thousands of individual parts. All of them are moving ominously.
- I t   W o r k s .

The “so what?” take-away that I would offer is that every piece of computer software that we have ever designed, and that we ever will design, is a similarly “concrete” structure. Oh, we can cast it in many different languages and dress it all up in many ways (calling every single one of those ways “a silver bullet™” if it suits our marketing purposes), but our essential modus operandi, from the point of view of the physical hardware, really has not evolved at all. The “new and improved™” working methods that we now use are (I would suggest...) really not materially different from the “new and improved™” working methods that our parents used. Or (gasp! I am dating myself here!) ... that we ourselves used ... to create the “crufty, old, legacy” systems that we now decry.

As an example of what I am saying, consider the “New and Improved™ System” that your “New and Improved™ Agile™ Scrum™ New-Buzzword™” team just developed.
The realities of Business are upon ye, even as one-third of your development team just went to greener pastures while another two-thirds of your team just had their visas revoked due to some unforeseen technicality. Your company just swallowed, or got swallowed up by, another company in what was a truly excellent business deal, and their 1,650,000 paying customers must be none the wiser when the deal is consummated eight weeks hence. Can your “methodology” cope with that? I doubt it. But is it pragmatic business reality? Yes.

Perhaps we should all be focusing our collective attention on things like ... change control, or the merging of development teams, or the assimilation of totally unrelated code bases that (while well designed by their own teams in their own time) are now in “a Brady Bunch moment.” Perhaps we are staring too earnestly at the Eastern sky, waiting for a savior who will never come. (I cordially request “religious indulgence,” and promise that I mean no “religious slight” or disrespect for the sake of metaphor.) Maybe we are earnestly pursuing the wrong solution to the wrong problem, just as our predecessors did. Maybe we should take full ownership of “legacy code.” Both our predecessors’, and, soon enough, our own.

/me straps on his hopefully flame-proof bunny suit and waits for the circus to begin...
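The “inseparable parts” point above -- representation-specific code entangled with representation-independent business logic, with the Y2K Problem as the canonical case -- can be sketched in a few lines. This is a purely hypothetical Python illustration (the function name and the two-digit-year scheme are my assumptions, not anything from this thread), showing how a representation choice leaks into an otherwise sound business rule:

```python
def is_policy_expired(expiry_yy: int, today_yy: int) -> bool:
    """Business rule (representation-independent): a policy is expired
    once today's date is past its expiry date.

    Representation choice (the Y2K part): years stored as two digits,
    e.g. 99 for 1999. The comparison below silently bakes that encoding
    into the rule itself, so the two concerns cannot be separated.
    """
    return today_yy > expiry_yy

# A policy that expired in 1999 (99), checked in 2000 (00):
# 0 > 99 is False, so the long-expired policy suddenly looks current.
print(is_policy_expired(expiry_yy=99, today_yy=0))  # prints False
```

Untangling the two means touching every such comparison, which is exactly why the parts are “unfortunately, inseparable.”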

Please rewrite that without using italics, underline, bold or <font color="lightblue">

There. Is that better ? “HTML. Professional drivers on a closed course. Do Not Attempt.”