It's (Not) All Been Done

By Herb Sutter

Every decade or so we have a major revolution in the way we develop software and the environments we develop for. But, unlike the object and web revolutions, we can see the concurrency revolution coming.

A briefer version of this article appeared in Dr. Dobb's Journal, 31(9), September 2006.

This is a wonderful time to be a software engineer. Every new frontier is the exhilarating domain of inventors and explorers. Life on the frontier can be primitive and even dangerous, but in return the pioneers have an overwhelming compensation: There’s so much room, so much uncharted territory, and everything is just waiting to be discovered and invented.

The 1950s and 1960s were like that. Computer science was new, and you know the pioneers—names like Edsger Dijkstra and Sir Tony Hoare. But have you ever felt that all the cool stuff was already done back in the 1950s and 1960s, back when electronic computers were new? Consider that all of the major “new” technologies of the past 15 years were originally invented in those early days:

Garbage collection? Java didn’t invent it; McCarthy invented it for Lisp in 1958 and published his paper in 1960.

Object-oriented programming? Smalltalk and C++ didn’t invent it; Simula did, vintage 1965, and released in 1967.

Parameterized generic types? Ada and C++ didn’t invent those; Strachey did, also in 1967. Incidentally, 1967 was a banner year; it also saw the first design meetings for the ARPANET.

But bringing those technologies to mainstream programmers required developing them further, including doing new research and development to make the technologies more broadly usable (e.g., think Mosaic), and making them into widely usable polished products. Languages like Smalltalk, C++, Java, C#, Python, and many others, and all the smart people who developed them and worked on their many implementations, deserve huge credit for making that huge engineering leap.
So, yes, it surely is humbling to realize that all the hot stuff that seems so brand new today actually existed back when our parents still had to carry their stacks of punched cards to school in snow up to their armpits and uphill both ways. (And that assumes our parents were among the lucky few to have shared access to a mainframe computer that was far less powerful than your PDA.) But Java brought garbage collection to the mainstream, Smalltalk did the same for objects, C++ did the same for generic types—and next we’re going to do the same for concurrency and parallel programming.

Just think: For the first time in the history of computing, mainstream computers are no longer von Neumann machines, and never will be again—they are parallel. We have already largely succeeded in the quest to put a computer in every home and purse; now we’re effectively going to be putting a Cray into every den and pocket. Given that our applications are going to run on parallel machines, this is a time of enormous opportunity, along with a great deal of work.

Sure, concurrency has been done before; parallel computing was researched by some of the very people already mentioned (e.g., Hoare’s seminal paper on “Communicating Sequential Processes”), and companies like Cray have been doing it for years. But the mainstream programmer and mainstream environments have most certainly not been doing it routinely, and we have only now begun the process of bringing concurrency and parallel programming to the mainstream.
At the end of 2004, in "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software," I wrote that “the biggest sea change in software development since the OO revolution is knocking at the door, and its name is Concurrency.” The concurrency revolution now getting underway will be as significant as the object revolution of the late 1980s and 1990s in its impact on programming languages and development tools. Just as during the early 1990s we attended conference sessions with breathless topics like “what is an object” and “what is a virtual function,” in coming years we’ll attend conference sessions with breathless topics like “what is an active object” and “what is a future.” This has already begun.

Centuries ago, Galileo turned the telescope on the heavens, and Newton gave us the theory of gravitation. For the century or three after that, undoubtedly some researchers felt that they were “merely” making incremental improvements. But the truth is that there was still much to do; just ask Albert Einstein, Robert Goddard, and Wernher von Braun. Yes, Galileo was the first to see that the lunar surface was irregular and that the Milky Way was a densely packed gauze of stars. Yet some of our contemporaries have walked on the face of that moon, while others are mapping distant galaxies, superclusters of galaxies, and the universal background radiation. Galileo could see individual stars perhaps halfway across our galaxy, and so we know he could see on the order of 50,000 years into the past; today we can peer back in time more than 13 billion years. We can be glad Einstein and von Braun didn’t decide to pursue some other field because it seemed Galileo and Newton had already done all the interesting stuff. And likewise Galileo and Newton happily didn’t decide physics was just sooo over because all the cool stuff had already been accomplished by Euclid and Archimedes.
Sometimes it might seem that all the cool stuff was invented in the 1950s and 1960s, and that the pioneers had all the fun when everything was new and waiting to be discovered. The truth is that our industry is still in its infancy. We can admire the contributions of the Galileos and the Newtons, but they themselves stood on the shoulders of earlier giants, just as we will stand on theirs.

Yes, this is a wonderful time to be a software engineer. For the rest of this decade and into part of the next, we’re going to do for concurrency what we did for objects and garbage collection and the web: Stand on the results of past giants, develop their ideas still further, and bring the concepts into the mainstream. It’ll require using the best techniques of the past, developing new ones, and a pile of work to turn it all into broadly usable products. But, unlike the object and web revolutions, we can see the concurrency revolution coming.

Every decade or so we have a major revolution in the way we develop software and the environments we develop for. Each time, we carefully navigate a turn and then accelerate down the next straightaway: We turned the corner into the structured programming revolution, then the PC revolution, then the OO revolution, then the Internet revolution. Now we’re just navigating the twists and turns of Parallel Alley and getting ready to open the throttle again down the next chunk of open road.

Pedal to the metal, everyone. The new world awaits.