Are there any computer programs that you wish were faster? Time was, you could solve that problem just by waiting; next year’s system would run them faster. No longer: next year’s system will do more computing all right, but by giving you more CPUs, running at this year’s speed, to work with. So the only way to make your program faster is to use more CPUs. Bad news: this is hard. Good news: we have some really promising technologies to help make it less hard. Bad news: none of them are mainstream. But I’m betting that will change.

The “Java Moment” · On my recent excursion to Japan, I had the chance for a few conversations with Bruce Tate. He advanced a line of thinking that I found compelling: Right now is the time when the concurrent-programming winners will emerge. He sees an analogy to Object-Orientation in the early nineties: Several O-O languages were in play (most notably C++ and Smalltalk), but O-O hadn’t penetrated the application-development mainstream. Then Java came along, and turned out to have just the right characteristics to push O-O into the middle of the road.

Thus the analogy. Right at the moment, we have a bunch of candidate technologies to fill the concurrent-programming void; obvious examples include Erlang, Scala, Clojure, and Haskell. While there are common threads, they differ from each other in many essential ways, and between the lot of them offer a great many different characteristics. The fact is, we don’t know at this moment which laundry-list of features is going to turn some candidate into The Java Of Concurrency.

What I’m Up To · I think that right now is a good time to have a run at this problem. I’ve been scanning back and forth around the Internet, information-gathering, comparing, and contrasting. I’ll publish my research notes here on the blog as I go along.

Here’s what I have so far:

Who Should Care · Developers, actually; they have a direct incentive: by doing concurrency well, their apps will run faster. Their employers do too, in that the same performance level will require less iron.

In fact, I suspect the most likely candidates to get behind this are the chip builders; it’s traditionally been seen as their role to push the development tools that make their products shine. So I suspect that Intel, AMD, IBM, and the Sun part of Oracle are the most likely candidates to go all activist. In particular, the Sun SPARC processors have been leading the more-cores-and-damn-the-gigahertz charge for the last few years, so we ought to be the ones who care the most.

Linkography · I’ve started working on a Late-2009 Concurrency Linkography page over at Project Kenai; it seems like something that fits more comfortably on a wiki than a blog. If there’s anyone else out there who wants to contribute, I’m open-minded. I’m leaning toward things that are contemporary rather than historic in value.

Non-Problem · I think the concurrency problem is pretty well solved for one class of application: the Web server and most of what runs in it. If you know what you’re doing, whether you’re working in an Apache module or Java EE or Rails or Django or PHP or .NET, given enough load you can usually arrange to saturate as many cores as you can get your hands on.

That’s a big chunk of the application space, but not all of it. And even in web apps, it’s not uncommon to have pure application code that needs to wrestle the concurrency dragon.

Assumption · I’m taking the following as an axiom: Exposing real pre-emptive threading with shared mutable data structures to application programmers is wrong. Once you get past Doug Lea, Brian Goetz, and a few people who write operating systems and database kernels for a living, it gets very hard to find humans who can actually reason about threads well enough to be usefully productive.
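To make the assumption concrete, here’s a minimal Java sketch of the lost-update problem (class and field names are mine, purely illustrative): eight threads bump a plain `long` and an `AtomicLong` the same number of times. The plain counter silently loses increments, because `count++` is a read-modify-write sequence, not an atomic operation.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: why shared mutable state is treacherous.
// Several threads increment two counters; the plain long races,
// the AtomicLong does not.
public class SharedStateDemo {
    static long plainCount = 0;                        // unsynchronized: races
    static final AtomicLong atomicCount = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        final int threads = 8, perThread = 100_000;
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    plainCount++;                      // lost updates likely
                    atomicCount.incrementAndGet();     // always correct
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println("expected: " + (threads * perThread));
        System.out.println("plain:    " + plainCount);       // usually falls short
        System.out.println("atomic:   " + atomicCount.get());
    }
}
```

The insidious part is that the racy version often produces the right answer on a lightly-loaded developer box and only starts dropping updates under production load.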

When I give talks about this stuff, I assert that threads are a recipe for deadlocks, race conditions, horrible non-reproducible bugs that take endless pain to find, and hard-to-diagnose performance problems. Nobody ever pushes back.
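The deadlock part of that recipe is easy to demonstrate. Here’s a hedged Java sketch (names are illustrative) of the classic lock-ordering trap: two threads each grab one lock and then wait for the other’s. It uses `tryLock` with a timeout purely so the demo terminates and reports the stall; with plain `lock()` calls, both threads would simply hang forever.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the classic lock-ordering deadlock: two threads take the
// same two locks in opposite order.
public class DeadlockDemo {
    static final ReentrantLock lockA = new ReentrantLock();
    static final ReentrantLock lockB = new ReentrantLock();

    // Returns true if both threads got stuck waiting on each other's lock.
    static boolean wouldDeadlock() throws InterruptedException {
        CountDownLatch bothHoldFirst = new CountDownLatch(2);
        boolean[] starved = new boolean[2];
        Thread t1 = new Thread(() -> contend(lockA, lockB, bothHoldFirst, starved, 0));
        Thread t2 = new Thread(() -> contend(lockB, lockA, bothHoldFirst, starved, 1));
        t1.start(); t2.start();
        t1.join(); t2.join();
        return starved[0] && starved[1];
    }

    static void contend(ReentrantLock first, ReentrantLock second,
                        CountDownLatch latch, boolean[] starved, int id) {
        first.lock();
        try {
            latch.countDown();
            latch.await();                          // both now hold one lock each
            if (second.tryLock(200, TimeUnit.MILLISECONDS)) {
                second.unlock();                    // got it: no deadlock this run
            } else {
                starved[id] = true;                 // stuck: the other thread holds it
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            first.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("deadlock detected: " + wouldDeadlock());
    }
}
```

The standard cure, always acquiring locks in a single global order, is exactly the kind of whole-program invariant that no compiler checks and that application programmers routinely get wrong.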