The age of the single-CPU computer is drawing to a close. Symmetric multiprocessing used to be found only in high-performance servers or scientific computing, but these days it’s in tablets and phones. Hardware designers have been making multi-core parts for years now to keep up with the expected pace of performance improvement. It would be surprising if this trend did anything but continue.

If you’re a programmer and you’re not already writing concurrent software, you should start. 100+ cores could be common ten years from now (or possibly even sooner). At that point, single-threaded software might as well come with a warning label that reads “runs at 1% speed.” That should come as no surprise; people have been pointing it out for years.

Despite its benefits, and the inevitability that it will be increasingly necessary in the future, I hear a lot of concern about making use of shared-memory multi-threading (the most common form of concurrent programming). Much of it comes from offhand comments by colleagues and vague on-line discussions. However, there are also plenty of well-reasoned arguments against using threads from smart people who have clearly spent a lot of time thinking about the problem.

As someone who has been writing and maintaining multi-threaded code for over 10 years, I find this sort of position puzzling. Multi-threaded programming isn’t trivial, but neither are plenty of other issues programmers have to deal with.

What are the common problems people worry about that bugs in a multi-threaded program can cause?

Data corruption

Non-deterministic behavior

Deadlock

Let’s compare that to some common problems encountered due to bugs in programs using pointers and dynamic memory management:

Data corruption

Non-deterministic behavior

Segfault crashes / null pointer exceptions

Memory leaks

I’m not convinced that threading is inherently more problematic. For the most part, both categories of problems can be avoided through a combination of following simple rules and diligent attention to detail. (And aren’t those the foundations of all good engineering?) Just as there are free and commercial tools to help you check for memory usage errors, there are also free and commercial tools to check for threading problems.

I believe multi-threaded programming is simply a set of skills that you need to learn. You have to learn a new set of tools (threads, locks, etc.). You need to understand the pitfalls and the methods for avoiding them. You need to know how to think about the behavior of your program in a different way. It can be daunting until you become accustomed to it. So can using a functional language if you’re used to global variables and other kinds of shared state (e.g. moving from Python to Haskell). So can explicit memory management if you’re used to garbage collection (e.g. moving from Java to C++).

There are some good reasons to be concerned, but they have more to do with insufficient understanding than with the nature of shared-memory multi-threading.

It can be difficult to get a good education in how to program with threads. Even today, concurrent programming remains somewhat esoteric. However, I believe that will change before long. As the typical number of CPUs increases, concurrency is changing from a beneficial option to a necessity. That will increase demand for understanding of concurrent programming, and change concurrency from an ancillary topic to part of the core knowledge every programmer needs. That should lead to a greater supply of quality educational resources, as well as a larger community of programmers comfortable with concurrent programming.

It is also more difficult to incorporate multi-threading into existing software than it is to write software from the ground up using threads. You need to have a clear understanding of when shared mutable state is accessed by multiple threads and take steps to make this safe (e.g. protecting all such accesses with locks, giving each thread its own copy, etc.). This of course includes any calls to libraries which aren’t thread-safe. Unless you have a detailed understanding of all the code in the software you intend to make multi-threaded, it’s easy for bugs to slip through the cracks. Diligent programmers can succeed at this, but it takes careful attention to detail. There are some clever tricks which can help, such as transparently replacing non-thread-safe POSIX library calls (described here), but those will only get you so far. Anyone attempting such a transition for an existing piece of software should plan for thorough code audits, as well as re-writing a non-negligible portion of the code.

There are of course alternatives to shared memory multi-threading, and they are often touted as “solutions” to the “problems” of multi-threading. There’s software transactional memory built into languages such as Haskell and Clojure. (There has even been a credible attempt to add it to C++.) There’s also the message-passing actor model which is implemented by Erlang as well as several C++ libraries (e.g. Theron, libcppa). I’m wary of people offering silver bullets, and I think there’s good reason to be skeptical here. Without going into the gory details, there are some credible arguments that the typical “retry on conflicting access” method of implementing STM performs poorly when contention for shared data is frequent (which will tend to happen as a system scales up). Message passing systems typically copy data sent between actors, which can also be a performance issue. These approaches may have their uses (and some implementations show more promise than others). However they are not without their own problems, and they’re not a concurrency panacea.

The bottom line for me is that I can’t believe that shared-memory multi-threading is inherently incomprehensible or otherwise too difficult to use. I’ve written quality multi-threaded code, and I’ve seen others do it. The necessary skills may still be uncommon, but I don’t see how they can stay that way in the future.

Of course maybe I’m wrong and most programmers will never be able to handle multi-threaded programming. It could be even worse than that, as some people believe concurrent programming will never see widespread use in any form. If programmers can’t muster the ability to make use of the parallel processing capacity in future machines, our personal computers may seem to stop getting faster.