Threads, Not Just for Optimizations

Published on January 24, 2013 by Jesse Storimer

The Ruby community seems to be abuzz with people talking about threads. But often, the conversation is geared towards the fact that our machines all have multiple cores, and we (c|sh)ould be running our code in parallel, blazing fast on ALL the cores. I absolutely think this is a good idea, but I want to talk about the other side of the equation.

Threads aren't just for speed optimizations. Threads can help us organize our programs.

Threads to organize 'processes'

Threads can be a great way to organize our code into processes (I'm talking about operational processes here, not operating system processes), tasks, or functional units. I'll give you a few examples.

First, let's take a look at our beloved MRI. This example doesn't actually involve Ruby's Thread class; it's something internal to MRI itself. If I boot up an irb session and look at my Activity Monitor, I see this:

Before I've run any code, MRI is already using 2 threads. You might think this is due to some gem that I've included in my ~/.irbrc, but it persists even if I start up irb with the -f option. So it's something inside MRI itself.

I asked a semi-related question on Twitter a while ago and got the answer to this:

@jstorimer MRI 1.9 uses a separate thread to handle signals, which starts on VM initialization. pipe may be used to communicate the info. — deepfryed (@deepfryed) March 31, 2012

So when MRI boots, it spawns a thread that registers handlers for Unix signals. You can see this for yourself, if you read C or are feeling adventurous, in MRI's thread_pthread.c. Why does it do this?

It's a smart way to organize one particular process of a program: signal handling. Rubinius uses the same strategy, as does the JVM.

It stems from two important facts:

Firstly, in MRI, Ruby threads are backed by native OS-level threads. Each time you call Thread.start, the OS spawns a new thread.

Secondly:

When a signal is delivered to a multithreaded process that has established a signal handler, the kernel arbitrarily selects one thread in the process to which to deliver the signal and invokes the handler in that thread.

Quoted directly from The Linux Programming Interface Section 33.2.1.

When our code runs in multiple interacting threads, we need to be wary of things that can go awry: synchronization, locking, etc. Throw into the mix that a signal may arrive at any time and asynchronously interrupt our program, and we add a whole new class of problems around re-entrancy, race conditions, etc.

Thankfully, MRI makes this a non-issue. MRI starts up a dedicated thread to receive these asynchronous signals and feed them to the main thread in a synchronous fashion (using the aforementioned Unix pipe). This way, there's a little bit less uncertainty as to what the behaviour will be when a Unix signal arrives. This is actually the recommended approach for multithreaded programs as described in The Linux Programming Interface.
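The same idea can be sketched in userland Ruby. This is a hedged illustration of the pattern, not MRI's actual C implementation: the trap handler does nothing but write a byte to a pipe, and a dedicated thread reads from the pipe and does the real handling synchronously, in one place.

```ruby
# Sketch of the self-pipe pattern: the trap handler only writes a byte;
# a dedicated thread consumes the pipe and handles signals synchronously.
reader, writer = IO.pipe
handled = Queue.new

trap(:USR1) { writer.write_nonblock("U") }

handler = Thread.new do
  while (byte = reader.read(1))
    handled << byte  # do the real signal handling here, in one place
  end
end

Process.kill(:USR1, Process.pid)  # send ourselves a SIGUSR1
result = handled.pop              # wait for the handler thread to see it
```

Because all of the real work happens in one thread reading from the pipe, none of the handling code needs to be re-entrant.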

So Ruby uses a dedicated thread to handle incoming Unix signals. This has nothing to do with speeding things up; it's just good programming practice. One more example.

Threads to wait on Unix processes

When you spawn a new Unix process using fork, you really should either wait for it to finish using Process.wait, or detach from it using Process.detach. The reason is that when the process exits, it leaves behind some information about its exit status. This status info can't be cleaned up until it's been consumed by the parent process using Process.wait. When you use something like Process.spawn or backticks, Process.wait is called internally to clean up the aforementioned status info.
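As a minimal sketch of the Process.wait side of this (Unix-only, since it uses fork):

```ruby
# Fork a child, then reap its exit status so no zombie is left behind.
pid = fork { exit 42 }

_, status = Process.wait2(pid)  # blocks until the child exits, consumes its status
exit_code = status.exitstatus   # the value the child passed to exit
```

Until the parent makes that wait call, the kernel keeps the dead child around as a zombie just to preserve the status info.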

Sometimes, if you're forking a process directly using fork, you don't care about the exit status info. In this case you can use Process.detach to register your disinterest in that status info. Whereas Process.wait is a pretty direct hook into the wait(2) system call, Process.detach is purely a Rubyland construct. We can look to Rubinius to show us the pure-Ruby implementation:

    def self.detach(pid)
      raise ArgumentError, "Only positive pids may be detached" unless pid > 0

      thread = Thread.new { Process.wait pid; $? }
      thread[:pid] = pid
      def thread.pid; self[:pid] end
      thread
    end

So Process.detach is just a thin wrapper around Process.wait: a background thread waits on the child and captures the return value of Process.wait, while the main thread continues execution concurrently.
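A short usage sketch: the thread that Process.detach returns can still be joined later (or have its value read) if you change your mind and want the status after all.

```ruby
# Detach from a short-lived child; a background thread reaps it for us.
pid = fork { sleep 0.1 }

waiter = Process.detach(pid)  # a Thread wrapping Process.wait for this pid
status = waiter.value         # joins the thread; yields the Process::Status
ok = status.success?          # true here, since the child exited cleanly
```

If you never look at the thread again, that's fine too; the child is reaped either way.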

Again, this has nothing to do with speed, but allows the proper housekeeping to be done without burdening the program with extra state.

Speed is good too

Threads certainly can be used to make our code do things faster and more efficiently. My intention was to shed some light on other use cases for threads that don't get talked about as much.