At least once a year there's a maelstrom of posts about a new Ruby implementation with stellar numbers. These numbers are usually based on very early experimental code, and they are rarely accompanied by information on compatibility. And of course we love to see crazy performance numbers, so many of us eat this stuff up.

Posting numbers too early is a real disservice to any project, since they almost certainly don't represent the eventual real-world performance people will see. It encourages folks to look to the future, but it also marginalizes implementations that already provide both compatibility and performance, and ignores how much work it has taken to get there. Given how much we like to see numbers, and how thirsty the Ruby community is for "a fastest Ruby", I don't know whether this will ever change.

I thought perhaps a discussion about the process of optimizing JRuby might help folks understand what's involved in building a fast, compatible Ruby implementation, so that these periodic shootouts don't get blown out of proportion. Ruby can be fast, certainly even faster than JRuby is today. But getting there while maintaining compatibility is very difficult.

Performance Optimization, JRuby-style

We begin our exploration by running JRuby in interpreted mode, the slowest way you can run JRuby. We'll be using the "tak" benchmark, since it's simple and makes it easy to demonstrate relative performance at each optimization level.

# Takeuchi function performance, tak(24, 16, 8)
def tak x, y, z
  if y >= x
    return z
  else
    return tak( tak(x-1, y, z),
                tak(y-1, z, x),
                tak(z-1, x, y))
  end
end

require "benchmark"

N = (ARGV.shift || 1).to_i

Benchmark.bm do |make|
  N.times do
    make.report do
      i = 0
      while i<10
        tak(24, 16, 8)
        i+=1
      end
    end
  end
end

And here's our first set of results. I have provided Ruby 1.8.6 and Ruby 1.9.1 numbers for comparison.

Ruby 1.8.6p114:

➔ ruby bench/bench_tak.rb 5
      user     system      total        real
 17.150000   0.120000  17.270000 ( 17.585128)
 17.170000   0.140000  17.310000 ( 17.946869)
 17.180000   0.160000  17.340000 ( 18.234570)
 17.180000   0.150000  17.330000 ( 17.779536)
 18.790000   0.190000  18.980000 ( 19.560232)

Ruby 1.9.1p0:

➔ ruby191 bench/bench_tak.rb 5
      user     system      total        real
  3.570000   0.030000   3.600000 (  3.614855)
  3.570000   0.030000   3.600000 (  3.615341)
  3.560000   0.020000   3.580000 (  3.608843)
  3.570000   0.020000   3.590000 (  3.591833)
  3.570000   0.020000   3.590000 (  3.640205)

JRuby 1.3.0-dev, interpreted, client VM:

➔ jruby -X-C bench/bench_tak.rb 5
      user     system      total        real
 24.981000   0.000000  24.981000 ( 24.903000)
 24.632000   0.000000  24.632000 ( 24.633000)
 25.459000   0.000000  25.459000 ( 25.459000)
 29.122000   0.000000  29.122000 ( 29.122000)
 29.935000   0.000000  29.935000 ( 29.935000)

Ruby 1.9 posts some nice numbers here, and JRuby shows how slow it can be when doing no optimizations at all. The first change we look at, and which we recommend to any users seeking best-possible performance out of JRuby, is to use the JVM's "server" mode, which optimizes considerably better.

JRuby 1.3.0-dev, interpreted, server VM:

➔ jruby --server -X-C bench/bench_tak.rb 5
      user     system      total        real
  8.262000   0.000000   8.262000 (  8.192000)
  7.789000   0.000000   7.789000 (  7.789000)
  8.012000   0.000000   8.012000 (  8.012000)
  7.998000   0.000000   7.998000 (  7.998000)
  8.000000   0.000000   8.000000 (  8.000000)

The "server" VM differs from the default "client" VM in that it will optimistically inline code across calls and optimize the resulting code as a single unit. This obviously allows it to eliminate costly x86 CALL operations, but even more than that it allows optimizing algorithms which span multiple calls. By default, OpenJDK will attempt to inline up to 9 levels of calls, so long as they're monomorphic (only one valid target), not too big, and no early assumptions are changed by later code (like if a monomorphic call goes polymorphic later on). In this case, where we're not yet compiling Ruby code to JVM bytecode, this inlining is mostly helping JRuby's interpreter, core classes, and method-call logic. But already we're 3x faster than interpreted JRuby on the client VM.
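To picture what "monomorphic" means here, consider a sketch in plain Ruby (the classes and method names are invented for illustration, not anything from JRuby): a call site that only ever sees one receiver type is easy for an inlining JIT to specialize; the first time a second type shows up, that optimistic assumption is invalidated.

```ruby
# Hypothetical classes to illustrate call-site morphism.
class Dog
  def speak; "woof"; end
end

class Cat
  def speak; "meow"; end
end

def greet(animal)
  # This call site is monomorphic while only Dogs arrive, so an
  # inlining JIT can splice Dog#speak straight into greet. The first
  # Cat makes it polymorphic, forcing a real dispatch (or a guard).
  animal.speak
end

sounds = [greet(Dog.new), greet(Dog.new), greet(Cat.new)]
```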

The next optimization will be to turn on the compiler. I've modified JRuby for the next couple of runs to *only* compile and not do any additional optimizations. We'll discuss those optimizations as I add them back.

JRuby 1.3.0-dev, compiled (unoptimized), server VM:

➔ jruby --server -J-Djruby.astInspector.enabled=false bench/bench_tak.rb 5
      user     system      total        real
  5.436000   0.000000   5.436000 (  5.376000)
  3.655000   0.000000   3.655000 (  3.655000)
  3.662000   0.000000   3.662000 (  3.662000)
  3.683000   0.000000   3.683000 (  3.683000)
  3.668000   0.000000   3.668000 (  3.668000)

By compiling, without doing any additional optimizations, we're able to improve performance 2x again. Because we're now JITing Ruby code to JVM bytecode, and the JVM eventually JITs JVM bytecode to native code, our Ruby code actually starts to benefit from the JVM's built-in optimizations. We're making better use of the system CPU and not making nearly as many calls as we would from the interpreter (since the interpreter is basically a long chain of calls, one for each low-level Ruby operation).
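That "long chain of calls" can be sketched with a toy AST interpreter (a made-up miniature, not JRuby's actual interpreter): evaluating even a trivial expression costs one method call per node before any Ruby-level work happens, which is exactly the overhead that compiling to bytecode removes.

```ruby
# A toy AST interpreter: evaluating 2 + (7 - 3) costs one interpret
# call per node. A compiler instead emits the arithmetic directly.
Node = Struct.new(:op, :left, :right) do
  def interpret
    case op
    when :lit then left                              # leaf: the value itself
    when :add then left.interpret + right.interpret  # one call per child
    when :sub then left.interpret - right.interpret
    end
  end
end

expr = Node.new(:add,
                Node.new(:lit, 2),
                Node.new(:sub, Node.new(:lit, 7), Node.new(:lit, 3)))
result = expr.interpret  # => 6
```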

Next, we'll turn on the simplest and oldest JRuby compiler optimization, "heap scope elimination".

JRuby 1.3.0-dev, compiled (heap scope optz), server VM:

➔ jruby --server bench/bench_tak.rb 5
      user     system      total        real
  4.014000   0.000000   4.014000 (  3.942000)
  2.776000   0.000000   2.776000 (  2.776000)
  2.760000   0.000000   2.760000 (  2.760000)
  2.769000   0.000000   2.769000 (  2.769000)
  2.768000   0.000000   2.768000 (  2.769000)

The "heap scope elimination" optimization eliminates the use of an in-memory store for local variables. Instead, when there's no need for local variables to be accessible outside the context of a given method, they are compiled as Java local variables. This allows the JVM to put them into CPU registers, making them considerably faster than reading or writing them from/to main memory (via a cache, but still slower than registers). This also makes JRuby ease up on the JVM's memory heap, since it no longer has to allocate memory for those scopes on every single call. This now puts us comfortably faster than Ruby 1.9, and it represents the set of optimizations you see in JRuby 1.2.0.
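The distinction the optimization keys on can be shown in plain Ruby (method names invented for illustration): locals that never leave a method can live in JVM local slots and registers, while a local captured by a block must live somewhere a closure can reach, i.e. a heap-allocated scope.

```ruby
# Locals here never escape the method: a compiler is free to keep
# a, b, and sum in JVM local slots (and ultimately CPU registers).
def no_escape(a, b)
  sum = a + b
  sum * 2
end

# `total` is captured by the block, so it must live in a scope object
# that both the method and the closure can see -- the "heap scope" case
# that the optimization detects and avoids when possible.
def escapes(values)
  total = 0
  values.each { |v| total += v }
  total
end

x = no_escape(2, 3)     # => 10
y = escapes([1, 2, 3])  # => 6
```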

Is this the best we can do? No, we can certainly do more, and some such experimental optimizations are actually already underway. Let's continue our exploration by turning on another optimization similar to the previous one: "backtrace-only frames".

JRuby 1.3.0-dev, compiled (heap scope + backtrace frame optz), server VM:

➔ jruby --server -J-Djruby.compile.frameless=true bench/bench_tak.rb 5
      user     system      total        real
  3.609000   0.000000   3.609000 (  3.526000)
  2.600000   0.000000   2.600000 (  2.600000)
  2.602000   0.000000   2.602000 (  2.602000)
  2.598000   0.000000   2.598000 (  2.598000)
  2.602000   0.000000   2.602000 (  2.602000)

Every Ruby call needs to store information above and beyond local variables. There's the current "self", the current method visibility (used for defining new methods), which class is currently the "current" one, backref and lastline values ($~ and $_), backtrace information (caller's file and line), and some other miscellany for handling long jumps (like return or break in a block). In most cases, this information is not used, and so storing it and pushing/popping it for every call wastes precious time. In fact, other than backtrace information (which needs to be present to provide Ruby-like backtrace output), we can turn most of the frame data off. This is where we start to break Ruby a bit, though there are ways around it. But you can see we get another small boost.
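For a concrete example of that frame-carried state, the backref $~ is set implicitly by a match and read implicitly afterward, per calling frame, which is exactly why a frame has to exist to hold it:

```ruby
# $~ (the backref) is frame-local state: the match below sets it, and
# any later read in the same frame -- $~, $1, $` and friends -- pulls
# from it. Supporting this is part of what frame push/pop pays for.
"optimize" =~ /t(i)m/
group = $1            # => "i", read from the frame's backref
pre   = $~.pre_match  # => "op"
```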

What if we eliminate frames entirely and just use the JVM's built-in backtrace logic? It turns out that having any pushing/popping of frames, even with only backtrace data, still costs us quite a bit of performance. So let's try "heap frame elimination":

JRuby 1.3.0-dev, compiled (heap scope + heap frame optz), server VM:

➔ jruby --server -J-Djruby.compile.frameless=true bench/bench_tak.rb 5
      user     system      total        real
  2.955000   0.000000   2.955000 (  2.890000)
  1.904000   0.000000   1.904000 (  1.904000)
  1.843000   0.000000   1.843000 (  1.843000)
  1.823000   0.000000   1.823000 (  1.823000)
  1.813000   0.000000   1.813000 (  1.813000)

By eliminating frames entirely, we're a good 33% faster than the fastest "fully framed" run you'd get with stock JRuby 1.2.0. You'll notice the command line here is the same; that's because we're venturing into more and more experimental code, and in this case I've actually forced "frameless" to be "no heap frame" instead of "backtrace-only heap frame". And what do we lose with this change? We no longer would be able to produce a backtrace containing only Ruby calls, so you'd see some JRuby internals in the trace, similar to how Rubinius shows Rubinius internals. But we're getting respectably fast now.
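What the per-call frame buys you is visible through Kernel#caller, which reports Ruby files and lines rather than implementation internals (a small script-level sketch):

```ruby
# caller is assembled from the frame stack: file and line for each
# *Ruby* call. Without heap frames, a trace would have to come from
# the JVM instead, exposing implementation frames in the output.
def inner
  caller
end

def outer
  inner
end

trace = outer
top = trace.first  # something like "script.rb:9:in `outer'"
```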

Next up we'll turn on some optimizations for math operators.

JRuby 1.3.0-dev, compiled (heap scope, heap frame, fastops optz), server VM:

➔ jruby --server -J-Djruby.compile.frameless=true -J-Djruby.compile.fastops=true bench/bench_tak.rb 5
      user     system      total        real
  2.291000   0.000000   2.291000 (  2.225000)
  1.335000   0.000000   1.335000 (  1.335000)
  1.337000   0.000000   1.337000 (  1.337000)
  1.344000   0.000000   1.344000 (  1.344000)
  1.346000   0.000000   1.346000 (  1.346000)

Most of the time, when calling + or - on an object, we do the full Ruby dynamic dispatch cycle. Dispatch involves retrieving the target object's metaclass, querying for a method (like "+" or "-"), and invoking that method with the appropriate arguments. This works fine for getting us respectable performance, but we want to take things even further. So JRuby has experimental "fast math" operations to turn most Fixnum math operators into static calls rather than dynamic ones, allowing most math operations to inline directly into the caller. And what do we lose? This version of "fast ops" makes it impossible to override Fixnum#+ and friends, since whenever we call + on a Fixnum it's going straight to the code. But it gets us another nearly 30% improvement.
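What fast ops trades away can be demonstrated directly: in standard Ruby, even Fixnum arithmetic is a genuine method call, so it can be intercepted and overridden. (The sketch below uses Module#prepend on Integer, since later Rubies folded Fixnum into Integer; the principle is the same.)

```ruby
# Even 1 + 2 is a dynamic dispatch in standard Ruby, so it can be
# intercepted. Fast ops hardwires the call to the arithmetic code,
# giving up exactly this ability.
$intercepted = []

module TracingPlus
  def +(other)
    $intercepted << other  # record the call without using + ourselves
    super                  # fall through to the real arithmetic
  end
end

Integer.prepend(TracingPlus)

sum = 1 + 2  # routed through TracingPlus#+ first
```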

Up to now we've still also been updating a lot of per-thread information. For every line, we're tweaking a per-thread field to say what line number we're on. We're also pinging a set of per-thread fields to handle the unsafe "kill" and "raise" operations on each thread...basically we're checking to see if another thread has asked the current one to die or raise an exception. Let's turn all that off:

JRuby 1.3.0-dev, compiled (heap scope, heap frame, fastops, threadless, positionless optz), server VM:

➔ jruby --server -J-Djruby.compile.frameless=true -J-Djruby.compile.fastops=true -J-Djruby.compile.positionless=true -J-Djruby.compile.threadless=true bench/bench_tak.rb 5
      user     system      total        real
  2.256000   0.000000   2.256000 (  2.186000)
  1.304000   0.000000   1.304000 (  1.304000)
  1.310000   0.000000   1.310000 (  1.310000)
  1.307000   0.000000   1.307000 (  1.307000)
  1.301000   0.000000   1.301000 (  1.301000)

We get a small but measurable performance boost from this change as well.
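Those per-thread checks are what make operations like Thread#raise work: one thread asks another to raise, and the target only notices because it polls those per-thread event fields. A minimal sketch of the feature threadless gives up:

```ruby
# Thread#raise delivers an exception *into* another thread; the target
# notices only because it periodically checks its per-thread event
# flags -- the very polling the threadless option removes.
message = nil

t = Thread.new do
  begin
    sleep  # park until another thread pings us
  rescue RuntimeError => e
    message = e.message
  end
end

Thread.pass until t.stop?         # wait for the thread to park
t.raise(RuntimeError, "wake up")  # asynchronous raise into t
t.join

message  # => "wake up"
```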

The experimental optimizations up to this point (other than threadless) comprise the set of options for JRuby's --fast option, shipped in 1.2.0. The --fast option additionally tries to statically inspect code to determine whether these optimizations are safe. For example, if you're running with --fast but still access backrefs, we're going to create a frame for you anyway.

We're not done yet. I mentioned earlier the JVM gets some of its best optimizations from its ability to profile and inline code at runtime. Unfortunately in current JRuby, there's no way to inline dynamic calls. There's too much plumbing involved. The upcoming "invokedynamic" work in Java 7 will give us an easier path forward, making dynamic calls as natural to the JVM as static calls, but of course we want to support Java 5 and Java 6 for a long time. So naturally, I have been maintaining an experimental patch that eliminates most of that plumbing and makes dynamic calls inline on Java 5 and Java 6.

JRuby 1.3.0-dev, compiled ("--fast", dyncall optz), server VM:

➔ jruby --server --fast bench/bench_tak.rb 5
      user     system      total        real
  2.206000   0.000000   2.206000 (  2.066000)
  1.259000   0.000000   1.259000 (  1.259000)
  1.258000   0.000000   1.258000 (  1.258000)
  1.269000   0.000000   1.269000 (  1.269000)
  1.270000   0.000000   1.270000 (  1.270000)

We improve again by a small amount, always edging the performance bar higher and higher. In this case, we don't lose compatibility, we lose stability. The inlining modification breaks method_missing and friends, since I have not yet modified the call pipeline to support both inlining and method_missing. And there's still a lot of extra overhead here that can be eliminated. But in general we're still mostly Ruby, and even with this change you can run a lot of code.
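For reference, method_missing, the feature the inlining patch currently breaks, depends on the full dispatch path noticing a failed lookup and rerouting it with the method name reified as an argument (the class below is invented for illustration):

```ruby
# method_missing only fires because dynamic dispatch, on failing to
# find a method, retries with the name as an argument. A call site
# inlined straight to a target body skips that machinery entirely.
class NullLogger
  def method_missing(name, *args)
    :ignored  # swallow any call: debug, warn, anything
  end

  def respond_to_missing?(name, include_private = false)
    true
  end
end

log = NullLogger.new
r1 = log.debug("starting up")  # => :ignored
r2 = log.warn("low disk")      # => :ignored
```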

This represents the current state of JRuby. I've taken you from slow, compatible execution, through fast, compatible execution, all the way to faster, less-compatible execution. There's certainly a lot more we can do, and we're not yet as fast as some of the incomplete experimental Ruby VMs. But we run Ruby applications, and that's no small feat. We will continue making measured steps, always ensuring compatibility first so each release of JRuby is more stable and more complete than the last. If we don't immediately leap to the top of the performance heap, there are always good reasons for it.

Performance Optimization, Duby-style

As a final illustration, I want to show the tak performance for a language that looks like Ruby, and tastes like Ruby, but boasts substantially better performance: Duby.

def tak(x => :fixnum, y => :fixnum, z => :fixnum)
  unless y < x
    z
  else
    tak( tak(x-1, y, z),
         tak(y-1, z, x),
         tak(z-1, x, y))
  end
end

puts "Running tak(24,16,8) 1000 times"

i = 0
while i<1000
  tak(24, 16, 8)
  i+=1
end

This is the Takeuchi function written in Duby. It looks basically like Ruby, except for the :fixnum type hints in the signature. Here's a timing of the above script (which runs the same tak(24, 16, 8) call as before, but 1000 times rather than 10 times per timed run), running on the server JVM:

➔ time jruby -J-server bin/duby examples/tak.duby
Running tak(24,16,8) 1000 times

real    0m13.657s
user    0m14.529s
sys     0m0.450s

So what you're seeing here is that Duby runs "tak(24,16,8)", the same function we tested in JRuby above, in an average of about 0.014 seconds per call--roughly an order of magnitude faster than the fastest JRuby configuration above (which needed about 0.13 seconds per call), and faster still than the incomplete, experimental implementations of Ruby. What does this mean? Absolutely nothing, because Duby is not Ruby. But it shows how fast a Ruby-like language can get, and it shows there's a lot of runway left for JRuby to optimize.

Be a (Supportive) Critic!

So the next time someone posts an article with crazy-awesome performance numbers for a Ruby implementation, by all means applaud the developers and encourage their efforts, since they certainly deserve credit for finding new ways to optimize Ruby. But then ask yourself and the article's author how much of Ruby the implementation actually supports, because it makes a big difference.

Update, April 4: Several people told me I didn't go quite far enough in showing that by breaking Ruby you could get performance. And after enough cajoling, I was convinced to post one last modification: recursion optimization.

JRuby 1.3.0-dev, compiled ("--fast", dyncall optz, recursion optz), server VM:

➔ jruby --server --fast bench/bench_tak.rb 5
      user     system      total        real
  0.524000   0.000000   0.524000 (  0.524000)
  0.338000   0.000000   0.338000 (  0.338000)
  0.325000   0.000000   0.325000 (  0.325000)
  0.299000   0.000000   0.299000 (  0.299000)
  0.310000   0.000000   0.310000 (  0.310000)

Whoa! What the heck is going on here? In this case, JRuby's compiler has been hacked to turn recursive "functional calls", i.e. calls to an implicit "self" receiver, into direct calls. The logic behind this is that if you're calling the current method from the current method, you're always going to dispatch back to the same piece of code...so why do all the dynamic call gymnastics? This fits a last piece into the JVM inlining-optimization puzzle, allowing mostly-recursive benchmarks like Takeuchi to inline more of those recursive calls. What do we lose? Well, I'm not sure yet. I haven't done enough testing of this optimization to know whether it breaks Ruby in some subtle way. It may work for 90% of cases, but fail for an undetectable 10%. Or it may be something we can determine statically, or something for which we can add an inexpensive guard. Until I know, it won't go into a release of JRuby, at least not as a default optimization. But it's out there, and I believe we'll find a way.
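One concrete way the hack could break Ruby, for the record: a recursive call is still a dynamic dispatch, so redefining the method mid-recursion must be honored. A direct call wired to the original body would not notice. A contrived sketch:

```ruby
# Under standard dispatch, the recursive call below re-looks-up
# `shrink` on every iteration, so the mid-flight redefinition wins.
# A compiler that turns self-calls into direct calls would keep
# running the old body and return :old_done instead.
def shrink(n)
  return :old_done if n.zero?
  if n == 3
    # Redefine ourselves while still recursing.
    Object.send(:define_method, :shrink) { |_| :new_done }
  end
  shrink(n - 1)  # a "functional call": implicit self receiver
end

answer = shrink(5)  # standard Ruby dispatch: :new_done
```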

It is also, incidentally, only a few times slower than a pure Java version of the same benchmark, provided Java is using all boxed numerics too.

The truth is that it's actually very easy to make small snippets of Ruby code run really fast, especially if you optimize for the benchmark. But is it useful to do so? And can we extrapolate eventual production performance from these early numbers?