Last week, a lot of optimization work was done. While the week before was mostly about making NQP faster, this week (or should I call it “the last week”?) shifted the focus to optimizing pieces of Rakudo instead.

The spesh branch of MoarVM, which contains the bytecode specializer, has been merged into the master branch.

A long-standing inconsistency, where protos were skipped or not depending on whether they had been compile-time inlined, has been fixed.

On MoarVM, the previous fix also enables an optimization that makes multiple dispatch cheaper. This will come to the JVM at some point, too.

Hash assignment used to take a very indirect route to get values into the actual hash, which caused lots of unnecessary work in the most general case. There is now an “assign_key” method that works much faster.

Array assignment got a similar improvement involving an “assign_pos” method.
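
To illustrate the shape of this change, here is a minimal Python sketch (not Rakudo source; the class and method names are hypothetical) contrasting a generic, indirect assignment path, which allocates an intermediate container per write, with a specialized direct method in the spirit of “assign_key”:

```python
class SlowHash:
    """Generic path: every assignment goes through a container object."""
    def __init__(self):
        self._store = {}

    def at_key(self, key):
        # Returns a temporary container that knows how to write back.
        return _Container(self._store, key)

class _Container:
    def __init__(self, store, key):
        self._store, self._key = store, key

    def store_value(self, value):
        self._store[self._key] = value

class FastHash(SlowHash):
    """Specialized path: assign_key writes straight into the backing store."""
    def assign_key(self, key, value):
        self._store[key] = value

h = FastHash()
h.assign_key("a", 1)           # direct: one call, no temporary container
h.at_key("b").store_value(2)   # indirect: allocates a container first
print(h._store)                # {'a': 1, 'b': 2}
```

Both paths end up with the same hash contents; the direct method simply skips the per-assignment allocation.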

Using some subs on lists, like push and unshift, used to come with a lot of extra overhead: a slurpy list was created even if you only passed a single argument. This rather common case is now much cheaper.
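
The cost being avoided is analogous to Python’s `*args`, which materializes a tuple on every call even when only one value is passed. A sketch (Python analogy, not Rakudo code) of the single-argument fast path:

```python
def push_slurpy(target, *values):
    # *values builds a tuple on every call, even for a single argument -
    # this is the overhead the slurpy path paid.
    target.extend(values)

def push_one(target, value):
    # Single-argument fast path: no intermediate collection needed.
    target.append(value)

items = []
push_slurpy(items, 1)   # allocates a one-element tuple first
push_one(items, 2)      # goes straight to append
print(items)            # [1, 2]
```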

Rakudo’s optimizer now removes some instances of $*DISPATCHER, $_, $/, and $! if they are not used.

… and many more little improvements all over the place

I’ve actually posted a benchmark run with perl5, nqp-moarvm and the 2014.03 releases of rakudo-parrot, rakudo-moar and rakudo-jvm to compare them against the master branch at that time. You can find it here. It’s a bit of a mess, so I’ll point out a few things:

The graphs have log-log scales. One step to the right means twice the amount of work, one step up means half the time taken.

When hovering over a data point, it will display “$foo x times slower than fastest”. The “fastest” it refers to is the highest data point anywhere in the graph. Thus, if the graph has a little spike on the far left, or the amount of work doesn’t scale linearly with the “scaling factor” (as in the 2d visit tests or the man-or-boy tests), you can’t rely on that number. You’ll have to count grid lines instead (or … you know … submit a pull request to the perl5 script that generates the benchmark plots).

Clicking on a name in the legend will toggle visibility of the corresponding line. With so many lines, you can easily lose track of what’s going on.
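
Counting grid lines on a log-log plot translates into ratios via powers of two, as this tiny sketch (illustrative only, not part of the benchmark tooling) shows:

```python
def ratio_from_steps(steps):
    # Each grid step doubles the work (x axis) or halves the time (y axis),
    # so N steps apart means a factor of 2**N.
    return 2 ** steps

# Two grid lines apart vertically: one run took 4x as long as the other.
print(ratio_from_steps(2))   # 4
```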

There are, of course, not only performance improvements to be found:

The implementation and semantics of “winner” have been refined further by lizmat.

Mouq has fixed both the samespace method for Str and the ss/// regex form.

lizmat has started implementing the “is cached” routine trait, which automatically caches return values based on all arguments.
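
The idea behind such a trait is plain memoization: cache return values keyed on all arguments, so repeated calls with the same arguments skip the body. A Python sketch of the same concept (not the Rakudo implementation):

```python
import functools

calls = 0

@functools.lru_cache(maxsize=None)
def expensive(x, y):
    # The body only runs for argument combinations not seen before.
    global calls
    calls += 1
    return x ** y

expensive(2, 10)
expensive(2, 10)   # served from the cache; the body ran only once
print(calls)       # 1
```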

SetHash, BagHash and MixHash now have minpairs and maxpairs methods that give you the values with the most or fewest occurrences, together with the number of occurrences.
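
The same idea can be expressed with Python’s Counter (an analogy, not the Rakudo API): collect the (value, count) pairs whose count is the maximum or minimum.

```python
from collections import Counter

bag = Counter("abracadabra")   # a:5, b:2, r:2, c:1, d:1
most = max(bag.values())
fewest = min(bag.values())

maxpairs = [(k, v) for k, v in bag.items() if v == most]
minpairs = [(k, v) for k, v in bag.items() if v == fewest]

print(maxpairs)   # [('a', 5)]
print(minpairs)   # [('c', 1), ('d', 1)]
```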

Using threads on MoarVM used to cause a messy shutdown, as one thread would go ahead and destroy the VM object while other threads were still trying to access it. Now it shuts down properly and just lets the OS free all the resources.

This has been the case for a somewhat longer time, but I forgot to mention it: linenoise is now in use on MoarVM again, so the REPL actually has some nice line editing.

In the spectest repository, the following things have happened:

dwarring has added more and more tests based on the advent calendar blog.

moritz has fudged more failing tests and created RT tickets to go with them.

Mouq has added a couple of tests for RT tickets.

And here’s a thing I forgot to mention in last week’s post:

retupmoca created a module named “LibraryMake”, which greatly eases shipping native C code with your Perl 6 module. It comes with an example Makefile.in that you can adapt to your needs, and it will fill in all the necessary flags and values to build a compatible library. After that, it helps you find exactly where the library was installed.

For more information about what’s been happening with MoarVM recently, you can also check out jnthn’s blog post about the topic.

As you can see in his post, jnthn will be ensuring we’ll finally get a multi-backend Rakudo * release this month. Color me excited!

Something for you to try

Currently, Rakudo sorts lists by generating a list of indices, sorting this index list based on the original values and then using list subscript splicing to re-order the objects into the correct order. This was once needed so that we could use Parrot’s built-in sort function, but on MoarVM and JVM this is quite a lot of wasted effort. If you (yes you!) would like to contribute a little something to Rakudo, find us on the IRC channel and ask for directions.
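
The indirect scheme described above can be sketched in a few lines of Python (an illustration of the technique, not Rakudo’s actual code): build an index list, sort the indices by the original values, then reorder by subscripting; versus just sorting the values directly.

```python
values = [3.1, 1.2, 2.7, 0.5]

# Indirect: the historical approach (extra index list plus a reorder pass).
order = sorted(range(len(values)), key=lambda i: values[i])
indirect = [values[i] for i in order]

# Direct: sort the values themselves, no intermediate index list.
direct = sorted(values)

print(indirect == direct)   # True - same result, with less bookkeeping
```

Both produce the same ordering; the direct form simply skips allocating and sorting the index list, which is the wasted effort mentioned above.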