Even though last week’s headline claimed it was about weeks 30 and 31, the 31st week was actually this last one! D’oh! Calendars are hard 🙂

Anyway, here’s your mostly-weekly fix of changes:

Jnthn found a bunch of optimization opportunities in the optimizer (hehe), making it run quite a bit faster.

Another big win was jnthn finding stuff we did repeatedly, or stuff we did just to throw the results away again:

- When trying to match an <?after …>, the regex compiler would flip the target string every time the <?after> was hit. Now the flipped target string gets cached.

- Every time an NFA got evaluated (which happens whenever we reach a “|” alternation in a regex or a proto token that has multiple implementations, i.e. very often), we took a closure. Jnthn re-wrote parts of the code that works with the NFA cache and managed to shave a dozen GC runs off our CORE.setting build!

- Another 800,000 allocations went away after jnthn let the alternation index array be generated statically rather than every time the alternation is hit.

- Improved handling of sink (which is what handles things like failure objects being thrown and side effects being invoked under certain conditions; the only thing you need to know is that it gets emitted a whole lot during AST generation) leads to smaller ASTs and gives our code-gen opportunities to output better code.

- … and even more improvements I didn’t mention!
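To make the alternation and lookbehind cases concrete, here’s a tiny Perl 6 sketch of the constructs involved (the regexes are my own illustrations, not taken from the compiler or the spectests):

```perl6
# An alternation: each time the | is reached, an NFA is evaluated
# to decide which branches could possibly match at this position.
say "cat food" ~~ / cat | dog /;         # matches "cat"

# A lookbehind: to check <?after foo>, the engine matches against a
# flipped (reversed) copy of the target string; that flipped copy
# used to be rebuilt on every attempt and is now cached.
say "foobar" ~~ / <?after foo > bar /;   # matches "bar"
```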

A little piece of work jnthn suggested I do was to let our bytecode specializer mark a guard (like “please make sure this object is concrete” or “please make sure the type of this object is $FOO”; a failing guard causes a deoptimization) as necessary only when the optimization that relies on that guard was actually performed. Unfortunately, we don’t have before/after measurements of how often specialized bytecode deoptimized …

On MoarVM, simple exception throws can now be turned into simple goto operations. This also includes things like redo/continue/last in loops.
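As a user-level illustration (my own example, not from the patch), loop controls like these are among the simple exception throws that can now compile down to cheap gotos instead of full exception handling:

```perl6
for 1..5 {
    next if $_ %% 2;   # skip even numbers
    last if $_ > 3;    # leave the loop early once past 3
    say $_;            # prints 1, then 3
}
```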

I implemented specializations for the smart-numify and smart-stringify operations on MoarVM. Something that happens especially often is numifying lists or hashes, which now turns into a simple elems op (which our specializer can and will simplify further).
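For example (illustrative, at the Perl 6 level), numifying an array with prefix + used to go through the general smart-numify path; for lists and hashes it can now be specialized down to the equivalent of an elems call:

```perl6
my @a = <one two three>;
say +@a;         # 3 — smart-numifying an array gives its element count
say ~@a;         # "one two three" — smart stringify
say @a.elems;    # 3 — what the specialized op effectively computes
```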

I also implemented a few very simple ops for the JIT, so that brrt could spend more time on the hard bits.

Another thing jnthn worked on is making allocations and deserializations lazy. This improves the start-up time of the REPL and of all programs, and also reduces the memory usage of Perl 6 programs that don’t use much of the CORE setting, which is probably most of them.

Vendethiel (Nami-Doc on github) wrote a nice guide to Perl 6 that is now included on learnxinyminutes. Kudos!

Froggs and lizmat worked further on CompUnitRepo and module-loading-related things.

A well-timed “well volunteered!” motivated btyler to start a C binding for the jansson JSON parsing/manipulating/outputting library. In very performance-critical situations, especially when you only use parts of a very big JSON document, this should give you better performance than JSON::Tiny. That said, JSON::Tiny is part of the benchmark suite we run all the time, meaning we’ll react swiftly to performance regressions and try to figure out what makes it slower than it has to be.

Now here are some numbers comparing today’s state with the state at the time of the last release:

30 seconds for a NQP build (used to be 37)

57 seconds for a Rakudo build (used to be 1:15)

0.02s and 13 MB maxrss to fire up a Rakudo REPL (used to be 0.04s and 35 MB)

0.2s and 114 MB maxrss for a “say 1” (used to be 0.27s and 135 MB)

A full spectest run with 4 test jobs now takes 584.95user 75.88system 3:44.71elapsed at 294%CPU (used to be 765.43user 89.06system 4:40.32elapsed).

There are still more opportunities to deserialize things lazily upon first use in MoarVM, which ought to give us lower baseline memory usage and spread startup time out “more evenly” (and remove it entirely where possible).

The baseline memory usage of our Perl 6 compilers has always been something that annoyed me. MoarVM already gave us about a 2x memory saving compared to Parrot, and now we’ve started work on making the memory usage better still. I’m excited! 🙂

Sadly, brrt has been plagued by very stealthy bugs all week, so progress was kind of slow. What he did manage to finish is support for indexat and jumplist, which are needed by most regexes. Also, sp_findmethod, findmeth, findmeth_s, indexat, getdynlex, … are done now, but temporarily commented out for bug-hunting purposes.

That’s it from me for this week. I’m already looking forward to the cool stuff I’ll be able to highlight next week 🙂