Planet Raku

Raku RSS Feeds

Roman Baumer (Freenode: rba #raku or ##raku-infra) / 2020-09-22T03:19:18

Published by liztormato on 2020-09-21T20:10:48

Votemaster Will Coleda has published the results of the first Raku Steering Council election. Thanks to everybody who has voted! The elected council members are (in alphabetical order of their last name):

Congratulations! Yours truly assumes that after an initial meeting, the Council will come up with a statement on how to proceed with the future of the Raku Programming Language.

Another Raku Survey

JJ Merelo has announced the results of the more general Raku User Survey that has been running in the past weeks: the raw CSV, and a preliminary PDF. Kudos to JJ Merelo for taking care of this for yet another year!

The DB of Unicode

Daniel Sockwell dives into the Unicode internals of the Raku Programming Language and finds out in more detail that Raku is pretty unique in that respect. In A deep dive into Raku’s Unicode support (/r/rakulang comments).

Of Proxy and Containers

Vadim Belman elaborates on the Proxy Container in the Advanced Raku for Beginners series: in other words, how you can override the FETCH and STORE methods on containers.
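The technique can be sketched with Raku’s built-in Proxy type, which is the standard way to intercept reads and writes on a container. The sub name and the note message below are just for illustration:

```raku
# A minimal sketch: Proxy.new(:FETCH, :STORE) lets us override what
# happens when the container is read from or assigned to.
sub traced-var() is rw {
    my $storage;
    Proxy.new(
        FETCH => method ()     { $storage },
        STORE => method ($new) { note "STORE: $new"; $storage = $new },
    )
}

my $x := traced-var;   # bind, so the Proxy itself is the container
$x = 42;               # notes "STORE: 42" on STDERR
say $x;                # 42
```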

Many Pearls

Andrew Shitov wrote three episodes in the Pearls of Raku series this week:

Weekly Challenge

Weekly Challenge #79 is available for your perusal. A full review of the Raku solutions of Challenge #77 (including a video run-through) was done by Andrew Shitov.

Core Developments

Most of the core developments have been happening in the rakuast branch. The 2020.09 Rakudo Compiler Release has been postponed to iron out some configuration issues. Meanwhile, in the main branch:

Nicholas Clark added comments and ASCII diagrams explaining the new way hashes are implemented in MoarVM, which revealed a very nice piece of hidden internals information to yours truly.

Stefan Seifert fixed an issue with dumping the contents of a P6opaque object in MoarVM.

Patrick Böker fixed another set of build issues.

Vadim Belman started a CAVEATS file for all platform dependent notes. And Will Coleda expanded on that.

Timo Paulssen has provided an AppImage for the Rakudo Compiler 2020.08.2 release. And possibly even more exciting for Linux users: an AppImage for moarperf, the full MoarVM performance profiler.

Jonathan Worthington improved the specialization of boxed Num values.

This week’s new Pull Requests:

Please check them out and leave any comments that you may have!

Questions about Raku

Raku finally crossed the 1500-question mark on StackOverflow. Keep those questions coming! Meanwhile:

Meanwhile on Twitter

Comments about Raku

Updated Raku Modules

Sparrow6 by Alexey Melezhik.

ake by Aleks-Daniel Jakimenko-Aleksejev.

BigRoot by Julio.

App::Mi6 by Shoichi Kaji.

Inline::Python by Stefan Seifert.

FixedInt by Steve Schulze.

Winding down

The suspense was killing! Finally the election results are in. Yours truly is happy to have been selected by more than 75% of the voters. It is good to know that so many people think you’re doing a good thing for the Raku Programming Language. Thank you! I congratulate the other elected members and look forward to working constructively with them!

Finally, again and again, please don’t forget to stay healthy and to stay safe. Next week there will be more news about Raku. Until then!

Published by liztormato on 2020-09-14T14:55:08

Want to quickly learn about the fundamentals of Raku with a book? Raku Fundamentals by Moritz Lenz has just arrived on the physical bookshelves as well as on the virtual ones. Formerly known as “Perl 6 Fundamentals”, the second edition has been completely updated and has a chapter on Cro web services added. Be sure to leave a review when you have become the owner of a copy!

Introduction Videos

Out of the blue, a very nice set of introductory videos on the Raku Programming Language has appeared on the interwebs. Kudos to Alex Merced for making these, and to William Michels for the tip!

Steering Council Election

You have until midnight UTC on 20 September 2020 to cast your vote in the first official Raku Steering Council election. Fourteen candidates to fill 7 positions: and here they are in alphabetical order of their last name (follow the link to find out why they would like to be on the RSC):

Please follow the instructions on how to cast your ballot!

Give Peas A Chance

Daniel Sockwell elaborates on how the difference between pod and pod6 is like the difference between JSON and JavaScript objects. In Peas in a Pod6 (/r/rakulang comments).

Errors International

L’Alabameñu has started a project to translate Raku’s error messages into various natural languages other than English. The associated module is not (yet) in the module ecosystem, but feels interesting enough to start mentioning already.

Weekly Suspects

Wenzel P. P. Peppmeyer wrote about releasing on Github, and Andrew Shitov revisited weekly challenges of the past with an interesting range of alternate programming languages.

Weekly Meetings

For quite a few months now, Joseph Brenner has been running a weekly Raku Study Group in San Francisco. Sadly, yours truly had not noticed that these events are online, so you don’t actually have to travel to San Francisco to be able to attend. So be sure to check out the upcoming events for details on the next meeting!

Weekly Challenge

Weekly Challenge #78 is available for your perusal, and Andrew Shitov was quick to follow that up with their solutions and found time to do a full review of the Raku solutions of Challenge #76.

Core Developments

Most of the core developments have been happening in branches on MoarVM and Rakudo, specifically in the rakuast branch. Meanwhile, in the main branch:

Patrick Böker fixed a problem with writing profile files on relocatable builds of Rakudo in the main branch.

Alexander Kiryuhin updated a helper script for doing Rakudo releases that was originally developed by Aleks-Daniel Jakimenko-Aleksejev.

This week’s new Pull Requests:

Please check them out and leave any comments that you may have!

Questions about Raku

Meanwhile on Twitter

Comments about Raku

New Raku Modules

Math::Libgsl::DigitalFiltering by Fernando Santagata.

Math::Roman by Itsuki Toyota.

Hyperscript by Jack Miles.

BigRoot by Julio.

Acme::OwO by Kane Valentine.

System::Stats::DISKUsage by Ramiro Encinas.

Updated Raku Modules

Pod::Literate by Daniel Sockwell.

Font::FreeType, PDF::Font::Loader by David Warring.

Matrix::Client by Matias Linares.

Router::Right by Konstantin Narkhov.

Winding down

A new book, some new videos, new modules, new blog posts and many people talking about Raku. A quiet week again, indeed :-). Please, don’t forget to stay healthy and to stay safe. Check again next week for more news about the Raku Programming Language!

Published by Vadim Belman on 2020-09-11T00:00:00

After a month and a half full of many events, I finally got time to complete one more article for the Advanced Raku For Beginners series.

Frankly, I’m not happy about it. It feels to me that my less-than-perfect English has gotten even worse; and not all topics and quirks are covered. But at least I’m getting back into these waters. So, just let it be. I’m warming up. And I hope to advance onto another subject soon.

Published by gfldex on 2020-09-10T19:48:28

In my last post I lamented the lack of testing metadata. Just a few days later it got in my way when I played with creating releases on github. My normal workflow on github is to commit changes and push them to trigger travis. When travis is fine I bump the version field in META6.json so the ecosystem and zef can pick up the changes. And there is a hidden trap. If anybody clones the repo via zef just before I bump the version, there will be a mismatch between code and version. CPAN doesn’t have that problem because there is always a static tarball per version. With releases we can get the same on github.

It’s a fairly straightforward process.

build a tag-name from the github-project-name and a version string

generate the URL the tarball will get in the end (based on tag-name) and use that as source-url in the META6.json

commit the new META6.json locally

create a git tag locally

push the local commit with the changed META6.json to github

push the git tag to github

use the github API to create the release, which in turn creates the tarball
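The steps above can be sketched in Raku. This is a hedged sketch, not META6::bin’s actual code: the owner/repo names come from the example URL further down, and GITHUB_TOKEN is an assumed environment variable.

```raku
# Sketch of the release workflow, assuming the ecosystem module
# JSON::Fast and a GITHUB_TOKEN environment variable.
use JSON::Fast;

my $owner   = 'gfldex';
my $repo    = 'raku-release-test';
my $version = '0.0.19';
my $tag     = "$repo-$version";
my $tarball = "https://github.com/$owner/$repo/archive/$tag.tar.gz";

# point source-url in META6.json at the future release tarball
my %meta = from-json 'META6.json'.IO.slurp;
%meta<source-url> = $tarball;
'META6.json'.IO.spurt: to-json %meta;

# commit, tag and push
run <git commit META6.json -m>, "release $tag";
run <git tag>, $tag;
run <git push>;
run <git push origin>, $tag;

# creating the release via the GitHub API makes github generate the tarball
run 'curl', '-s', '-X', 'POST',
    '-H', "Authorization: token %*ENV<GITHUB_TOKEN>",
    "https://api.github.com/repos/$owner/$repo/releases",
    '-d', to-json %( tag_name => $tag );
```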

This is so simple that I immediately automated that stuff with META6::bin so I can mess it up. (Not released yet, see below.)

The result is a URL like so: https://github.com/gfldex/raku-release-test/archive/raku-release-test-0.0.19.tar.gz. When we feed that to zef, it will check that the version is not already installed and then proceed to test and install.

And there is a catch. Even though zef is fine with the URL, Test::META will complain because it doesn’t end in .git and fail the test. This in turn will stop zef from installing the module. We added that check to make sure zef always gets a proper link to a clone-able repo for modules hosted on github. This assumption is clearly wrong and needs fixing. I will send a PR soon.

Having releases on github (other gitish repo-hosting sites will have similar facilities or will get them) can get us one step closer to a proper RPAN. Once I get my first module into the ecosystem this way I will provide an update here.

Published by Vadim Belman on 2020-09-10T00:00:00

Just a reminder to anybody passing by this blog that the election to Raku Steering Council is going on now and will be taking place until Sep 20. More details can be found in the voting form and the original announcement.

I have cast my ballot.

Published by liztormato on 2020-09-07T21:12:46

The coming two weeks will allow all Rakoons to vote in the first official Raku Steering Council election. Fourteen candidates to fill 7 positions: and here they are in alphabetical order of their first name:

Please follow the instructions on how to cast your ballot: you have until midnight UTC on 20 September 2020 to cast your vote!

Semiliterate weaving

Daniel Sockwell delves further into Raku in an inspiring blog post: Weaving Raku: semiliterate programming in a beautiful language. Taking the Raku Programming Language to as yet unexplored corners of its capabilities (/r/rakulang comments).

August Report

The August report of the Raku Development Grant of Jonathan Worthington was published: not a lot happened on it in August, but September has more time available. In related news, Makoto Nozaki announced the proper launch of the Raku Development Fund.

Weekly Suspects

Wenzel P. P. Peppmeyer wrote about finding out which modules are added / updated in the ecosystem.

Weekly Challenge

After careful consideration, yours truly has decided to no longer list the Raku solutions of the Weekly Challenge in the Rakudo Weekly News. Turns out that the WordPress editor gets very confused about changing URLs in existing, but copied posts. Last week had several wrong links in the overview, and nobody noticed or took the trouble to inform yours truly. Clearly, this is not a very heavily used feature of the Rakudo Weekly News.

Whenever there is a weekly review of Raku solutions, these will be mentioned! Therefore, please check out Andrew Shitov‘s review of Raku solutions to Weekly Challenge #75.

Core Developments

Daniel Lathrop fixed several textual documentation issues in nqp.

Stefan Seifert and Clifton Wood fixed a race-condition in pre-compilation, now allowing for multiple modules to be pre-compiled concurrently. And fixed some associated issues with regards to testing.

L’Alabameñu fixed the gist method on hashes by adding a cmp candidate for comparing Code objects.

Elizabeth Mattijsen introduced a proper Allomorph class from which all allomorphic objects inherit (IntStr, NumStr, RatStr and ComplexStr), which makes dispatching for these types a lot simpler, and should better allow for custom allomorphic classes in the future.

Daniel Sockwell added deprecation warnings for the parse-names function.

And some other minor fixes and updates.
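For those unfamiliar with the allomorphic types the new Allomorph class unifies, a small demonstration (the values are illustrative; angle-bracket quoting of a number is the usual way to construct them):

```raku
# An allomorph is simultaneously a numeric type and a Str.
my $answer = <42>;
say $answer ~~ Int;   # True
say $answer ~~ Str;   # True
say $answer.^name;    # IntStr
say <2.5>.^name;      # RatStr
say <1e3>.^name;      # NumStr
```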

This week’s new Pull Requests:

Please check them out and leave any comments that you may have!

Questions about Raku

Meanwhile on Twitter

Meanwhile on perl6-users

Comments about Raku

New Raku Modules

Updated Raku Modules

Math::Libgsl::Elementary by Fernando Santagata.

Font::FreeType by David Warring.

Auth::SCRAM by Marcel Timmerman.

Email::MIME by Rod Taylor.

App::Mi6 by Shoichi Kaji.

Matrix::Client by Matías Linares.

Winding down

A bit of a quiet week, which in many ways feels like the calm before the storm. It still bears repeating: don’t forget to stay healthy and to stay safe. Please check again next week for more news about the Raku Programming Language!

Published by gfldex on 2020-09-05T21:05:31

I didn’t know so I asked her.

15:25 <gfldex> How do you gather info for "Updated Raku Modules"?
17:40 <lizmat> https://twitter.com/raku_cpan_new
17:59 <gfldex> thx
23:06 <lizmat> well volunteered :-)

That’s what you get for being nosey. So off I went into the land of mostly undocumented infrastructure.

The objective is simple. Generate two lists of modules: the first contains all modules that are newly added to the ecosystem, the second all updated modules. For both, the timespan of interest is from Monday of last week until Monday of this week. Currently we have two collections of META files: our ecosystem and CPAN. The latter does not know about META6 and that sucks. But we will manage. Conveniently, both lists are provided by ugexe at github. Since there are commits, we can travel back in time and get a view of the ecosystem from when we need it. To do so we first need to get a list of commits.

sub github-get-remote-commits($owner, $repo, :$since, :$until) is export(:GIT) {
    my $page = 1;
    my @response;
    loop {
        my $commits-url = $since && $until
            ?? „https://api.github.com/repos/$owner/$repo/commits?since=$since&until=$until&per_page=100&page=$page“
            !! „https://api.github.com/repos/$owner/$repo/commits“;
        my $curl = Proc::Async::Timeout.new('curl', '--silent', '-X', 'GET', $commits-url);
        my $github-response;
        $curl.stdout.tap: { $github-response ~= .Str };
        await my $p = $curl.start: :$timeout;
        @response.append: from-json($github-response);
        last unless from-json($github-response)[0].<commit>;
        $page++;
    }
    if @response.flat.grep(*.<message>) && @response.flat.hash.<message>.starts-with('API rate limit exceeded') {
        dd @response.flat;
        die „github hourly rate limit hit.“;
    }
    @response.flat
}

my @ecosystems-commits = github-get-remote-commits(‚ugexe‘, ‚Perl6-ecosystems‘, :since($old), :until($young));

Now we can get a whole bunch of ex-JSON which was compiled from the META6.json and *.meta files. The two file formats are not compatible: the auth field of a CPAN module will differ from the auth of the upstream META6.json, there is no authors field, and no URL to the upstream repo. Not pretty, but fixable, because tar is awesome.

my @meta6;
px«curl -s $source-url» |» px<tar -xz -O --no-wildcards-match-slash --wildcards */META6.json> |» @meta6;
my $meta6 = @meta6.join.chomp.&from-json;

(Well, GNU tar is awesome. BSD tar doesn’t sport --no-wildcards-match-slash, and there is one module with two META6.json files. I think I can get around this with a two-pass run.)

This works nicely for all but one module. For some reason a Perl 5 module sneaked into the list of Raku modules on CPAN. It’s all just parsed JSON so we can filter those out.

my @ecosystems = fetch-ecosystem(:commit($youngest-commit)).grep(*.<perl>.?starts-with('6'));

Some modules don’t contain an auth field, some have an empty name. Others don’t have the authors field set. We don’t enforce proper meta data, even though it’s very easy to add quality control: just use Test::META in your tests. Here is an example.
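A minimal sketch of such a quality-control test, assuming the ecosystem module Test::META is installed; meta-ok is its documented entry point. Save it as t/meta.t in your distribution:

```raku
# Fails the test suite if META6.json is missing or has broken mandatory
# fields (name, version, auth, a usable source-url, ...).
use Test;
use Test::META;

meta-ok;

done-testing;
```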

I can’t let lizmat down though and github knows almost all authors.

sub github-realname(Str:D $handle) {
    my @github-response;
    my $url = 'https://api.github.com/users/' ~ $handle;
    px«curl -s -X GET $url» |» @github-response;
    @github-response.join.&from-json.<name>
}

If there is more than one author they won’t show up with this hack. I can’t win them all. I’m not the only one who suffers here. On modules.raku.org at least one module shows up twice with the same author. My guess is that this happens when a module is published both in our ecosystem and on CPAN. I don’t know what zef does if you try to nail a module down by author and version with such ambiguity.

I added basic HTML support and am now able to give you a preview of next week’s new modules.

New Modules

Pod::Weave by Daniel Sockwell.

Pod::Literate by Daniel Sockwell.

System::Stats::NETUsage by Ramiro Encinas Alarza.

Updated Modules

If your module is in the list and your name looks funny, you may want to have a look into the META6.json of your project.

Yesterday we had a discussion about where to publish modules. I will not use CPAN with the wrong language. Don’t get me wrong. I like CPAN. You can tie an aircraft carrier to it and it won’t move. But it’s a Comprehensive Perl Archive Network. It’s no wonder it doesn’t like our metadata.

Kudos to tony-o for taking on a sizeable task. I hope my lamentation is helpful in this regard.

The script can be found here. I plan to turn it into a more general module to query the ecosystem. Given that I spent the better part of a week on a 246-line file, the module might take a while.

Published by liztormato on 2020-08-31T20:49:36

Less than a week to go until the candidacy period for the first election of the Raku Steering Council ends (at midnight UTC on 6 September 2020, to be precise). So far, ten people have announced their candidacy, which is great to see! Yours truly feels that, to make the Raku Steering Council truly reflect the Raku userbase, there should be more women, more younger people and more people who do not have English as their first language. If you feel you belong to these groups, and you want to be a part of the future of Raku, please consider adding your candidacy! If you have any questions about the process, please feel free to open an issue!

Testing and conditional compilation

Daniel Sockwell explains how they have fallen in love with the Raku Programming Language in an extensive blog post about testing and conditional compilation. It’s really all about applying Rust’s approach to organizing unit tests to Raku. And how the DOC phaser can be appropriated to achieve that goal. An inspiring piece of work that will surely have its influence on the development of Raku (/r/rakulang, /r/rust comments).

Comma Complete Again

Alexander Kiryuhin informs us that there is a new release of Comma Complete, the full-featured IDE for Raku. Please note that by buying a copy of Comma Complete, you will also be helping the development of the Comma Community edition, and will help with implementing the Roadmap.

Ecosystem grant not approved

Sadly, the Raku ecosystem grant proposal did not make it in the July 2020 round of The Perl Foundation grants. Suggestions for an improved grant proposal were made (/r/rakulang comments).

Raku-Utils

Alexey Melezhik has launched a proposal to wrap existing command-line tools into Raku functions.

Weekly Suspects

Gábor Szabo takes another, deeper look at the Raku REPL. Wenzel P. P. Peppmeyer wrote about tripping over variables. And another nice blog post by Andrew Shitov in the Pearls of Raku series: Issue 9: toss a coin, topic vs temporary variables (/r/rakulang comments).

Weekly Challenge

The entries for Challenge 75 that have Raku solutions:

Andrew Shitov reviewed all of the Raku solutions of Challenge #74 with a video version for Task #1 and Task #2. The Weekly Challenge #76 is up for your perusal!

Core Developments

This week saw the merging of the new hash implementation work by Nicholas Clark, which could make your program more than 10% faster! In other core developments:

Patrick Böker fixed raku -V, a problem that was discovered shortly after the 2020.08.1 compiler release, causing a 2020.08.2 release.

Timo Paulssen made the moar --dump functionality ignore dependency information added by Rakudo, thus making it work on Rakudo bytecode out of the box. They also worked on various aspects of memory allocation in MoarVM.

Daniel Sockwell fixed the signature of the SetHash.set / SetHash.unset methods.

This week’s new Pull Requests:

Please check them out and leave any comments that you may have!

Questions about Raku

Meanwhile on Twitter

Meanwhile on perl6-users

Comments about Raku

New Raku Modules

Win32::DrivesAndTypes by Ramiro Encinas.

Metropolis by Paweł Pabian.

Test::Inline by Matthew Stuckwisch.

Math::Libgsl::MovingWindow by Fernando Santagata.

Updated Raku Modules

Native::Packing by David Warring.

Gnome::Gtk3, XML::Actions, Pod::Render by Marcel Timmerman.

Proc::Async::Timeout, Pod::To::BigPage by Wenzel P.P. Peppmeyer.

Fcntl by Michael Stemle.

Math::Libgsl::Matrix, Math::Libgsl::Constants by Fernando Santagata.

App::Mi6 by Shoichi Kaji.

Raku::Pod::Render by Richard Hainsworth.

Winding down

A week with exciting developments, a love story, and a new Comma Complete release! And thanks to Wenzel P.P. Peppmeyer, a complete list of updated Raku modules!

Yours truly keeps repeating: don’t forget to stay healthy and to stay safe. Please check again next week for more news about the Raku Programming Language!

Published by liztormato on 2020-08-31T20:03:57

A special edition of the Rakudo Weekly News to inform you all of an exciting development in the world of the Raku Programming Language.

The implementation of hashes in MoarVM has been replaced! Nicholas Clark wrote a new hash implementation for MoarVM in the past months, which uses less memory and less CPU.

MoarVM had been using the open source uthash library since July 2012. At the time it was the “Go To” hashing library for C – full of features and flexible – so it was the logical choice. Over the years it has been modified to remove the features not needed by MoarVM, and hence the associated code and overhead. But the basic design was necessarily unchanged – it remained the “classic” approach to hash tables – “collision chaining”. The internal structure is an array of linked lists – just like Perl has done since Perl 1.

This structure is simple to implement, but ultimately dates back to an era when CPUs didn’t even have caches, so it doesn’t fare too well now that minimising cache misses is the key to performance.

The basic problem that hash tables need to solve is that they need a fast way to map from arbitrary keys not known in advance into something that is efficient for a CPU to manipulate – integers. You’d like to store each key (and its associated value data) in an array, but you can’t make the array infinitely big, so you always end up with a trade-off – sometimes more than one key maps to the same integer, and so you need code to handle this.

Collision chaining is the “obvious” way – a memory efficient way to store a variable number of keys that all need to sit at the same index in the array. Operations need to walk all keys at the same index linearly, but keep the list small (grow the array as needed) and the performance is still good.

However the downside is that the keys you need to walk linearly are not adjacent in memory, meaning you likely have cache misses. You could replace the linked lists with arrays (for more complexity, and higher delete costs), but you still have one pointer hop and potential cache miss.
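An illustrative Raku sketch of collision chaining (not MoarVM’s C code): the table is an array of buckets, and each bucket is a list of [key, value] entries that hashed to the same index. The toy hash function is an assumption for the example.

```raku
my constant N = 4;
my @buckets = [] xx N;                     # N distinct, empty buckets
sub toy-hash(Str $key) { $key.ords.sum % N }

sub ch-store(Str $key, $value) {
    my $bucket := @buckets[toy-hash $key];
    for @$bucket -> @entry {               # linear walk of the chain
        if @entry[0] eq $key { @entry[1] = $value; return }
    }
    $bucket.push: [$key, $value];
}

sub ch-fetch(Str $key) {
    for @(@buckets[toy-hash $key]) -> @entry {
        return @entry[1] if @entry[0] eq $key;
    }
    Nil
}

ch-store('one', 1);
ch-store('two', 2);   # collides with 'one' in this toy table
say ch-fetch('one');  # 1
```

Each lookup that hits a chain hops from entry to entry, and those entries are generally not adjacent in memory.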

With the increasing importance of staying CPU cache-friendly, a different approach to collision resolution is now better: open address hashing.

This is roughly equivalent to using a larger array than collision chaining, and eliminating the linked lists. While this reduces the collisions, it doesn’t eliminate them, so you need some other strategy to handle them. Usually this involves a strategy for storing the key/value pair in an array index close to the correct location, and searching all locations where the key might be until either it is found or the possible locations are exhausted.

Currently one of the most efficient approaches to this is “Robin Hood” hashing. On updates, it moves entries around to minimise the distance of all keys to their ideal locations, not just the key added or deleted. Usefully, this increases the chance of CPU cache hits during lookups.
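The displacement rule can be sketched in a few lines of Raku. This is an illustrative toy (fixed-size table, toy hash function, no growth or deletion), not MoarVM’s implementation: on a collision the entry that is further from its ideal slot wins, which keeps probe distances short for everybody.

```raku
my constant SIZE = 8;
my @slots = Any xx SIZE;            # each slot: a hash with key/value/ideal

sub rh-insert($key, $value) {
    my $entry = { :$key, :$value, ideal => $key.ords.sum % SIZE };
    my $i = $entry<ideal>;
    loop {
        without @slots[$i] {
            @slots[$i] = $entry;
            return;
        }
        # how far is each contender from its ideal slot (with wrap-around)?
        my $theirs = ($i - @slots[$i]<ideal>) % SIZE;
        my $ours   = ($i - $entry<ideal>)     % SIZE;
        # "rob the rich": the further-displaced entry takes the slot
        ($entry, @slots[$i]) = @slots[$i], $entry if $ours > $theirs;
        $i = ($i + 1) % SIZE;
    }
}

sub rh-fetch($key) {
    my $i = $key.ords.sum % SIZE;
    while @slots[$i].defined {
        return @slots[$i]<value> if @slots[$i]<key> eq $key;
        $i = ($i + 1) % SIZE;
    }
    Nil                             # an empty slot ends the probe chain
}
```

Because entries sit in one flat array and probing is a linear scan, lookups touch adjacent memory, which is exactly the cache-friendliness argument made above.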

Regarding hash iterators: one fun “gotcha” is that the Raku Programming Language (as well as Perl) both specify that it is acceptable to delete the key at the current hash iterator position without problems. Fortunately it was possible to adapt the insertion and iteration strategies to permit this without needing any extra complications or overhead.
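That guarantee can be demonstrated directly (a toy example; the key/value data is made up):

```raku
# Deleting the key at the current iteration position is explicitly safe.
my %h = a => 1, b => 2, c => 3, d => 4;
for %h.kv -> $k, $v {
    %h{$k}:delete if $v %% 2;   # drop entries with even values mid-iteration
}
say %h.keys.sort;               # (a c)
```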

All of this is now available in the main branch of the Rakudo compiler. If you have hash-intensive applications, please try it out with this (as yet unreleased) version of Rakudo. And let us know of your findings. Reports so far indicate 10% to 15% improvements in core setting compilation, as well as in roast testing. But of course, Your Mileage May Vary!

Published by gfldex on 2020-08-25T20:47:33

I was wondering where lizmat gets the info for changed modules from. She kindly answered with a link. I learned that updates to modules only show up when we put them on CPAN. Since most modules are hosted on github, changing a module there does not mean that the world will be informed. I believe a better way to do that would be to fetch the ecosystems (we have two) once a week and check if the version in any META6.json has changed.
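That idea boils down to diffing two weekly snapshots of META data. A sketch, with made-up snapshot hashes standing in for the real ecosystem META lists:

```raku
# Report modules whose version field changed between two snapshots.
sub changed-modules(@old, @new) {
    my %old = @old.map: { .<name> => .<version> };
    @new.grep: -> %meta {
        %old{%meta<name>}:exists and %old{%meta<name>} ne %meta<version>
    }
}

my @last-week = { name => 'Foo::Bar', version => '0.1.0' },
                { name => 'Baz',      version => '1.0.0' };
my @this-week = { name => 'Foo::Bar', version => '0.2.0' },
                { name => 'Baz',      version => '1.0.0' };

say changed-modules(@last-week, @this-week)».<name>;   # (Foo::Bar)
```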

Anyway, the reason I started this post is the documentation for FixedInt. It reads:

One major caveat to be aware of when using this module. The class instance may not be instantiated in a $ sigiled variable.



Raku $ sigiled scalar variables do not implement a STORE method, but instead do direct assignment; and there doesn’t seem to be any easy way to override that behaviour.



An implication of that is that classes that do implement a STORE method can not be held in a $ sigiled variable. (Well, they can, they just won’t work correctly. The first time you try to store a new value, the entire class instance will be overwritten and disappear.)

That is not true.

class Changing {
    has $!var handles <Str gist raku> is default(Nil);
    method STORE(\v) { note 'storing'; $!var = v }
    method FETCH { note 'fetching'; $!var }
}

constant term:<$a> := Changing.new;

$a = 42;
put $a;
# OUTPUT: storing
#         42

The problem here is that the docs talk about variables while Raku doesn’t really have any. It has containers with mutable content, and values, which are immutable. The language also has symbols that we can actually point at in source code. (Values we can point at in source code are called literals.) In the example above I created a symbol that looks like a variable but is a “reference” to a value of type Changing. The assignment operator cannot be overloaded, so we can protect immutable values. We can implement the method STORE instead. In fact we must, because there is no container in between the symbol $a and the instance of Changing. (We get X::Assignment::RO if we try to assign without a STORE.) Since Rakudo does not recognise Changing as a container, it will refuse to call FETCH.
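The X::Assignment::RO case is easy to see in isolation (a toy example, with a sigilless symbol bound directly to a value):

```raku
# Assignment to a bound, container-less value without a STORE method
# throws X::Assignment::RO at runtime.
my \answer = 42;      # binds the symbol directly to the value 42
try answer = 43;
say $!.^name;         # X::Assignment::RO
```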

Thundergnat wrote a neat module with very little effort. Quite useful to do calculations with integers of fixed bit size.

my \fixedint = FixedInt.new(:8bit);
say fixedint;          # 0
say fixedint -= 12;    # 244
say fixedint.signed;   # -12
say fixedint.bin;      # 0b11110100
say fixedint.hex;      # 0xF4

He achieved all that in just 36 lines of code. The trick is to force the user to bind, and thus avoid the creation of a container, while using STORE and FETCH to change the object in place. I doubt this is thread-safe. Also, the user of the module loses the ability to use some forms of quote interpolation, and data-dumper functions/modules will have less to display.

my \i = Int.new(10);
my $i = Int.new(10);
dd i;
dd $i;
# OUTPUT: 10
#         Int $i = 10

We don’t have to define many operators to make custom types work, because of the plenty of meta-programming that is done in CORE. Many of those constructs assume immutable values. Autothreading is planned and will make the use of ». “interesting”. Thundergnat did not specify a language version for his module. The module itself is not hard to make safe. But – actually BUT – this will change the interface for the user.

The flexibility of the language bites us here. Even though the docs explain the difference between the sigils, nobody is forced to read them. Also, nobody is forced to stick use v6.d at the beginning of a module. Please do so, or the compiler won’t be able to help you in the future. While naming immutable constructs quite often, the docs don’t explain why we use them. Concurrency and thus threading is very easy to add to a program. Testing it is hard.

I don’t have a solution to those problems but I’m pretty sure we need one or they will haunt us the next 100 years.

Published by gfldex on 2020-08-22T22:44:22

STDERR is often (ab)used for printing debug or status information. This can create clutter which in turn hides the important stuff. I want to print the essential stuff in exceptions in red unless a dynvar or environment variable is set.

class Explode is Exception {
    method message { put "$*dynvar is bad‼"; }
}

sub e() {
    await start { Explode.new.throw; }
    CATCH { default { put .message } }
}

my $*dynvar = 'foo';
e();
# OUTPUT: foo is bad‼

We can access a dynvar inside the method of an exception from within an exception handler. In Shell::Piping, error handling is a bit more involved. The biggest issue is fail, because the enclosed exception is thrown by some routine in CORE about two steps down the call tree, seen from the implicit or explicit MAIN sub. The dynvar is simply not there at that point in time. Luckily, instances of Exception tend not to be long-lived, so we can get away with capturing the state of a dynvar at object creation. A good place to do so is TWEAK.

sub infix:<///>(\a, \b) is raw {
    my $dyn-name = a.VAR.name;
    my $has-outer-dynvar = CALLER::CALLERS::{$dyn-name}:exists;
    CALLER::{$dyn-name} = $has-outer-dynvar ?? CALLER::CALLERS::{$dyn-name} !! b
}

role Exception::Colored is Exception is export {
    has $.color;
    submethod TWEAK {
        my $*colored-exceptions /// on;
        $!color = $*colored-exceptions ~~ on && $env-color ?? 31 !! 0;
    }
    method RED($str) {
        $*ERR.t ?? ("\e[" ~ $.color ~ 'm' ~ $str ~ "\e[0m") !! $str
    }
}

Now I can use $.RED in the .message of any exception that is Exception::Colored.

Having a look at the full stack was very helpful to figure out why the dynvar wasn’t there in some cases. For such cases I have a context-sensitive binding in my .vimrc.

nmap <F1> :w<CR>:!raku -I ./lib %<CR>
imap <F1> <esc>:w<CR>:!raku --ll-exception -I ./lib %<CR>

In insert mode, F1 will write the file and run Rakudo with the additional --ll-exception parameter. This results in a full stack trace.

foo failed
  at SETTING::src/core.c/Exception.pm6:62 (/usr/local/src/rakudo/install/share/perl6/runtime/CORE.c.setting.moarvm:throw)
  from SETTING::src/core.c/Failure.pm6:56 (/usr/local/src/rakudo/install/share/perl6/runtime/CORE.c.setting.moarvm:throw)
  from SETTING::src/core.c/Failure.pm6:111 (/usr/local/src/rakudo/install/share/perl6/runtime/CORE.c.setting.moarvm:sink)
  from /home/dex/tmp/tmp-2.raku:56 (<ephemeral file>:<unit>)
  from /home/dex/tmp/tmp-2.raku:1 (<ephemeral file>:<unit-outer>)
  from gen/moar/stage2/NQPHLL.nqp:1948 (/usr/local/src/rakudo/install/share/nqp/lib/NQPHLL.moarvm:eval)
  from gen/moar/stage2/NQPHLL.nqp:2153 (/usr/local/src/rakudo/install/share/nqp/lib/NQPHLL.moarvm:evalfiles)
  from gen/moar/stage2/NQPHLL.nqp:2113 (/usr/local/src/rakudo/install/share/nqp/lib/NQPHLL.moarvm:command_eval)
  from gen/moar/Compiler.nqp:60 (/usr/local/src/rakudo/install/share/perl6/lib/Perl6/Compiler.moarvm:command_eval)
  from gen/moar/stage2/NQPHLL.nqp:2038 (/usr/local/src/rakudo/install/share/nqp/lib/NQPHLL.moarvm:command_line)
  from gen/moar/rakudo.nqp:116 (/usr/local/src/rakudo/install/share/perl6/runtime/perl6.moarvm:MAIN)
  from gen/moar/rakudo.nqp:1 (/usr/local/src/rakudo/install/share/perl6/runtime/perl6.moarvm:<mainline>)
  from <unknown>:1 (/usr/local/src/rakudo/install/share/perl6/runtime/perl6.moarvm:<main>)
  from <unknown>:1 (/usr/local/src/rakudo/install/share/perl6/runtime/perl6.moarvm:<entry>)

As you can see, quite a few things are called before your script is executed. Luckily, Rakudo implements Raku in Raku, so we have a chance to see what is going on.

Published by vrurg on 2020-08-21T00:01:00

A little preface with an off-topic first. In the process of writing this post I was struck by the worst sysadmin’s nightmare: loss of servers followed by a bad backup. Until the very last moment I have had well-grounded fears of not finishing the post whatsoever. Luckily, I made a truce with life to get temporary respite. A conclusion? Don’t use bareos with ESXi. Or, probably, just don’t use bareos…

While picking an RFC for my previous advent post I was totally focused on the language-objects section. It took me a few passes to find the right one to cover. But in the meantime I realized that a very important topic was actually missing from the list. “Impossible!” – I said to myself and went on another hunt later. Yet, neither a search for “abstract class” nor one for “role” came up with any result. I was about to give up and conclude that the idea came to life later, when the synopses were written or thereabouts.

But, wait, what interface is mentioned as the topic of an OO-related RFC? Oh, that interface! As the request body states:

Add a mechanism for declaring class interfaces with a further method for declaring that a class implements said interface.

At this point I realized once again that a full 20 years now lie behind us; that the text is from the times when many considered Java the only right OO implementation! And indeed, reading further we find the following statement, likely affected by some popular views of the time:

It’s now a compile time error if an interface file tries to do anything other than pre declare methods.

Reminds you of something, doesn’t it? And then, at the end of the RFC, we find another one:

Java is one language that springs to mind that uses interface polymorphism. Don’t let this put you off – if we must steal something from Java let’s steal something good.

Good? Good?!! Oh, my… Java’s attempt to solve the problems of C++’s multiple inheritance approach by simply denying it altogether is what drove me away from the language from the very beginning. I was fed up with Pascal controlling my writing style as far back as the early 90s!

Luckily, those involved in early Perl6 design must have shared my view of the problem (besides, Java itself has changed a lot since). So, we have roles now. What they have in common with abstract classes and modern interfaces is that a role can define an interface to communicate with a class, and provide implementation of some role-specific behavior too. It can also do a little more than that!

What makes roles different is the way a role is used in the Raku OO model. A class doesn’t implement a role; nor does it inherit from it as it would from an abstract class. Instead it does the role; or, the other word I love to use for this: it consumes a role. Technically this means that roles are mixed into classes. The process can be figuratively described as the compiler taking all methods and attributes contained by the role’s type object and re-planting them onto the class. Something like:

role Foo {
    has $.foo = 42;
    method bar { say "hello!" }
}
class Bar does Foo { }

my $obj = Bar.new;
say $obj.foo; # 42
$obj.bar;     # hello!

How is it different from inheritance? Let’s change the class Bar a little:

class Baz {
    method bar { say "hello from Baz!" }
}
class Bar does Foo is Baz {
    method bar { say "hello from Bar!"; nextsame }
}

Bar.new.bar; # hello from Bar!
             # hello from Baz!

nextsame in this case re-dispatches a method call to the next method of the same name in the inheritance hierarchy. Simply put, it passes control over to the method Baz::bar, as one can see from the output we’ve received. And Foo::bar? It’s not there. When the compiler mixes the role into Bar, it finds that the class already has a method named bar. Thus the one from Foo is ignored. Since nextsame only considers classes in the inheritance hierarchy, Foo::bar is not invoked.

With another trick the difference from interface consumption can also be made clear:

class Bar {
    method bar { say "hello from Bar!" }
}

my $obj = Bar.new;
$obj.bar;  # hello from Bar!
$obj does Foo;
$obj.bar;  # hello!

In this example the role is mixed into an existing object, thanks to the dynamic nature of Raku which makes this possible. When a role is applied this way its content is enforced over the class content, similarly to a virus injecting its genetic material into a cell effectively overriding internal processes. This is why the second call to bar is dispatched to the Foo::bar method and Bar::bar is nowhere to be found on $obj this time.

To have this subject fully covered, let me show you a funny code example. The operator but used in it behaves like does, except that it doesn’t modify its LHS object; instead but creates and returns a new one:

‌‌my $s1 = "not empty means true"; my $s2 = $s1 but role { method Bool { False } }; say $s1 ?? "true" !! "false"; say $s2 ?? "true" !! "false";

This snippet I’m leaving for you to try on your own, because it’s time for my post to move on to another topic: role parameterization.

Consider the example:

role R[Str:D $desc] {
    has Str:D $.description = $desc;
}
class Foo does R["some info"] { }

say Foo.new.description; # some info

Or a more practical one:

role R[::T] {
    has T $.val is rw;
}
class ContInt does R[Int] { }

ContInt.new.val = "oops!"; # "Type check failed..." exception is thrown

The latter example utilizes so-called type capture, where T is a generic type – a concept many of you are likely to know from other languages – which turns into a concrete type only when the role gets consumed and supplied with a parameter, as in the class ContInt declaration.

The final iteration on parametrics I’m going to present today is this more extensive example:

role Vect[::TX] {
    has TX $.x;
    method distance(Vect $v) { ($v.x - $.x).abs }
}
role Vect[::TX, ::TY] {
    has TX $.x;
    has TY $.y;
    method distance(Vect $v) {
        (($v.x - $.x)² + ($v.y - $.y)²).sqrt
    }
}

class Foo1 does Vect[Rat] { }
class Foo2 does Vect[Int, Int] { }

my $foo1 = Foo1.new(:x(10.0));
my $foo2 = Foo2.new(:x(10), :y(5));
say $foo1; # Foo1.new(x => 10.0)
say $foo2; # Foo2.new(x => 10, y => 5)
say $foo2.distance(Foo2.new(:x(11), :y(4))); # 1.4142135623730951

Hopefully, the code explains itself. Most certainly it nicely visualizes the long way made by the language designers since the initial RFC was made.

At the end I’d like to share a few interesting facts about Raku roles and their implementation by Rakudo.

1. As of Raku v6.e, a role can define its own constructor/destructor submethods. They’re not mixed into a class as methods are. Instead, they’re used to build/destroy an object the same way as constructors/destructors of classes do:

use v6.e.PREVIEW; # 6.e is not released yet
role R {
    submethod TWEAK { say "R" }
}
class Foo {
    submethod TWEAK { say "Foo" }
}
class Bar is Foo does R {
    submethod TWEAK { say "Bar" }
}

Bar.new; # Foo
         # R
         # Bar

2. Role body is a subroutine. Try this example:

role R { say "Role" } class Foo { say "Foo" } # Foo

Then modify class Foo so that it consumes R:

class Foo does R { say "Foo" }
# Role
# Foo

The difference in the output is explained by the fact that the role body gets invoked when the role itself is mixed into a class. Try adding one more class consuming R alongside Foo and see how the output changes. To make the distinction between class and role bodies even more clear, make your new class inherit from Foo. Even though is and does look alike, they act very differently.

3. Square brackets in a role declaration enclose a signature. As a matter of fact, it is the signature of the role body subroutine! This makes a few very useful tricks possible:

# Limit role parameters to concrete numeric objects.
role R[Numeric:D ::T $default] {
    has T $.value = $default;
}
class Foo does R[42.13] { }

say Foo.new.value; # 42.13

Or even:

# Same as above but only allow specific values.
role R[Numeric:D ::T $default where * > 10] {
    has T $.value = $default;
}

Moreover, when a few different parametric candidates are declared for a role, choosing the right one is a task of the same kind as choosing the right routine among several multi candidates, and is based on matching signatures to the parameters passed.

4. Rakudo implements a role using four different role types! Let me demonstrate one aspect of this with the following snippet, based on the example for the previous fact:

for Foo.^roles -> \consumed {
    say R === consumed
}

=== is a strict object identity operator. In our case we can consider it as a strict type equivalence operator which tells us if two types are actually exactly the same one.

And as I hope to have this subject covered later in a more extensive article, at this point I would make it a classical abrupt open ending by providing just the output of the above snippet as a hint:

False

Published by coke on 2020-08-20T03:00:00

RFC 28 – Perl Should Stay Perl

Originally submitted by Simon Cozens on August 4, 2000, RFC 28 asked the community to make sure that, whatever updates were made, Perl 6 was still definitely recognizable as Perl. After 20 years of design, proofs-of-concept, implementations, and two released language versions, we’ve ended up with something that is definitely Perlish, even if we’re no longer a Perl.

At the time the RFCs were submitted, the thought was that this language would be the next Perl in line, Perl 6. As time went on before an official language release, Perl 5 development picked up again, and that team & community wanted to continue on its own path. A few months ago, Perl 6 officially changed its name to Raku – not to get away from our Perl legacy, but to free the Perl 5 community to continue on their path as well. It was a difficult path to get to Raku, but we are happy with the language we’re shipping, even if we do miss having the Perl name on the tin.

“Attractive Nuisances”

Let’s dig into some of the specifics Simon mentions in his RFC.

We’ve got a golden opportunity here to turn Perl into whatever on earth we like. Let’s not take it.

This was a fine line that we ended up crossing, even before the rename. Specific design decisions were changed, and we started with a fresh implementation (more than once, if you count Pugs & Parrot & Niecza …). We are Perlish, inspired by Perl, but Raku is definitely different.

Nobody wins if we bend the Perl language out of all recognition, because it won’t be Perl any more.

I argue that eventually, everyone won – we got a new and improved Perl 5 (and soon, a 7), and we got a brand new language in Raku. The path wasn’t clear 20 years ago, but we ended up in a good place.

Some things just don’t need heavy object orientation.

Raku’s OO is everywhere, but it isn’t required. While you can treat everything as an object:

3.sqrt.say;

You can still use the familiar Perlish forms for most features:

say sqrt 3;

Even native scalars (which don’t have the overhead of objects) let you treat them as OO if you want.

my uint32 $x = 32;
say $x;
$x.^name.say;

Even though $x here doesn’t start out as an object, by calling a meta-method on it the compiler cheats on our behalf and outputs Int – the closest class to our native int.

But we avoid going to the extent of Java; for example, we don’t have to define a class with a main method in order to execute a program.

Strong typing does not equal legitimacy.

Similar to the OO approach, we don’t require typing, but allow you to add it gradually. You can start with an untyped scalar variable, and as you further develop your code, add a type to that declared variable and to the parameters of subs & methods. The types can be single classes, subsets, Junctions, or where clauses with complicated logic: you can use as much or as little typing as you want. Raku’s multi routines (subs or methods with the same name but different arguments) give you a way to split up your code based on types, which is then optimized by the compiler.
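A sketch of that gradual-typing spectrum, from untyped scalars through subsets to multi dispatch (all names here are illustrative, not from the original post):

```raku
my $answer = 42;                     # untyped scalar: anything goes
my Int $count = 3;                   # typed scalar: Int values only
subset Positive of Int where * > 0;  # subset type with a where-clause

sub double(Positive $n) { $n * 2 }   # constrained parameter

# multi candidates dispatched on argument types
multi describe(Int $n) { "an integer: $n" }
multi describe(Str $s) { "a string: $s" }

say double(21);      # 42
say describe(7);     # an integer: 7
say describe("hi");  # a string: hi
```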

Just because Perl has a map operator, this doesn’t make it a functional programming language.

I think Raku stayed true to this point – while there are functional elements, the polyglot approach (supporting multiple different paradigms) means that any one of them, including functional, doesn’t take over the language. But you can declare routines pure, allowing the compiler to constant-fold calls to those routines when the arguments are known at compile time.
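For instance, a minimal sketch of the is pure trait (the routine itself is made up):

```raku
# Marking a routine `is pure` tells the compiler it has no side
# effects, so calls with compile-time-known arguments may be
# constant-folded away.
sub square(Int $n) is pure { $n * $n }

say square(12); # 144 – may be folded to a constant at compile time
```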

Perl is really hard for a machine to parse. … It’s meant to be easy for humans to understand.

Development of Raku definitely embraced this thought – “torture the implementors on behalf of the users”. This is one of the reasons it took us a while to get here. But on that journey, we designed and developed new language parsing tools that we not only use to build and run Raku, but also expose to our users, allowing them to implement their own languages and “Slangs” on top of our compiler.
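A minimal sketch of those user-facing parsing tools – a Raku grammar (the grammar and token names are invented for illustration):

```raku
# Grammars use the same parsing machinery that parses Raku itself.
grammar Greeting {
    token TOP   { <hello> \s+ <name> }
    token hello { 'hello' }
    token name  { \w+ }
}

say Greeting.parse('hello world')<name>; # ｢world｣
```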

fin

Finally, now that the Perl team is proposing a version jump to 7, I suspect the Perl community will raise similar concerns to those raised by Simon. Raku and Perl 7 have taken two different paths, but both will be recognizable to the Perl 5 RFC contributors from 20 years ago.

Published by koto on 2020-08-19T01:00:00

RFC 84 by Damian Conway: Replace => (stringifying comma) with => (pair constructor)

Yet another nice goodie from Damian, truly what you might expect from the interlocutor and explicator!

The fat comma operator, =>, was originally used to separate values – with a twist: it behaved just like the , operator did, but modified parsing to stringify its left operand.

It saved you some quoting for strings, so this code for hash initialization:

my %h = (
    'a', 1,
    'b', 2,
);

could be written as:

my %h = (
    a => 1,
    b => 2,
);

Here, bare a and b are parsed correctly, without a need to quote them as strings. However, the usual hash assignment semantics stays the same: pairs of values are processed one by one, and given that => is just a “left-side stringifying” comma operator, interestingly enough the code above is equivalent to this piece:

my %h = ( a => 1 => b => 2 => );

The proposal suggested changing the meaning of this “special” operator to become a constructor of a new data type, Pair.

A Pair is constructed from a key and a value:

my @pairs = a => 42, 1 => 2;
say @pairs[0];           # a => 42
say @pairs[1];           # 1 => 2
say @pairs[1].key.^name; # Int, not a Str

The @pairs list contains just 2 values here, not 4; one key is conveniently stringified for us and the second just uses a bare Int literal as a key.

It turns out, introducing Pair is not only a convenient data type to operate on, but this change offers new opportunities for… subroutines.

Raku has first-class support for signatures – both for the sake of the “first travel class” pun and for the matter of it actually having Signature, Parameter and Capture as first-class objects, which allows for surprising solutions. It is no surprise it supports named parameters with plenty of syntax for them. And the Pair class has blended in quite naturally.
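To illustrate signatures as first-class objects (the sub name and parameters are made up for this sketch):

```raku
sub greet(Str $who, Int :$times = 1) { "hi $who" x $times }

# A routine's signature is an introspectable Signature object.
say &greet.signature;        # (Str $who, Int :$times = 1)
say &greet.signature.arity;  # 1 – one required positional
say &greet.signature.params».name;
```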

If a Pair is passed to a subroutine with a matching named parameter, it works just so; otherwise you have a “full” Pair, and if you want to insist, a bit of syntax can help you here:

sub foo($pos, :$named) {
    say "$pos.gist(), $named.gist()";
}
foo(42);                         # 42, (Any)
try foo(named => 42);            # Oops, no positionals were passed!
foo((named => 42));              # named => 42, (Any)
foo((named => 42), named => 42); # named => 42, 42

As we can see, designing a language is interesting: a change made in one part can have consequences in some other part, which might seem quite unrelated, and you had better hope your choices will work out well when connected together. Thanks to Damian and all the people who worked on Raku design for putting an amazing amount of effort into it!

And last, but not least: what happened to the => train we saw? Well, now it does what you mean if you mean what it does:

my %a = a => 1 => b => 2;
say %a.raku; # {:a(1 => :b(2))}

And yes, this is a key a pointing to a value of a Pair of 1 pointing to a value of a Pair of b pointing to a value of 2 – so at least the direction is nice this time. Good luck and keep your directions!

Published by liztormato on 2020-08-18T03:00:00

Proposed on 7 September 2000, frozen on 20 September 2000, depends on RFC 159: True Polymorphic Objects proposed on 25 August 2000, frozen on 16 September 2000, also by Nathan Wiger and already blogged about earlier.

What is tie anyway?

RFC 200 was about extending the tie functionality as offered by Perl.

This functionality in Perl allows one to inject program logic into the system’s handling of scalars, arrays and hashes, among other things. This is done by assigning the name of a package to a data-structure such as an array (aka tying). That package is then expected to provide a number of subroutines (e.g. FETCH and STORE) that will be called by the system to achieve certain effects on the given data-structure.

As such, it is used by some of Perl’s core modules, such as threads, and many modules on CPAN, such as Tie::File. The tie functionality of Perl still suffers from the problems mentioned in the RFC.

It’s all tied

In Raku, everything is an object, or can be considered to be an object. Everything the system needs to do with an object, is done through its methods. In that sense, you could say that everything in Raku is a tied object. Fortunately, Rakudo (the most advanced implementation of the Raku Programming Language) can recognize when certain methods on an object are in fact the ones supplied by the system, and actually create short-cuts at compile time (e.g. when assigning to a variable that has a standard container: it won’t actually call a STORE method, but uses an internal subroutine to achieve the desired effect).

But apart from that, Rakudo has the capability of identifying hot code paths during execution of a program, and optimize these in real time.

Jonathan Worthington gave two very nice presentations about this process: How does deoptimization help us go faster from 2017, and a Performance Update from 2019.

Because everything in Raku is an object and access occurs through the methods of the classes of these objects, this allows the compiler and the runtime to have a much better grasp of what is actually going on in a program. Which in turn gives better optimization capabilities, even optimizing down to machine language level at some point.

And because everything is “tied” in Raku (looking at it through Perl-filtered glasses), injecting program logic into the system’s handling of arrays and hashes can be as simple as subclassing the system’s class and providing a special version of one of the standard methods used by the system. Suppose you want to see in your program when an element is fetched from an array: one need only add a custom AT-POS method:

class VerboseFetcher is Array {  # subclass core's Array class
    method AT-POS($pos) {        # method for fetching an element
        say "fetching #$pos";    # tell the world
        nextsame                 # provide standard functionality
    }
}

my @a is VerboseFetcher = 1,2,3; # mark as special and initialize
say @a[1];                       # fetching #1␤2

The Raku documentation contains an overview of which methods need to be supplied to emulate an Array and to emulate a Hash . By the way, the whole lemma about accessing data structure elements by index or key is recommended reading for someone wanting to grok those aspects of the internals of Raku.

Nothing is special

In a blog post about RFC 168 about making things less special, it was already mentioned that really nothing is special in Raku, and that (almost) all aspects of the language can be altered inside a lexical scope. So what the above example did to the Array class can be done to any of Raku’s core classes, any other classes installed from the ecosystem, or any you have written yourself.

But it can be overwhelming to have to supply all of the logic needed to fully emulate an array or a hash, especially when you first try to do this. Therefore the ecosystem actually has two modules with roles that help you with that: Array::Agnostic for arrays, and Hash::Agnostic for hashes.

Both modules only require you to implement 5 methods in a class that does these roles to get the full functionality of an array or a hash, completely customized to your liking.
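As a hedged sketch of what those 5 methods might look like with Array::Agnostic (the method names come from that module's documented interface; the class name and hash-backed storage are invented for illustration):

```raku
use Array::Agnostic;

# A hypothetical array backed by a hash instead of native storage.
class Backed does Array::Agnostic {
    has %!store;
    method AT-POS($pos)         { %!store{$pos} }         # fetch an element
    method BIND-POS($pos, \val) { %!store{$pos} := val }  # bind an element
    method EXISTS-POS($pos)     { %!store{$pos}:exists }  # element existence
    method DELETE-POS($pos)     { %!store{$pos}:delete }  # remove an element
    method elems()              { %!store.elems }         # number of elements
}

my @a is Backed = <a b c>;
say @a[1];
```

The role supplies everything else an array needs (iteration, slices, STORE and so on) in terms of these five methods.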

In fact, the flexibility of Raku’s approach towards customizability of the language actually allowed the implementation of Perl’s tie built-in function in Raku. So if you’re porting code from Perl to Raku and the code in question uses tie, you can use this module as a quick intermediate solution.

Has the problem been fixed?

Let’s look at the problems that were mentioned with tie in RFC 200:

It is non-extensible; you are limited to using functions that have been implemented with tie hooks in them already.

Raku is completely extensible and pluggable in (almost) all aspects of its implementation. There is no limitation to which classes one can and one cannot extend.

Any additional functions require mixed calls to tied and OO interfaces, defeating a chief goal: transparency.

All interfaces use methods in Raku, since everything is an object or can be considered as one. Use of classes and methods should be clear to any programmer using Raku.

It is slow. Very slow, in fact.

In Raku, it is all the same speed during execution. And every customization profits from the same optimization features as every other piece of code in Raku, and will, in the end, be optimized down to machine code when possible.

You can’t easily integrate tie and operator overloading.

In Raku, operators are multi-dispatch subroutines, which allows additional candidates for custom classes to be added.
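For example, one might add a candidate for the + operator for a custom class (the class and attribute names here are illustrative):

```raku
class Money { has Rat $.amount }

# An extra multi candidate teaches + about Money objects.
multi sub infix:<+>(Money $a, Money $b) {
    Money.new(amount => $a.amount + $b.amount)
}

say (Money.new(:amount(1.5)) + Money.new(:amount(2.5))).amount; # 4
```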

If defining tied and OO interfaces, you must define duplicate functions or use typeglobs.

Typeglobs don’t exist in Raku. All interfacing in Raku is done by supplying additional methods (or subroutines in case of operators). No duplication of effort is needed, so no such problem.

Some parts of the syntax are, well, kludgey

One may argue that the kludgey syntax of Perl has been replaced by another kludgey syntax in Raku. That is probably in the eye of the beholder. Fact is that the syntax in Raku for injecting program logic is no different from any other subclassing or role mixins one would otherwise do in Raku.

Conclusion

Nothing from RFC 200 was actually implemented in the way it was originally suggested. However, solutions to the problems mentioned have all been implemented in Raku.

Published by gfldex on 2020-08-17T09:11:05

While adding dynvars to Shell::Piping to reduce the risk of finger injury, I made a typo that lizmat kindly corrected. She suggested using the defined-or operator to test whether a given dynamic variable is declared.

($*FOO // '') eq 'yes'

This is not equivalent to testing whether a dynvar was declared down the call tree. For that we need to check CALLERS.

say CALLERS::<$*colored-exceptions>:exists;
dd CALLERS::<$*colored-exceptions>;
# OUTPUT: False
#         Nil

In case the dynvar is declared we get a different result.

sub dyn-receiver {
    say CALLERS::<$*colored-exceptions>:exists;
    dd CALLERS::<$*colored-exceptions>;
}

my $*colored-exceptions;
dyn-receiver();
# OUTPUT: True
#         Any $*colored-exceptions = Any

For a module author that means somebody can sneak an undefined value into a dynvar we use, with a type we don’t expect. Composability is not the same thing as correctness. If we want to deal with this situation properly, we need to check whether the caller declared the dynvar and use a proper default value if they didn’t.

class Switch {
    has $.name;
    method gist { $.name }
    method Str  { die('invalid coercion') }
    method Bool { die('invalid coercion') }
}
constant on  is export := Switch.new: :name<on>;
constant off is export := Switch.new: :name<off>;

sub dyn-receiver {
    my $*colored-exceptions = CALLERS::<$*colored-exceptions>:exists
        ?? CALLERS::<$*colored-exceptions>
        !! off;
}

In this example there are just two possible values but if there are more and they can be undefined we need to be more careful. However, this is quite a bit of typing. Can we use a deboilerplater here?

sub infix:<///>(\a, \b) is raw {
    my $dyn-name = a.VAR.name;
    my $has-outer-dynvar = CALLER::CALLERS::{$dyn-name}:exists;
    CALLER::{$dyn-name} = $has-outer-dynvar
        ?? CALLER::CALLERS::{$dyn-name}
        !! b
}

sub c {
    my $*colored-exceptions /// Int;
    dd $*colored-exceptions;
}
sub d {
    my $*colored-exceptions = Str;
    c();
}

c();
d();
# OUTPUT: Int $*colored-exceptions = Int
#         Str $*colored-exceptions = Str

This operator takes two bound arguments. If we call it with a dynvar, a contains the container that is the dynvar. We can query the name of that container and use it to check whether the dynvar was already declared down the call tree. If so, we use its value and assign it directly into the dynvar declared in c. Otherwise we assign b to the dynvar. In both cases we might return something naughty, so we had better do so raw.

Poking across the stack is risky. This could be done better with proper macros. I am quite sure we can do so after Christmas*.

*) For any value greater than Christmas last year.

Published by liztormato on 2020-08-17T00:01:00

Proposed on 25 August 2000, frozen on 16 September 2000

On polymorphism

RFC159 introduces the concept of true polymorphic object.

Objects that can morph into numbers, strings, booleans and much more on-demand. As such, objects can be freely passed around and manipulated without having to care what they contain (or even that they’re objects).

When one looks at how 42, "foo" and now work in Raku nowadays, one can only conclude that that vision has pretty much been implemented. Because most of the time one doesn’t really care about the fact that 42 is really an Int object, "foo" is really a Str object, and that now represents a new Instant object every time it is called. The only thing one cares about is that they can be used in expressions:

say "foo" ~ "bar"; # foobar say 42 + 666; # 708 say now - INIT now; # 0.0005243

RFC159 lists a number of method names to be used to indicate how an object should behave under certain circumstances, with a fallback provided by the system if the class of the object does not provide that method. In most cases these methods did not make it into Raku, but some of them did with a different name:

  Name in RFC   Name in Raku   When
  STRING        Str            Called in a string context
  NUMBER        Numeric        Called in a numeric context
  BOOLEAN       Bool           Called in a boolean context

And some of them even retained their name:

  Name in RFC   When
  BUILD         Called in object blessing
  STORE         Called in an lvalue = context
  FETCH         Called in an rvalue = context
  DESTROY       Called in object destruction

but with sometimes subtly different semantics from the RFC.

Only a few made it

In the end, only a limited set of special methods was decided on for Raku. All of the other methods in RFC159 have been implemented by polymorphic operators that coerce when needed. For instance the proposed PLUS method has been implemented as an infix + operator that has a “default” candidate that coerces its operands to a number.

So, effectively, if you have an object of class Foo and you want that to act as a number, one only needs to add a Numeric method to that class. An expression such as:

my $foo = Foo.new;
say $foo + 42;

is effectively executing:

say infix:<+>( $foo, 42 );

and the infix:<+> candidate that takes Any objects, does:

return infix:<+>( $foo.Numeric, 42.Numeric );

And if such a class Foo does not provide a Numeric method, then it will throw an exception.
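To illustrate, a sketch along the lines of the text (the class body is made up for this example):

```raku
# Providing a Numeric method is all it takes for + to work,
# because the default Any candidate coerces both operands.
class Foo {
    has $.value = 42;
    method Numeric { $.value }
}

say Foo.new + 8; # 50
```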

The DESTROY method

In Raku, object destruction is non-deterministic. If an object is no longer in use, it will probably get garbage collected. The “probably” part is because Raku does not have a global destruction phase, unlike Perl. So when a program is done, it just exits (although that logic does honour any END blocks).

An object is marked “ready for removal” when it can no longer be “reached”. It then has its DESTROY method called when the garbage collection logic kicks in. Which can be any amount of time after it became unreachable.

If you need deterministic calling of the DESTROY method, you can use a LEAVE phaser. Or if that doesn’t allow you to scratch your itch, you can possibly use the FINALIZER module.
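A small sketch of deterministic cleanup with a LEAVE phaser (the sub and the logged strings are invented for illustration):

```raku
my @events;

sub with-resource {
    # A LEAVE phaser runs when the block is left, however that happens –
    # normal return or exception – so cleanup is deterministic.
    LEAVE @events.push('cleaned up');
    @events.push('used resource');
}

with-resource();
say @events; # [used resource cleaned up]
```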

STORE / FETCH on scalar values

Conceptually, you can think of a container in Raku as an object with STORE and FETCH methods. Whenever you set a value in a container, it conceptually calls the STORE method. And whenever the value inside the container is needed, it conceptually calls the FETCH method. In pseudo-code:

my $foo = 42; # Scalar.new(:name<$foo>).STORE(42)

But what if you want to control access to a scalar value, similar to Perl’s tie ? Well, in Raku you can, with a special type of container class called Proxy . An example of its usage:

sub proxier($value? is copy) {
    return-rw Proxy.new(
        FETCH => method ()     { $value },
        STORE => method ($new) { say "storing"; $value = $new }
    )
}

my $a := proxier(42);
say $a;   # 42
$a = 666; # storing
say $a;   # 666

Subroutines return their result values de-containerized by default. There are basically two ways of making sure the actual container is returned: using return-rw (like in this example), or by marking the subroutine with the is rw trait.

STORE on compound values

Since FETCH only makes sense on scalar values, there is no support for FETCH on compound values, such as hashes and arrays, in Raku. I guess one could consider calling FETCH in such a case to be the Zen slice, but it was decided that that would just return the compound value itself.

The STORE method on compound values however, allows for some interesting functionality. The STORE method is called whenever there is an initialization of the entire compound value. For instance:

@a = 1,2,3;

basically executes:

@a := @a.STORE( (1,2,3) );

But what if you don’t have an initialized @a yet? Then the STORE method is supposed to actually create a new object and initialize it with the given values. The STORE method can tell, because in that case it also receives an INITIALIZE named argument with a True value. So when you write this:

my @b = 1,2,3;

what basically gets executed is:

@b := Array.new.STORE( (1,2,3), :INITIALIZE );

Now, if you realize that:

my @b;

is actually short for:

my @b is Array;

it’s only a small step to realize that you can create your own class with customized array logic, that can replace the standard Array logic with your own. Observe:

class Foo {
    has @!array;
    method STORE(@!array) {
        say "STORED @!array[]";
        self
    }
}

my @b is Foo = 1,2,3; # STORED 1 2 3

However, when you actually start using such an array, you are confronted with some weird results:

say @b[0]; # Foo.new
say @b[1]; # Index out of range. Is: 1, should be in 0..0

Without getting into the reasons for these results, it should be clear that to completely mimic an Array , a lot more is needed. Fortunately, there are ecosystem modules available to help you with that: Array::Agnostic for arrays, and Hash::Agnostic for hashes.

BUILD

The BUILD method also subtly changed its semantics. In Raku, method BUILD will be called as an object method and receive all of the parameters given to .new , after which it is fully responsible for initializing object attributes. This becomes more visible when you use the internal helper module BUILDPLAN . This module shows the actions that will be performed on an object of a class when built with the default .new method:

class Bar {
    has $.score = 42;
}
use BUILDPLAN Bar;
# class Bar BUILDPLAN:
#  0: nqp::getattr(obj,Bar,'$!score') = :$score if possible
#  1: nqp::getattr(obj,Bar,'$!score') = 42 if not set

This is internals speak for:
- assign the value of the optional named argument score to the $!score attribute
- assign the value 42 to the $!score attribute if it was not set already

Now, if we add a BUILD method to the class, the buildplan changes:

class Bar {
    has $.score = 42;
    method BUILD() { }
}
use BUILDPLAN Bar;

# class Bar BUILDPLAN:
#  0: call obj.BUILD
#  1: nqp::getattr(obj,Bar,'$!score') = 42 if not set

Note that there is no automatic attempt to take the value of the named argument score anymore, which means that you need to do a lot of work in your custom BUILD method if you have many named arguments and only one of them needs special handling. That’s why the TWEAK method was added:

class Bar {
    has $.score = 42;
    method TWEAK() { }
}
use BUILDPLAN Bar;

# class Bar BUILDPLAN:
#  0: nqp::getattr(obj,Bar,'$!score') = :$score if possible
#  1: nqp::getattr(obj,Bar,'$!score') = 42 if not set
#  2: call obj.TWEAK

Note that the TWEAK method is called after all of the normal checks and initializations. This is in most cases much more useful.
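As a small illustration of why running after normal initialization is useful, here is a sketch of TWEAK used for validation (the constraint itself is my own example, not from the article):

```raku
class Bar {
    has $.score = 42;

    # TWEAK runs after named arguments and defaults have been
    # processed, so $!score already has its final value here
    submethod TWEAK() {
        die "score must be positive" unless $!score > 0;
    }
}

say Bar.new(score => 10).score;   # 10
say Bar.new.score;                # 42 (the default still applies)
```

Doing the same check in a custom BUILD would force you to re-implement the handling of every named argument by hand.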

Conclusion

Although the idea of true polymorphic objects has been implemented in Raku, it turned out quite different from what was originally envisioned. In hindsight, one can see why it was deemed impractical to try to support an ever increasing list of special methods for all objects. Instead, a choice was made to implement only a few key methods from the proposal, and for the others the approach of automatic coercions was taken.

Published by Vadim Belman on 2020-07-30T00:00:00

I’m not the blogging kind of person and usually don’t post without a good reason. For a long while even a good reason wasn’t enough for me to write something. But things are changing, and this is a subject I should have mentioned earlier.

We’re currently in the process of forming The Raku Steering Council, which is considered a potentially effective governance model for the language and the community. It’s aimed at taking load off the shoulders of Jonathan Worthington, who currently bears the biggest responsibility for the vast majority of problems the community and the language development encounter.

The biggest advantages of the Council as I see them are:

it’s an elected body which is granted legitimacy by the community and thus will have the most trust from it

being a collective authority, it will provide stability and more reasonable decisions than is possible with a single-person governance model

besides, I believe it will bring more structure to the otherwise sometimes rather chaotic way the Raku community makes decisions.

Disclaimer: everything stated above is my personal view of the situation, which is to be summed up as: the damn good thing is happening!

To the point! The Council is not an elite closed club. Anybody can nominate themselves! Just read the election announcement.

BTW, the announcement currently states that Aug 2 is the last date to nominate. This is about to change to Sep 6. Still, don’t procrastinate too much; let the community know about your nomination and yourself!

Published by Vadim Belman on 2020-07-18T06:51:00

I’m publishing the next article from the ARFB series. This time a rather short one, much like a warm up prior to the main workout.

But I’d like to devote this post to another subject. It’s too small for an article yet still worth special note. It was again inspired by one more post from Wenzel P.P. Peppmeyer. Actually, I knew there was going to be a new post from him when I found an error report in the Rakudo repository. And it is the subject of that report which made me write this post.

In the report Wenzel claims that the following code results in incorrect Rakudo behaviour:

class C { };
my \px = C.new;
sub postcircumfix:«< >»(C $c is raw) {
    dd $c;
}
px<ls -l>;

And that either the operator redefinition must work or the error message he gets is less than awesome:

===SORRY!=== Error while compiling /home/dex/projects/raku/lib/raku-shell-piping/px.raku
Missing required term after infix
at /home/dex/projects/raku/lib/raku-shell-piping/px.raku:9
------> px<ls -l>⏏;
    expecting any of:
        prefix
        term

Before I tell you why things are happening as intended here, let me note two problems with the code itself, which I copied over as-is since it doesn’t work anyway. First, the postcircumfix sub must be a multi, and in Wenzel’s post it is done correctly. Second, it must receive two arguments: the first is the object it is applied to, the second is what is enclosed in the angle brackets.

So, why won’t it work as one might expect? In Raku there is a class of syntax constructs which look like operators but are in fact syntactic sugar. There may be different reasons why it is done this way. For example, the assignment operator = is done this way to achieve better performance. < > makes what is enclosed inside it a string or a list of strings. Because of this it belongs to the same category as quotes "", for example. Therefore, it can only be implemented properly as a syntax construct. When we try to redefine it we break the compiler’s parsing, and instead of a postcircumfix it finds a pair of less-than and greater-than operators. Because the latter doesn’t have a right-hand side term, we get the error we see.
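To see the quoting behaviour in isolation (a trivial sketch of my own, not from the post):

```raku
# < > is a quoting construct, not a postcircumfix operator:
# it turns its whitespace-separated contents into a list of strings
my @words = <ls -l>;
say @words.elems;   # 2
say @words[0];      # ls
say @words[1];      # -l
```

This is why the parser treats `px<ls -l>` as quoting rather than as an operator call that could be overridden.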

And you know, it was really useful to make this post, as I realized that closing the ticket was premature and that such compiler behavior is still incorrect, because the attempt to redefine the op should probably not result in bad parsing.

Published by Vadim Belman on 2020-07-13T10:01:00

A new article of the Advanced Raku For Beginners series is published now. With a really surprising subject this time! It is about how we define Raku. Or, in other words: how does one know that one’s code is Raku? Why does a compiler have the right to state that it is actually compiling Raku? And a few side concepts to provide grounds for the main topic.

It may seem to be a bit late. But keeping in mind that the article refers back to a few concepts from previous publications, it’s probably just the right time for it!

Enjoy and don’t forget to correct me whenever necessary!

Published by Timo Paulssen on 2020-07-01T20:09:08

Good, that's the click-baity title out of the way. Sorry for taking such a long time to write again! There really has been a lot going on.

To get back into blogging, I've decided to quickly write about a change I made some time ago already.

This change was for the "instrumented profiler", i.e. the one that will at run-time change all the code of the user's program, in order to measure execution times and count up calls and allocations.

In order to get everything right, the instrumented profiler keeps an entire call graph in memory. If you haven't seen something like it yet, imagine taking stack traces at every point in your program's life, and all these stack traces put together make all the paths in the tree that point at the root.

This means, among other things, that the same function can come up multiple times. With recursion, the same function can in fact come up a few hundred times "in a row". In general, if your call tree can become both deep and wide, you can end up with a whole truckload of nodes in your tree.

Is it a bad thing to have many nodes? Of course, it uses up memory. Only a single path on the tree is ever interesting at any one moment, though. Memory that's not read from or written to is not quite as "expensive". It never has to go into the CPU cache, and is even free to be swapped out to disk and/or compressed. But hold on, is this really actually the case?

It turns out that when you're compiling the Core Setting, which is a code file almost 2½ megabytes big with about 71½ thousand lines, and you're profiling during the parsing process, the tree gets enormous. At the same time, the parsing process slows to a crawl. What on earth is wrong here?

Well, looking at what MoarVM spends most of its time doing while the profiler runs gives you a good hint: It's spending almost all of its time going through the entirety of the tree for garbage collection purposes. Why would it do that, you ask? Well, in order to count allocated objects at every node, you have to match the count with the type you're allocating, and that means you need to hold on to a pointer to the type, and that in turn has to be kept up to date if anything moves (which the GC does to recently-born things) and to make sure types aren't considered unused and thrown out.

That's bad, right? Isn't there anything we can do? Well, we have to know at every node which counter belongs to which type, and we need to give all the types we have to the garbage collector to manage. But nothing forces us to have the types right next to the counter. And that's already the solution to the problem:

Holding on to all types is now the job of a little array kept just once per tree, and next to every counter there's just a little number that tells you where in the array to look.

This increases the cost of recording an allocation, as you'll now have to go to a separate memory location to match types to counters. On the other hand, the "little number" can be much smaller than before, and that saves memory in every node of the tree.

More importantly, the time cost of going through the profiler data is now independent of how big the tree is, since the individual nodes don't have to be looked at at all.

With a task as big as parsing the core setting, which is where almost every type, exception, operator, or sub lives, the difference is a factor of at least a thousand. Well, to be honest I didn't actually calculate the difference, but I'm sure it's somewhere between 100x faster and 10000x faster, and going from "ten microseconds per tree node" to "ten microseconds per tree" isn't a matter of a single factor increase, it's a complexity improvement from O(n) to O(1). As long as you can find a bigger tree, you can come up with a higher improvement factor. Very useful for writing that blog post you've always wanted to put at the center of a heated discussion about dishonest article titles!

Anyway, on testing my patch, esteemed colleague MasterDuke had this to say on IRC:

timotimo: hot damn, what did you do?!?! stage parse only took almost twice as long (i.e., 60s instead of the normal 37s) instead of the 930s last time i did the profile

(psst, don't check what 930 divided by 60 is, or else you'll expose my blog post title for the fraud that it is!)

Well, that's already all I had for this post. Thanks for your attention, stay safe, wear a mask (if by the time you're reading this the covid19 pandemic is still A Thing, or maybe something new has come up), and stay safe!

Published by p6steve on 2020-06-27T11:24:01

It was an emotional moment to see the keynote talk at TPRCiC from Sawyer X announcing that perl 7.00 === 5.32. Elation because of the ability of the hardcore perl community to finally break free of the frustrating perl6 roadblock. Pleasure in seeing how the risky decision to rename perl6 to raku has paid off and hopefully is beginning to defuse the tensions between the two rival communities. And Fear that improvements to perl7 will undermine the reasons for many to try out raku and may cannibalise raku usage. (Kudos to Sawyer for recognising that usage is an important design goal).

Then the left side of my brain kicked in. Raku took 15 years of total commitment of genius linguists to ingest 361 RFCs and then synthesise a new kind of programming language. If perl7 seeks the same level of completeness and perfection as raku, surely that will take the same amount of effort. And I do not see the perl community going for the same level of breaking changes that raku did. (OK maybe they could steal some stuff from raku to speed things up…)

And that brought me to Sadness. To reflect that perl Osborned sometime around 2005. That broke the community in two – let’s say the visionaries and the practical-cats. And it drove a mass emigration to Python. Ancient history.

So now we have two sister languages, and each will find a niche in the programming ecosystem via a process of Darwinism. They both inherit the traits (https://en.wikipedia.org/wiki/Perl#Design) that made perl great in the first place….

The design of Perl can be understood as a response to three broad trends in the computer industry: falling hardware costs, rising labor costs, and improvements in compiler technology. Many earlier computer languages, such as Fortran and C, aimed to make efficient use of expensive computer hardware. In contrast, Perl was designed so that computer programmers could write programs more quickly and easily.

Perl has many features that ease the task of the programmer at the expense of greater CPU and memory requirements. These include automatic memory management; dynamic typing; strings, lists, and hashes; regular expressions; introspection; and an eval() function. Perl follows the theory of “no built-in limits,” an idea similar to the Zero One Infinity rule.

Wall was trained as a linguist, and the design of Perl is very much informed by linguistic principles. Examples include Huffman coding(common constructions should be short), good end-weighting (the important information should come first), and a large collection of language primitives. Perl favours language constructs that are concise and natural for humans to write.

Perl’s syntax reflects the idea that “things that are different should look different.” For example, scalars, arrays, and hashes have different leading sigils. Array indices and hash keys use different kinds of braces. Strings and regular expressions have different standard delimiters. This approach can be contrasted with a language such as Lisp, where the same basic syntax, composed of simple and universal symbolic expressions, is used for all purposes.

Perl does not enforce any particular programming paradigm (procedural, object-oriented, functional, or others) or even require the programmer to choose among them.

But perl7 and raku serve distinct interests & needs:

Thing             | perl7          | raku
compilation       | static parser  | one pass compiler
compile speed     | super fast     | relies on pre-comp
execution         | interpreted    | virtual machine
execution speed   | super fast     | relies on jit
module library    | CPAN           | native CPAN import
closures          | yes            | yes
OO philosophy     | Cor, not module| pervasive OO
inheritance       | Roles + Is     | Roles + Is + multiple
method invocation | ->             | .
type checking     | no             | gradual
sigils            | idiosyncratic  | consistent
references        | manual         | automatic
unicode           | feature guard  | core
signatures        | feature guard  | core
lazy execution    | nope           | core
Junctions         | nope           | core
Rat math          | nope           | core
Sets & Mixes      | nope           | core
Complex math      | nope           | core
Grammars          | nope           | core
mutability        | nope           | core
concurrency       | nope           | core
variable scope    | “notched”      | cleaner
operators         | C-like         | cleaner (e.g. for ->)
switch            | no             | gather/when
regexen           | classic        | cleaner
eval              | yes            | shell
AST macros        | huh?           | …

…and so on

A long list, and perhaps a little harsh on perl since many things can be had from CPAN – but when you use raku in anger, you do see the benefit of having a large core language. Only when I made this table did I truly realise just what a comprehensive language raku is, and that perl will genuinely be the lean and mean option.

perl7

raku

And, lest we forget our strengths:

When I first saw Python code, I thought that using indents to define the scope seemed like a good idea. However, there’s a huge downside. Deep nesting is permitted, but lines can get so wide that they wrap lines in the text editor. Long functions and long conditional actions may make it hard to match the start to the end. And I pity anyone who miscounts spaces and accidentally puts in three spaces instead of four somewhere — this can take hours to debug and track down. [Source: here]

Published by p6steve on 2020-05-07T21:51:52

Chapter 1: The Convenience Seeker

Coming from Python, the Raku object model is recognizable, but brings a tad more structure:

What works for me, as a convenience seeker, is:

the attributes $.x, $.y are automatically provided with setter and getter methods

the constructor new() is automatically provided

the output method e.g. ‘say $p.Str’ is automatically provided

I can simply assign to an attribute with ‘=’

These are the things you want if you are writing in a more procedural or functional style and using class as a means to define a record type.
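The post's original code listing did not survive in this feed, but a minimal class along the lines of the bullets above (the Point name and attributes are my reconstruction) would be:

```raku
# Public attributes with the dot twigil get all the conveniences:
# a .new constructor, accessors, and (with is rw) assignable setters
class Point {
    has $.x is rw;
    has $.y is rw;
    method Str { "Point($!x, $!y)" }
}

my $p = Point.new(x => 1, y => 2);   # constructor provided for free
$p.x = 5;                            # 'is rw' makes = assignment work
say $p.Str;                          # Point(5, 2)
```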

Chapter 2: The Control Freak

Here’s the rub…

When we describe OO, terms like “encapsulation” and “data hiding” often come up. The key idea here is that the state model inside the object – that is, the way it chooses to represent the data it needs in order to implement its behaviours (the methods) – is free to evolve, for example to handle new requirements. The more complex the object, the more liberating this becomes.

However, getters and setters are methods that have an implicit connection with the state. While we might claim we’re achieving data hiding because we’re calling a method, not accessing state directly, my experience is that we quickly end up at a place where outside code is making sequences of setter calls to achieve an operation – which is a form of the feature envy anti-pattern. And if we’re doing that, it’s pretty certain we’ll end up with logic outside of the object that does a mix of getter and setter operations to achieve an operation. Really, these operations should have been exposed as methods with names that describe what is being achieved. This becomes even more important if we’re in a concurrent setting; a well-designed object is often fairly easy to protect at the method boundary.

(source jnthn https://stackoverflow.com/questions/59671027/mixing-private-and-public-attributes-and-accessors-in-raku)

Let’s fix that:



Now, I had to do a bit more lifting, but here’s what I got:

the private attributes $!x, $!y are formally encapsulated

the BUILD submethod does constructor .new() – zero boilerplate needed

it takes a method call [$p.y( 2 )] or the colon variant [$p.y: 3] to affect state

And, in contrast to Chapter 1:

I cannot assign to has attributes using ‘=’

since accessors are explicit I can easily code for constraints and side-effects

it’s a pita to code accessors, though this encourages proper separation of behaviours
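Here too the post's listing is missing from the feed; a sketch matching the bullets above (my reconstruction, not the original code) might look like:

```raku
# Private attributes (! twigil) with explicit accessor methods
class Point {
    has $!x;
    has $!y;

    # BUILD binds the named arguments straight into the
    # private attributes - zero boilerplate needed
    submethod BUILD(:$!x = 0, :$!y = 0) { }

    # explicit accessors: the natural place for constraints
    # and side-effects
    multi method x()          { $!x }
    multi method x(Real $new) { $!x = $new }
    multi method y()          { $!y }
    multi method y(Real $new) { $!y = $new }
}

my $p = Point.new(x => 1, y => 2);
$p.y(2);      # method-call form mutates state
$p.y: 3;      # the colon variant does the same
say $p.y;     # 3 - but $p.y = 4 would now be an error
```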

Chapter 3: Who Got the Colon in the End?

I also discovered Larry’s First Law of Language Redesign: Everyone wants the colon

Apocalypse 1: The Ugly, the Bad, and the Good https://www.perl.com/pub/2001/04/02/wall.html/

I conclude that Larry’s decision was to confer the colon on the method syntax, subtly tilting the balance towards the strict model by preferring $p.y: 3 over $p.y = 2.

Published by p6steve on 2020-04-17T17:36:39

Having hit rock bottom with an ‘I can’t understand my own code sufficiently to extend/maintain it’ moment, I have been on a journey to review the perl5 Physics::Unit design and to use this to cut through my self-made mess of raku Physics::Unit version 0.0.2.

Now I bring the perspective of a couple of years of regular raku coding to bear, so I am hoping that the bastard child of mature perl5 and raku version one will surpass both in the spirit of David Bowie’s “Pretty Things”.

One of the reasons I chose Physics::Unit as a project was that, on the face of it, it seemed to have an aspect that could be approached by raku Grammars – helping me learn them. Here’s a sample of the perl5 version:





Yes – a recursive descent parser written from scratch in perl5 – pay dirt! There are 215 source code lines dedicated to the parse function. 5 more screens like this…

So I took out my newly sharpened raku tools and here’s my entire port:

Instead of ranging over 215 lines, raku has refined this down to a total of 58 lines (incl. the 11 blank ones I kept in for readability) – that’s a space saving of over 70%. Partly removal of parser boilerplate code, partly the raku Grammar constructs and partly an increased focus on the program logic as opposed to the mechanism.

For my coding style, this represents a greater than two-thirds improvement – by getting the whole parser onto a single screen, I find that I can get the whole problem into my brain’s working memory and avoid burning cycles scrolling up and down to pin down bugs.

Attentive students will have noted that the Grammar / code integration provides a very natural paradigm for loading real-world data into an OO system, the UnitAction class starts with a stub object and populates ‘has’ attributes as it goes.

Oh, and the raku code does a whole lot more, such as support for unicode superscripts (up to +/-4), type assignment and type checking, offsets (such as 0 °C = 273.15 K), wider tolerance of user input, and so on. Most importantly, Real values are kept as Rats as much as possible, which helps greatly: for example, when round-tripping 38.5 °C to °F and back, it still equals 38.5 °C!
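The round-trip claim is easy to verify in plain Raku (a small sketch of my own, independent of the module):

```raku
# Decimal literals are Rats, so unit conversions stay exact
my $c    = 38.5;                  # stored as 77/2, not a float
my $f    = $c * 9/5 + 32;         # Celsius to Fahrenheit
my $back = ($f - 32) * 5/9;       # and back again
say $f;                           # 101.3
say $back == 38.5;                # True - no floating-point drift
say $back.^name;                  # Rat
```

In a language using IEEE doubles throughout, the same round trip typically leaves a tiny error that makes the equality test fail.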

One final remark: use Grammar::Tracer – it is a fantastic debugging tool for finding and fixing the subtle bugs that can creep in, and it contributes to quickly getting to the optimum solution.

Published on 2020-02-24T00:00:00

Published by p6steve on 2020-01-20T22:40:01

For anyone wondering where my occasional blog on raku has been for a couple of months – sorry. I have been busy wrestling with, and losing to, the first released version of my Physics::Measure module.

Of course, this is all a bit overshadowed by the name change from perl6 to raku. I was skeptical on this, but did not have a strong opinion either way. So kudos to the folks who thrashed this out and I am looking forward to a naissance. For now, I have decided to keep my nickname ‘p6steve’ – I enjoy the resonance between P6 and P–sics and that is my niche. No offence intended to either camp.

My stated aim (blogs passim) is to create a set of physical units that makes sense for high school education. To me, inspired by the perl5 Physics::Unit module, that means not just core SI units for science class, but also old style units like sea miles, furlongs/fortnight and so on for geography and even history. As I started to roll out v0.0.3 of raku Physics::Unit, I thought it would be worthwhile to track a real-world high school education resource, namely OpenStax CNX. As I came upon this passage, I had to take the firkin challenge on:

While there are numerous types of units that we are all familiar with, there are others that are much more obscure. For example, a firkin is a unit of volume that was once used to measure beer. One firkin equals about 34 liters. To learn more about nonstandard units, use a dictionary or encyclopedia to research different “weights and measures.” Take note of any unusual units, such as a barleycorn, that are not listed in the text. Think about how the unit is defined and state its relationship to SI units.

Disaster – I went back to the code for Physics::Unit and, blow me, could I figure out how to drop in a new Unit, the firkin?? …nope!! Why not? Well, Physics::Unit v0.0.3 was impenetrable even to me, the author. Statistically it has 638 lines of code alongside 380 lines of heredoc data. Practically, while it passes all the tests 100%, it is not a practical, maintainable code base.

How did we get here? Well I plead guilty to being an average perl5 coder who really loves the expressivity that Larry offers … but a newbie to raku. I wanted to take on Physics::Measure to learn raku. Finally, I have started to get raku – but it has taken me a couple of years to get to this point!

My best step now – bin the code. I have dumped my original effort, gone back to the original perl5 Physics::Unit module source and transposed it to raku. The result: 296 lines of tight code alongside the same 380 lines of heredoc – a reduction of 53%! And a new found respect for the design skill of my perl5 forbears.

I am aiming to release as v0.0.7 in April 2020.

Published by Jo Christian Oterhals on 2019-11-24T19:25:11

By the way, you could replace … * with Inf or the unicode infinity symbol ∞ to make it more readable, i.e.

my @a = 1, 1, * + * … ∞;

— — or — —

my @a = 1, 1, * + * … Inf;

Published by Jo Christian Oterhals on 2019-11-24T10:20:11

As I understand this, * + * … * means the following:

First, * + * sums the two previous elements in the list, and … * tells it to do this an infinite number of times; i.e.

1, 1, (1 + 1)

1, 1, 2, (1 + 2)

1, 1, 2, 3, (2 + 3)

1, 1, 2, 3, 5, (3 + 5)

1, 1, 2, 3, 5, 8, (5 + 8), etc.

The three dots (…) mean that the list is lazy, i.e. it does not generate an element before you ask for it. This can be good for large lists that are computationally heavy.
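Putting the pieces together (the variable name is arbitrary):

```raku
# The sequence is lazy: elements are only computed when indexed
my @fib = 1, 1, * + * … ∞;
say @fib[9];       # 55 - forces computation of the first ten elements
say @fib[0..6];    # (1 1 2 3 5 8 13)
```

Even though the sequence is conceptually infinite, only the elements actually requested are ever produced.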

Published by Timo Paulssen on 2019-10-25T23:12:36

Hello everyone! In the last report I said that just a little bit of work on the heap snapshot portion of the UI should result in a useful tool.




Here's my report for the first useful pieces of the Heap Snapshot UI!

Last time you already saw the graphs showing how the number of instances of a given type or frame grow and shrink over the course of multiple snapshots, and how new snapshots can be requested from the UI.

The latter now looks a little bit different:

Each snapshot now has a little button for itself, they are in one line instead of each snapshot having its own line, and the progress bar has been replaced with a percentage and a little "spinner".

Navigating the heap

There are multiple ways to get started navigating the heap snapshot. Everything is reachable from the "Root" object (this is the norm for reachability-based garbage collection schemes). You can just click through from there and see what you can find.

Another way is to look at the Type & Frame Lists, which show every type or frame along with the number of instances that exist in the heap snapshot, and the total size taken up by those objects.

Type & Frame Lists

Clicking on a type, or the name or filename of a frame leads you to a list of all objects of that type, all frames with the given name, or all frames from the given file. They are grouped by size, and each object shows up as a little button with the ID:

Clicking any of these buttons leads you to the Explorer.

Explorer

Here's a screenshot of the explorer to give you an idea of how the parts go together that I explain next:

The explorer is split into two identical panels, which allows you to compare two objects, or to explore in multiple directions from one given object.

There's an "Arrow to the Right" button on the left pane and an "Arrow to the Left" button on the right pane. These buttons make the other pane show the same object that the one pane currently shows.

On the left of each pane there's a "Path" display. Clicking the "Path" button in the explorer will calculate the shortest path to reach the object from the root. This is useful when you've got an object that you would expect to have already been deleted by the garbage collector, but for some reason is still around. The path can give the critical hint to figure out why it's still around. Maybe one phase of the program has ended, but something is still holding on to a cache that was put in as an optimization, and that still has your object in it? That cache in question would be on the path for your object.

The other half of each panel shows information about the object: Displayed at the very top is whether it is an object, a type object, an STable, or a frame.

Below that there is an input field where you can enter any ID belonging to a Collectable (the general term encompassing types, type objects, STables, and frames) to have a look.

The "Kind" field needs to have the number values replaced with human-readable text, but it's not the most interesting thing anyway.

The "Size" of the Collectable is split into two parts. One is the fixed size that every instance of the given type has. The other is any extra data an instance of this type may have attached to it, that's not a Collectable itself. This would be the case for arrays and hashes, as well as buffers and many "internal" objects.

Finally, the "References" field shows how many Collectables are referred to by the Collectable in question (outgoing references) and how many Collectables reference this object in question.

Below that there are two buttons, Path and Network. The former was explained further above, and the latter will get its own little section in this blog post.

Finally, the bottom of the panel is dedicated to a list of all references - outgoing or incoming - grouped by what the reference means, and what type it references.

In this example you see that the frame of the function display from elementary2d.p6 on line 87 references a couple of variables ($_, $tv, &inv), the frame that called this frame (step), an outer frame (MAIN), and a code object. The right pane shows the incoming references. For incoming references, the name of the reference isn't available (yet), but you can see that 7 different objects are holding a reference to this frame.

Network View

The newest part of the heap snapshot UI is the Network View. It allows the user to get a "bird's eye" view of many objects and their relations to each other.

Here's a screenshot of the network view in action:

The network view is split into two panes. The pane on the left lists all types present in the network currently. It allows you to give every type a different symbol, a different color, or optionally make it invisible. In addition, it shows how many of each type are currently in the network display.

The right pane shows the objects, sorted by how far they are from the root (object 0, the one in Layer 0, with the frog icon).

Each object has one three-piece button. On the left of the button is the icon representing the type, in the middle is the object ID for this particular object, and on the right is an icon for the "relation" this object has to the "selected" object:

This view was generated for object 46011 (in layer 4, with a hamburger as the icon). This object gets the little "map marker pin" icon to show that it's the "center" of the network. In layers for distances 3, 2, and 1 there is one object each with a little icon showing two map marker pins connected with a squiggly line. This means that the object is part of the shortest path to the root. The third kind of icon is an arrow pointing from the left into a square that's on the right. Those are objects that refer to the selected object.

There is also an icon that's the same but the arrow goes outwards from the square instead of inwards. Those are objects that are referenced by the selected object. However, there is currently no button to have every object referenced by the selected object put into the network view. This is one of the next steps I'll be working on.

Customizing the colors and visibility of different types can give you a view like this:

And here's a view with more objects in it:

Interesting observations from this image:

Most objects referencing the central object (the stroopwafel in layer 8) are actually farther from the root (layers for distance 9 through 15).

Not every layer has objects in it; in this case layers for distances 12 and 14 are empty.

Next Steps

You have no doubt noticed that the buttons for collectables are very different between the network view and the type/frame lists and the explorer. The reason for that is that I only just started with the network view and wanted to display more info for each collectable (namely the icons to the left and right) and wanted them to look nicer. In the explorer there are sometimes thousands of objects in the reference list, and having big buttons like in the network view could be difficult to work with. There'll probably have to be a solution for that, or maybe it'll just work out fine in real-world use cases.

On the other hand, I want the colors and icons for types to be available everywhere, so that it's easier to spot common patterns across different views and to mark things you're interested in so they stand out in lists of many objects. I was also thinking of a "bookmark this object" feature for similar purposes.

Before most of that, the network viewer will have to become "navigable", i.e. clicking on an object should put it in the center, grab the path to the root, grab incoming references, etc.

There also need to be ways to handle references you're not (or no longer) interested in, especially when you come across an object that has thousands of them.

But even before those improvements land, all of this should already be very useful!

Here's the section about the heap snapshot profiler from the original grant proposal:

A web frontend for the heap snapshot analyzer:

- Refactor how the analyzer gives data to the shell. (Result sets now have information about what each column means, for example "a number of bytes".)
- Draft a concept for how the user will interact with the analyzer. (This refers mainly to how the navigator works.)
- UI for Per-Snapshot Summary: total heap size, total object count, etc. (This is the "front page" with the graphs.)
- UI for Top Lists for objects sorted by count or memory usage. (This is the "Type and Frame Lists".)
- UI for Details of individual objects: size, pointers to other objects. (This is part of the explorer.)
- UI for the shortest path that keeps an object alive. (This is also part of the explorer.)
- UI for Across-Snapshot comparisons: object counts over time, etc. (I think I will allow the left and right pane of the explorer to refer to different snapshots, which will allow comparing similar objects. Additionally, the user can open as many windows or tabs with the heap snapshot UI as they like and switch freely between them in their regular web browser.)
- UI for Heap Exploration: find all objects of a specific type, etc. (This is reachable from the "Type and Frame Lists".)
- Functionality for finding paths from one object to all roots that reach it. (The network view will allow getting the path to every object with a reference to the given object, which will fulfill this purpose.)
- UI for whole parts of the network, like multiple paths to a single object. (This is the network view.)
- If an instrumented profile is also loaded (this is currently not supported by moarperf):
  - Links from types to routines allocating the type
  - Links from frames (closures, for example) to the call graph


Looking at the list, it seems like the majority of intended features are already available or will be very soon!

Easier Installation

Until now, the user had to install Node.js and npm along with a whole load of JavaScript libraries in order to compile and bundle the JavaScript code that powers the frontend of moarperf.

Fortunately, it was relatively easy to get Travis CI to do that work automatically and upload a package with the finished JavaScript code and the backend code to GitHub.

You can now visit the releases page on GitHub to grab a tarball with all the files you need! Just install all backend dependencies with `zef install --deps-only .` and run `service.p6`!
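For example, a download-and-run session might look like the sketch below. Note that the release tag and tarball name here are placeholders, not real artifact names; check the actual releases page for the correct URL.

```shell
# Fetch a release tarball from the project's GitHub releases page.
# PLACEHOLDER URL: substitute the real tag and file name from the
# releases page before running this.
wget https://github.com/timo/moarperf/releases/download/TAG/moarperf.tar.gz
tar -xzf moarperf.tar.gz
cd moarperf

# Install the Raku backend dependencies declared for the distribution
# in the current directory, then start the web service.
zef install --deps-only .
raku service.p6
```

The exact invocation of `service.p6` may differ on your system (for example, the script may be directly executable), but the `zef install --deps-only .` step is the one that replaces the old Node.js/npm build dance.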

And with that I'm already done for this report!

It looks like the heap snapshot portion of the grant is quite a bit smaller tha