I just saw the release of Aristotle's Try::Tiny::Tiny to CPAN, which aims to speed up Try::Tiny. That led me to wonder how fast the various Try* modules were. I cannibalized the benchmark code from Try::Catch, and off I went.

Updates

- Include eval and Try::Tiny master (39b2ba3b0), at Aristotle's request.
- Fix a bug; the correct versions of Try::Tiny are now always loaded.

The candidates are:

Where

  PP  => Pure Perl
  XS  => XS routine
  PSP => Perl Syntax Plugin

Try::Tiny::Tiny doesn't replace Try::Tiny; it alters it, so it's not possible to test the two at the same time. The test code uses an environment variable to switch between the two. Another environment variable switches between testing try with no catch and try with catch:

use strict;
use warnings;
use Storable 'store';

use if $ENV{TRY_TINY_MASTER}, lib => 'Try-Tiny-339b2ba3b0/lib';
use if $ENV{TRY_TINY_TINY},   'Try::Tiny::Tiny';

use Dumbbench;
use Benchmark::Dumb qw(:all);

our $die_already = $ENV{DIE_ALREADY};

my $TT_label = $ENV{TRY_TINY_TINY} ? 'Try::Tiny::Tiny' : 'Try::Tiny';
$TT_label .= '::Master' if $ENV{TRY_TINY_MASTER};

my $res = timethese(
    0,
    {
        'TryCatch'             => \&TEST::TryCatch::test,
        'Try::Catch'           => \&TEST::Try::Catch::test,
        $TT_label              => \&TEST::Try::Tiny::test,
        'Syntax::Keyword::Try' => \&TEST::Syntax::Keyword::Try::test,
        'Syntax::Feature::Try' => \&TEST::Syntax::Feature::Try::test,
        'Eval'                 => \&TEST::Eval::test,
    },
    'none'
);

store $res, $ARGV[0] // die( "must specify output file" );

{
    package TEST::TryCatch;
    use TryCatch;
    sub test {
        try { die if $die_already; }
        catch ( $e ) { };
    }
}

{
    package TEST::Try::Catch;
    use Try::Catch;
    sub test {
        try { die if $die_already; }
        catch { if ( $_ eq "n" ) { } };
    }
}

{
    package TEST::Try::Tiny;
    use Try::Tiny;
    sub test {
        try { die if $die_already; }
        catch { if ( $_ eq "n" ) { } };
    }
}

{
    package TEST::Syntax::Keyword::Try;
    use Syntax::Keyword::Try 'try';
    sub test {
        try { die if $die_already; }
        catch { if ( $@ eq "n" ) { } };
    }
}

{
    package TEST::Syntax::Feature::Try;
    use syntax 'try';
    sub test {
        try { die if $die_already; }
        catch { if ( $@ eq "n" ) { } };
    }
}

{
    package TEST::Eval;
    sub test {
        eval { die if $die_already; };
        if ( $@ ) {
            if ( $@ eq "n" ) { }
        }
    }
}

The results of the separate runs are merged thanks to the magic of Benchmark::Dumb:

use strict;
use warnings;
use Storable 'retrieve';
use Regexp::Common;
use Benchmark::Dumb qw(:all);
use Term::Table;

die( "must specify input files" ) unless @ARGV;

my %merge;
push @{ $merge{ $_->name } }, $_
  for map { values %{ retrieve $_ } } @ARGV;

my %results;

print "Key:\n\n";
for my $results ( values %merge ) {
    my @results = @$results;
    my $result  = shift @results;

    my @sections = map { /^([[:upper:]])/g; $1 } split( '::', $result->name );
    my $name = join '', @sections;
    printf " %4s => %s\n", $name, $result->name;

    $result = $result->timesum( $_ ) foreach @results;
    $results{$name} = $result;
}

my $rows   = cmpthese( \%results, undef, 'none' );
my $header = shift @$rows;

for my $row ( @$rows ) {
    for ( @$row ) {
        s/\+\-[\d.]*//g;
        s<($RE{num}{real})/s><sprintf( "%8d/s", $1)>ge;
        s<($RE{num}{real})%><sprintf( "%4d%%", $1)>ge;
        s/--//;
    }
}

my $table = Term::Table->new(
    header => $header,
    rows   => $rows,
);
print "$_\n" for $table->render;

And one script to bind them all:

#!/bin/bash

for da in 0 1 ; do
    export DIE_ALREADY=$da

    for ttm in 0 1 ; do
        export TRY_TINY_MASTER=$ttm

        for ttt in 0 1 ; do
            export TRY_TINY_TINY=$ttt
            perl all2.pl ttt_$ttt-ttm_$ttm-da_$da.store > /dev/null
        done
    done

    perl -Ilocal/lib/perl5 merge.pl \
        ttt_0-ttm_0-da_$da.store \
        ttt_1-ttm_0-da_$da.store \
        ttt_0-ttm_1-da_$da.store \
        ttt_1-ttm_1-da_$da.store
done

Dumbbench provides individual errors. In this instance they are smaller than the differences between the results, so I've removed them to simplify the comparison tables. All tests were run using Perl 5.22.

Key:

  E    => Eval
  T    => TryCatch
  TC   => Try::Catch
  TT   => Try::Tiny
  SFT  => Syntax::Feature::Try
  SKT  => Syntax::Keyword::Try
  TTM  => Try::Tiny (master)
  TTT  => Try::Tiny::Tiny
  TTTM => Try::Tiny::Tiny with Try::Tiny (master)

First, try without a catch:

+-----+---------+-----+-----+-----+-----+----+----+----+----+----+
|     |Rate     | SFT | TC  | TT  | TTM |TTT |TTTM| T  |SKT | E  |
+-----+---------+-----+-----+-----+-----+----+----+----+----+----+
| SFT |  44666/s|     | -56%| -60%| -62%|-78%|-79%|-86%|-91%|-97%|
| TC  | 101475/s| 127%|     | -10%| -14%|-51%|-53%|-70%|-81%|-95%|
| TT  | 113761/s| 154%|  12%|     |  -4%|-46%|-48%|-66%|-78%|-94%|
| TTM | 118890/s| 166%|  17%|   4%|     |-43%|-46%|-64%|-77%|-94%|
| TTT | 210960/s| 372%| 107%|  85%|  77%|    | -4%|-37%|-60%|-90%|
| TTTM| 220330/s| 393%| 117%|  93%|  85%|  4%|    |-34%|-59%|-89%|
| T   | 337780/s| 656%| 232%| 196%| 184%| 60%| 53%|    |-37%|-84%|
| SKT | 538450/s|1105%| 430%| 373%| 352%|155%|144%| 59%|    |-75%|
| E   |2176700/s|4773%|2045%|1813%|1730%|931%|887%|544%|304%|    |
+-----+---------+-----+-----+-----+-----+----+----+----+----+----+

Now, try with catch:

+-----+---------+----+-----+-----+----+----+----+----+----+----+
|     |     Rate|SFT | TC  | TTM | TT | T  |TTT |TTTM|SKT | E  |
+-----+---------+----+-----+-----+----+----+----+----+----+----+
| SFT |  19747/s|    | -55%| -75%|-76%|-83%|-83%|-84%|-87%|-90%|
| TC  |  44001/s|122%|     | -44%|-46%|-62%|-63%|-65%|-71%|-78%|
| TTM |  78860/s|299%|  79%|     | -4%|-32%|-33%|-38%|-49%|-61%|
| TT  |  82734/s|318%|  88%|   4%|    |-28%|-30%|-35%|-46%|-60%|
| T   | 115970/s|487%| 163%|  47%| 40%|    | -2%|-10%|-25%|-43%|
| TTT | 118930/s|502%| 170%|  50%| 43%|  2%|    | -7%|-23%|-42%|
| TTTM| 129150/s|554%| 193%|  63%| 56%| 11%|  8%|    |-16%|-37%|
| SKT | 154550/s|682%| 251%|  95%| 86%| 33%| 29%| 19%|    |-25%|
| E   | 206810/s|947%| 370%| 162%|149%| 78%| 73%| 60%| 33%|    |
+-----+---------+----+-----+-----+----+----+----+----+----+----+

TT vs. TTM: these measurements swap order across repeated runs, indicating they are the same within measurement error.

TTT vs. TTTM: TTT is always slower than TTTM.

So,