Although Perl is no longer finding the excellent numbers itself, it’s still a big part of my process. Perl ran out of steam for the heavy computation a long time ago, but it still manages everything around it.

I could do big numbers through the Math::GMP module, but the time to convert between Perl data structures and GMP data structures kills performance. I don’t need Perl for that part, though, so I switched to a pure C program for the number crunching. That makes me appreciate Perl a little more, since really common things take a lot of typing in C.



But I’m also at the point where a single run of the program takes an unreasonable amount of time. By my back-of-the-envelope calculations, with the fastest hardware available to me, a single processor would take 4 years of continuous running to exhaust the 30-digit space, which I’m working on now. As I noted in the “Benchmarking” chapter, I came up with a much better algorithm, and while I might have more micro-optimizations, I’ll need something much different to get a reasonable runtime for the 32-digit space (40 years of processor time). Even so, this is several orders of magnitude faster than I thought I’d ever achieve when I started in May.

That’s not a technical problem; with PaaS offerings, it’s a money problem. I could spin up two Amazon EC2 c4.8xlarge instances (72 virtual processors between them) and finish in a month. I’d spend a bit under $3,000 to get through the 30-digit space, or $30,000 for the 32-digit space. I’m not enjoying this problem that much.

Fortunately, ServerCentral donated a phat testing box for me to use when they aren’t playing with it. I have 80 very fast processors to play with as long as I stay out of their way. These are actual hardware processors, too, so they are much faster than anything EC2 offers: I don’t pay the virtualization tax. Even if that overhead is only 2%, it adds up over a month of continuous computation. But managing all of that in C would be a nightmare. Perl to the rescue!

First, I need to break up the job to run on multiple processors. I could do all sorts of fancy things to distribute work and give workers new jobs, but I’d spend quite a bit of time making that work. I want to get this done in a month, including the month of run time.

To start, I need a way to figure out how many processors I have. That’s not a big deal, but it is slightly dependent on the operating system. I tried a few CPAN modules, but most of them were much too complicated. I wrote my own CpuCount module and used the $^O variable to switch on the operating system, as I write about in Chapter 11: “Configuring Perl Programs”:

use v5.22;
use feature qw(signatures);
no warnings qw(experimental::signatures);

package CpuCount;
use Exporter qw(import);
our @EXPORT_OK = qw(get_cpu_count);

__PACKAGE__->run unless caller;

sub run { say get_cpu_count() }

sub get_cpu_count () {
	state $dispatch = {
		freebsd  => \&_freebsd,
		_default => \&_posix,
		};

	my $key = exists $dispatch->{ $^O } ? $^O : '_default';
	$dispatch->{ $key }->();
	}

sub _freebsd () { _get_conf( 'NPROCESSORS_ONLN' ) }
sub _posix   () { _get_conf( '_NPROCESSORS_ONLN' ) }

sub _get_conf ( $key ) {
	state $command = '/usr/bin/getconf';
	chomp( my $number = `$command $key 2>/dev/null` );
	$number;
	}

This is also a modulino (Chapter 18: “Modules as Programs”) so I can run the module to get an immediate count of the CPUs:

% perl lib/CpuCount.pm
80

When I know that, I can parcel out the work. That’s not a big deal. I decide where I want to start and where I want to end, then partition that range across the number of processors I want to use. Here’s the meat of my launch program:

for( my $a = $opts{'start'}; $a <= $opts{'end'}; $a += $interval ) {
	state $i = 0;
	$i++;
	my $end = $a + $interval - 1;
	$end = $opts{'end'} if $end > $opts{'end'};
	my @args = ( $opts{'length'}, $a, $end );

	my $output_file = make_filename(
		$opts{output_dir}, $opts{length}, $a, $end );
	push @output_files, $output_file;

	if( my $child_pid = fork ) { # parent
		say "*** Parent process $$ forked child $child_pid, saving in $output_file";
		$children{$child_pid} = {
			file  => $output_file,
			start => $a,
			end   => $end,
			done  => 0,
			};
		}
	else { # child
		open STDOUT, '>:utf8', $output_file
			or die "Could not open $output_file: $!";
		exec $opts{command}, @args;
		die "Could not exec! $!";
		}
	}
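That fragment uses $interval without showing where it comes from. Here’s a minimal sketch of how it can be derived from the range and the CPU count; compute_interval is my name for illustration, not code from the launch program:

```perl
use v5.22;
use POSIX qw(ceil);

# Hypothetical helper: chop a numeric range into roughly equal chunks,
# one per processor. The last chunk may come up short; the launch loop
# already clamps $end to $opts{'end'}.
sub compute_interval {
	my( $start, $end, $cpus ) = @_;
	ceil( ( $end - $start + 1 ) / $cpus );
	}

my $interval = compute_interval( 1, 100, 8 );
say $interval;  # 13: eight chunks of 13 cover 1..100
```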

The interesting bits that come out of Mastering Perl are in the child portion. I re-open STDOUT on the file I want to collect the output in, then `exec` the command I want to run. I spent a couple of hours trying to do something with shell redirection, then pipes, and all manner of other things to get my launch program to monitor the children (which was the genesis of that %children hash). But that didn’t work out.

So I rethought it. I don’t need to connect all these bits. The C program will print to normal files and Perl will post-process those files.

The C program prints an excellent number as soon as it finds one, and prints a progress message every 15 minutes or whenever it gets a USR1 or TERM signal. I grab those files by committing them to the git repo from a cron job and rsyncing them from another host. My program might go down at any time.

And here are the fun Perl parts.

I’ve already started the processes and they are mostly doing their thing. But then I need to get out of someone’s way without coordination. Instead of making anyone wait for me, I tell anyone who needs the processors to kill my stuff. Since each program responds to the INT signal by printing a final progress message to its output file, I know where it left off.
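The %children hash from the launch program makes passing along that interruption a one-liner: kill 'INT', keys %children. Here’s a self-contained sketch with a sleeping stand-in for the real C worker:

```perl
use v5.22;

my %children;

# Start a stand-in "worker" that just sleeps; the real children are
# the C programs started with exec in the launch loop.
if( my $pid = fork ) { $children{$pid} = { done => 0 } }
else                 { sleep 60; exit }

# Forward INT to every child, then reap them. Each real worker prints
# a final progress line to its output file when the INT arrives.
kill 'INT', keys %children;

my $reaped = 0;
while( ( my $pid = wait ) > 0 ) {
	$children{$pid}{done} = 1;
	$reaped++;
	}
say "reaped $reaped";  # reaped 1
```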

I wrote a restart_run program to read all of the log files. Each output file has its intended range:

*** [46345] start a is 58627205437732
*** [46345] end a is 62000000000000

And it has progress information:

+++ [46345] Checked [58627205437732] to [58668609925922]

If my run is interrupted, I can read through the output files to discover the start point and the last progress, then pick up there with a new run that appends to the same file. Using Perl as simple glue has been easier to manage, and quicker to produce results, than architecting something fancier (and more fragile). I’ve designed from the start to handle failure and interruption by separating the concerns.
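The parsing side of restart_run can be sketched in a few regexes built from the log lines shown above; parse_log is an illustrative helper, not the program’s actual code:

```perl
use v5.22;
use feature qw(signatures);
no warnings qw(experimental::signatures);

# Given one output file's text, recover the intended range and the
# last reported progress. The line formats match the samples above.
sub parse_log ( $text ) {
	my %info;
	$info{start} = $1 if $text =~ /\*{3} \[\d+\] start a is (\d+)/;
	$info{end}   = $1 if $text =~ /\*{3} \[\d+\] end a is (\d+)/;
	# Keep only the last progress line; that's where to resume.
	while( $text =~ /\+{3} \[\d+\] Checked \[\d+\] to \[(\d+)\]/g ) {
		$info{last_checked} = $1;
		}
	\%info;
	}

my $log = <<'LOG';
*** [46345] start a is 58627205437732
*** [46345] end a is 62000000000000
+++ [46345] Checked [58627205437732] to [58668609925922]
LOG

my $info = parse_log( $log );
say "resume at @{[ $info->{last_checked} + 1 ]}, stop at $info->{end}";
```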

But that’s just part of the fun. I also figured that I didn’t want to lose any excellent numbers, so I wanted to stash them in more places. With Net::Twitter, I can post them to the @excellent_nums Twitter account. When I wrote that script, I never thought I’d pass 200 excellent numbers; 200 is the maximum number of Twitter statuses I can fetch in one go, so today I had to modify the program to page through the results. That’s a pretty good problem to have.
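The paging itself is simple once you track the oldest status you’ve seen. Here’s a sketch of the loop with a closure standing in for the real call, which would look something like $nt->user_timeline({ count => 200, max_id => $max_id }); the fake fetcher exists only so the logic runs offline:

```perl
use v5.22;
use feature qw(signatures);
no warnings qw(experimental::signatures);

# Page through statuses 200 at a time, asking each round only for
# statuses older than the last one we saw (max_id in the Twitter API).
sub all_statuses ( $fetch ) {
	my( @statuses, $max_id );
	while( 1 ) {
		my $page = $fetch->( $max_id );
		last unless @$page;
		push @statuses, @$page;
		$max_id = $page->[-1]{id} - 1;  # next page: everything older
		}
	\@statuses;
	}

# Fake timeline of 450 statuses with descending ids, newest first.
my @fake = map { { id => $_ } } reverse 1 .. 450;
my $fetch = sub ( $max_id ) {
	my @page = grep { ! defined $max_id or $_->{id} <= $max_id } @fake;
	[ @page[ 0 .. ( $#page < 199 ? $#page : 199 ) ] ];
	};

my $statuses = all_statuses( $fetch );
say scalar @$statuses;  # 450
```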

There are two parts to my get_tweets program: fetch all the status updates to see what I’ve already tweeted, then tweet the numbers I’ve discovered that aren’t on Twitter yet. That second part goes through all of the output files, finds all the numbers, and checks them against the existing list. At the same time, I fix up the local lists and README files.
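The check itself is just a hash difference. A sketch, with made-up lists (the values happen to be real excellent numbers, but which ones are “already tweeted” here is invented):

```perl
use v5.22;

# Numbers found in the output files versus what's already on the account.
my @discovered = qw( 48 3468 416768 );
my %tweeted    = map { $_ => 1 } qw( 48 3468 );

my @to_tweet = grep { ! exists $tweeted{$_} } @discovered;
say "@to_tweet";  # 416768
```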