ENOSUCHBLOG

Programming, philosophy, pedaling.

Apr 1, 2018

Tags: kbsecret, devblog, programming, ruby

This is a short writeup of how I got SimpleCov coverage reports working across multiple Ruby processes, specifically ones created through Kernel#fork.

Background

As part of KBSecret 1.3 (soon to be released!), I’ve significantly refactored the way in which KBSecret executes commands (e.g., kbsecret list and kbsecret new) as part of a larger effort to simplify the codebase and improve performance.

KBSecret now executes commands “in-process,” meaning that it does not exec or otherwise spawn a fresh Ruby interpreter to handle the command. This has two important consequences:

It’s fast, since commands don’t have to require most of the heavy libraries (keybase-unofficial and KBSecret itself).

It makes coverage statistics on commands possible, since commands run as part of unit tests are visible to SimpleCov’s coverage trackers.

However, KBSecret commands still behave as if they’re in complete control of the process — they call exit and abort on error conditions, fiddle with I/O, and do all sorts of other things. This makes testing difficult, especially when the tests are of error conditions — calling exit in the command takes the entire test harness down with it.
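To see why a bare exit is so disruptive, recall that Kernel#exit doesn’t terminate the process immediately: it raises SystemExit, which unwinds through every stack frame (including the test harness’s) unless something rescues it. A tiny standalone demonstration, not KBSecret code:

```ruby
# Kernel#exit raises SystemExit rather than halting on the spot, so an
# unguarded call inside a command under test unwinds straight through
# the harness and takes the whole process down.
status = nil
begin
  exit 1
rescue SystemExit => e
  status = e.status
end

puts "caught SystemExit with status #{status}"
```

Rescuing SystemExit in every test would work, but it’s noisy and does nothing about commands that fiddle with global I/O state; isolating each command in its own process handles both problems at once.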

Testing commands with fork and pipes

fork is the conceptually simple solution to the problem of testing programs that terminate or otherwise modify the process state. Ruby even provides a nice Kernel#fork method that takes a block:

# BAD! This will take down the test harness if the command decides to exit.
KBSecret::CLI::Command.run! cmd, *args

# GOOD! The command's termination has no (direct) impact on the test harness.
fork do
  KBSecret::CLI::Command.run! cmd, *args
end

# We want to make sure our forked process finishes before we test its state.
Process.wait

However, fork comes with its own challenges — now that we’re in a separate (child) process, we no longer have direct access to the child’s standard I/O descriptors. Since commands communicate with the user through stdin , stdout , and stderr , we’ll need to introduce a pipe for each:

def kbsecret(cmd, *args, input: "")
  pipes = {
    stdin:  IO.pipe,
    stdout: IO.pipe,
    stderr: IO.pipe,
  }

  # Send our input into the write-end of our stdin pipe, for the child to read.
  pipes[:stdin][1].puts input

  fork do
    # Child: close those pipe ends we don't need.
    pipes[:stdin][1].close
    pipes[:stdout][0].close
    pipes[:stderr][0].close

    # Reassign the child's global standard I/O handlers to point to our pipes.
    $stdin  = pipes[:stdin][0]
    $stdout = pipes[:stdout][1]
    $stderr = pipes[:stderr][1]

    # ...and run the command.
    KBSecret::CLI::Command.run! cmd, *args
  end

  # Parent: close those pipe ends we don't need.
  pipes[:stdin][0].close
  pipes[:stdin][1].close
  pipes[:stdout][1].close
  pipes[:stderr][1].close

  # Wait for our child to finish.
  Process.wait

  # Finally, return the contents of the child's stdout and stderr streams for testing.
  [pipes[:stdout][0].read, pipes[:stderr][0].read]
end

This works as expected:

>> # a command that runs normally
>> kbsecret "version"
=> ["kbsecret version 1.3.0.pre.3.\n", ""]

>> # a command that terminates via `exit` due to a bad flag
>> kbsecret "list", "-z"
=> ["", "\e[31mFatal\e[0m: Unknown option `-z'.\n"]

Introducing code coverage to the forked processes

So far, we have commands running in their own processes for the purposes of resiliency/testing failure conditions. That’s cool, but what we ultimately want is coverage statistics from those child processes. How do we get there?

Well, because we’re using fork, our child processes share the same library context as their parents. That means we get a copy of anything required or loaded pre-fork, including SimpleCov’s state.
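As a quick sanity check, here’s a standalone sketch (not KBSecret code) showing that a forked child sees a copy of whatever the parent set up before the fork — which is exactly why SimpleCov’s pre-fork configuration survives into the child:

```ruby
require "securerandom"

# State created in the parent before the fork...
token = SecureRandom.uuid

reader, writer = IO.pipe
fork do
  reader.close
  # ...is inherited by the child, just like SimpleCov's loaded state.
  writer.puts token
  writer.close
  exit! # exit! skips at_exit handlers, which we don't want running twice
end
writer.close
Process.wait

inherited = reader.read.chomp == token
puts inherited
```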

To take advantage of this, we need to modify our coverage preamble slightly, from something like this:

if ENV["COVERAGE"]
  require "simplecov"
  SimpleCov.start
end

to this:

if ENV["COVERAGE"]
  require "simplecov"

  # Only necessary if your tests *might* take longer than the default merge
  # timeout, which is 10 minutes (600s).
  SimpleCov.merge_timeout(1200)

  # Store our original (pre-fork) pid, so that we only call `format!`
  # in our exit handler if we're in the original parent.
  pid = Process.pid
  SimpleCov.at_exit do
    SimpleCov.result.format! if Process.pid == pid
  end

  # Start SimpleCov as usual.
  SimpleCov.start
end

We also need to add a tiny bit of code to our fork block:

fork do
  if ENV["COVERAGE"]
    # Give our new forked process a unique command name, to prevent problems
    # when merging coverage results.
    SimpleCov.command_name SecureRandom.uuid
    SimpleCov.start
  end

  # Same as the fork-block code above...
end

And ta-da, multi-process coverage reports:

Each UUID above is a separate process.

command/new.rb , command/list.rb , and command/rm.rb are all tested under separate processes.

Afternote: Uploading to Codecov

This technique works great locally, but not so great on remote services like Codecov. To get properly merged multi-process coverage results on Codecov, you’ll need to do some additional post-processing.

Here’s an example rake task:

desc "Upload coverage to codecov"
task :codecov do
  require "simplecov"
  require "codecov"

  formatter = SimpleCov::Formatter::Codecov.new
  formatter.format(SimpleCov::ResultMerger.merged_result)
end

This handles uploading to Codecov, so there’s no need to require "codecov" in your helper.rb or equivalent file.

Thus, the complete workflow:

# Run unit tests with code coverage enabled.
$ COVERAGE=1 bundle exec rake test

# Stitch the previous results together and send the merged result to Codecov.
$ bundle exec rake codecov

Check out KBSecret’s repository for a working example.

Thanks for reading!