Devel::Cover is wonderful, and so is Dist::Zilla::App::Command::cover. Measuring your test coverage can help you increase your confidence in the correctness and reliability of your software.

The numbers in the coverage report are only tools, however. Your job as an intelligent and capable human being is to interpret those numbers and to understand what they mean.

For example, my new book formatting software (which needs more documentation before I release it publicly) has a handful of hard-coded escape sequences the LaTeX emitter uses to produce the correct output. Part of that code is:

    my %characters =
    (
        acute    => sub { qq|\\'| . shift },
        grave    => sub { qq|\\`| . shift },
        uml      => sub { qq|\\"| . shift },
        cedilla  => sub { '\c' },              # ccedilla
        opy      => sub { '\copyright' },      # copy
        dash     => sub { '---' },             # mdash
        lusmn    => sub { '\pm' },             # plusmn
        mp       => sub { '\&' },              # amp
        rademark => sub { '\texttrademark' },  # trademark
    );

    sub emit_character
    {
        my $self    = shift;
        my $content = eval { $self->emit_kids( @_ ) };

        return unless defined $content;

        if (my ($char, $class) = $content =~ /(\w)(\w+)/)
        {
            return $characters{$class}->($char)
                if exists $characters{$class};
        }

        return Pod::Escapes::e2char( $content );
    }

While emit_character() is interesting on its own and worthy of testing, the important code is the %characters data structure. Devel::Cover can't tell me if every entry in that hash gets accessed appropriately (though I suppose it could in theory track the use of the anonymous functions). Only my knowledge of the tests and the code can satisfy me that I've tested this important code thoroughly.
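One way to close that gap is a data-driven test that exercises every entry in %characters. This is only a sketch of the idea, not the book formatter's actual test suite; it reproduces the dispatch table inline so it runs standalone, and the case list and test names are my own invention:

```perl
use strict;
use warnings;
use Test::More;

# Reproduced from the emitter so this sketch is self-contained; a real
# test would use the table exported from the emitter module instead.
my %characters =
(
    acute    => sub { qq|\\'| . shift },
    grave    => sub { qq|\\`| . shift },
    uml      => sub { qq|\\"| . shift },
    cedilla  => sub { '\c' },              # ccedilla
    opy      => sub { '\copyright' },      # copy
    dash     => sub { '---' },             # mdash
    lusmn    => sub { '\pm' },             # plusmn
    mp       => sub { '\&' },              # amp
    rademark => sub { '\texttrademark' },  # trademark
);

# One [ class, first character, expected LaTeX ] triple per entry,
# matching the /(\w)(\w+)/ split in emit_character().
my @cases =
(
    [ acute    => 'e', q|\'e|            ],
    [ grave    => 'a', q|\`a|            ],
    [ uml      => 'u', q|\"u|            ],
    [ cedilla  => 'c', '\c'              ],
    [ opy      => 'c', '\copyright'      ],
    [ dash     => 'm', '---'             ],
    [ lusmn    => 'p', '\pm'             ],
    [ mp       => 'a', '\&'              ],
    [ rademark => 't', '\texttrademark'  ],
);

for my $case (@cases)
{
    my ($class, $char, $want) = @$case;
    ok exists $characters{$class}, "dispatch table has '$class'";
    is $characters{$class}->($char), $want, "'$class' emits $want";
}

# Fail loudly if someone adds a table entry without a matching case.
is_deeply [ sort keys %characters ],
          [ sort map { $_->[0] } @cases ],
          'every dispatch entry has a test case';

done_testing;
```

The final is_deeply() is the piece Devel::Cover can't give you: when a new escape lands in the table without a corresponding case, the test fails even though the line itself would count as "covered" the first time any test loads the module.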