OK, you know how it works. Almost everybody who's really comfortable with testing offers the same two pieces of advice:

Use Devel::Cover

Don't strive for 100% coverage

This advice is absolutely correct, but misleading. It leaves you with an obvious question: what coverage do I want? This is where the danger lies.
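For anyone who hasn't run Devel::Cover before, a typical invocation looks something like this (a minimal sketch, assuming a standard `prove`-driven test suite with tests under `t/`):

```shell
# Wipe any stale coverage database from a previous run.
cover -delete

# Run the test suite with Devel::Cover instrumenting every module it loads.
HARNESS_PERL_SWITCHES=-MDevel::Cover prove -lr t/

# Summarize the results and write an HTML report under cover_db/.
cover -report html
```

The HTML report is where you'll see the per-module percentages discussed below.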

While hacking on my Veure project, I often find myself digging into code I don't understand terribly well. That also means I don't yet understand my interface. As a result, I often implement something I think will work, hit my browser and view the results. Which are wrong. Again. So I tinker some more. Again.

Lather, rinse, repeat. The benefit of this strategy is that I tend to add new functionality much faster. The drawback is that once something is working, it's easy to be careless with testing. And here's what I've found:

If it's not covered, it's broken. Every. Single. Time.

I admit that anecdotes are not data, and maybe I'm just a bad programmer, but my code is getting complex enough that there are parts of it which look perfectly fine and are nonetheless wrong. I recently discovered a URL-hacking exploit while trying to increase my code coverage: I focused on a few modules with coverage around 98% and tried to get them to 100%. Three modules in a row had bugs in their handful of untested lines.

You don't want to re-test things that are already tested elsewhere (don't test Moose-generated getters and setters unless you're hacking on Moose itself), but you had better make sure that your own code is setting them appropriately.
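To make that distinction concrete, here's a sketch (the class, attribute, and method names are invented for illustration). The accessor itself is Moose's responsibility; the method that decides when to call it is yours:

```perl
package Character;
use Moose;

# Moose generates this reader/writer; its storage behavior is already
# covered by Moose's own test suite, so re-testing it buys you nothing.
has 'station' => ( is => 'rw', isa => 'Str' );

# This is *your* logic: validation and the decision to set the attribute.
# Untested lines here are exactly where the bugs hide.
sub travel_to {
    my ( $self, $station ) = @_;
    die "unknown station: $station" unless $station =~ /^[\w ]+$/;
    $self->station($station);
    return $self;
}

package main;

my $char = Character->new;
$char->travel_to('Alpha');
print $char->station, "\n";
```

So rather than writing `$char->station('Alpha'); is $char->station, 'Alpha';`, which only exercises Moose, test `travel_to` and let coverage tell you whether the validation branch ever ran.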

So don't strive for 100%, but be very, very careful about the seductive thought of "I have 95% coverage". If your project is sufficiently complex, that last 5% will be a minefield.