My development workflow

The code I write is influenced by certain factors upstream of the code itself.

Before I start coding a feature, I do everything I can to ensure that the user story I’m working on is small and crisply defined. By small, I mean not more than a day’s worth of work. By crisply defined, I mean that the user story includes a reasonably precise and detailed description of the story’s scope.

I also like to put each user story through a reasonably thorough scrutiny process. In my experience, user stories often aren’t thoroughly thought through, and they contain enough ambiguity or inconsistency that they’re not actually possible to implement as described.

I find that if care is taken to make sure the user stories are high-quality before development work begins, the development work goes dramatically more smoothly.

Assuming I’m starting with good user stories, I’ll grab a story to work on and break it into subtasks. Each day I keep a to-do list, and on it I’ll put the subtasks for whatever story I’m currently working on. Here’s an example of what that might look like, at least as a first draft:

Feature: As a staff member, I can manage insurance payments

Scaffolding for insurance payments

Feature spec for creating insurance payment (valid inputs)

Feature spec for creating insurance payment (invalid inputs)

Feature spec for updating insurance payment

(By the way, I write more about my specific formula for writing feature specs in my post https://www.codewithjason.com/repeatable-step-step-process-writing-rails-integration-tests-capybara/.)
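To make the first of those subtasks concrete, here’s a rough sketch of what the “valid inputs” and “invalid inputs” feature specs might look like. This is my own illustration, not code from the post: the route helper, field names, button label, and error message are all assumptions, and the sketch presumes a standard Rails + RSpec + Capybara setup.

```ruby
require "rails_helper"

RSpec.describe "Creating an insurance payment", type: :feature do
  scenario "with valid inputs" do
    visit new_insurance_payment_path      # assumed scaffold-style route
    fill_in "Amount", with: "150.00"      # assumed form field
    click_on "Create Insurance payment"   # assumed scaffold button label
    expect(page).to have_content("150.00")
  end

  scenario "with invalid inputs" do
    visit new_insurance_payment_path
    click_on "Create Insurance payment"   # submit with the form left blank
    expect(page).to have_content("can't be blank")
  end
end
```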

Where my development workflow and my testing workflow meet

Looking at the list above, you can see that my to-do list is expressed mostly in terms of tests. I do it this way because I know that if I write a test for a certain piece of functionality, then I’ll of course have to build the functionality itself as part of the process.

When I use TDD and when I don’t

Whether or not I use TDD on a feature depends largely on whether it’s a whole new CRUD interface or a more minor modification.

If I’m working on a whole CRUD interface, I’ll use scaffolding to generate the code, and then I’ll make a pass over everything and write tests. (Again, I write more about the details of this process in the post linked above.) The fact that I use scaffolding for my CRUD interfaces makes TDD impossible for that code. This is a trade-off I’m willing to make because of how much work scaffolding saves me and because I pretty much never forget to write feature specs for my scaffold-generated code.

It’s rare that I need to write any serious amount of model specs for scaffold-generated code. I usually write tests for my validations using shoulda-matchers, but that’s all. (I often end up writing more model specs later as the model grows in feature richness, but not in the very beginning.)
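For instance, a validations-only model spec using shoulda-matchers might look something like this. The model and attribute names are assumptions carried over from the insurance payment example, not code from the post:

```ruby
require "rails_helper"

RSpec.describe InsurancePayment, type: :model do
  # shoulda-matchers one-liners covering the model's validations
  it { is_expected.to validate_presence_of(:amount) }
  it { is_expected.to validate_numericality_of(:amount).is_greater_than(0) }
end
```

Each one-liner expands into a full example that exercises the corresponding validation, which is why this tends to be all the model spec a fresh scaffold needs.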

If instead of writing a whole new CRUD interface I’m just making a modification to existing code (or fixing a bug), I usually will use TDD. I find that TDD in these cases is typically easier and faster than doing test-after or skipping tests altogether. If, for example, I need to add a new piece of data to a CSV file my program generates, I’ll go to the relevant test file and add a failing expectation for that new piece of data. Then I’ll add the code that puts the data in place to make the test pass.
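A sketch of that CSV example, with everything invented for illustration (the class name, the columns, and the data are not from the post): the TDD step would be to first add a failing expectation like `expect(csv).to include("Payer")` to the existing spec, then make a change like the one below to turn it green.

```ruby
require "csv"

# Hypothetical report generator for illustration purposes only.
class PaymentReport
  def initialize(payments)
    @payments = payments
  end

  def to_csv
    CSV.generate do |csv|
      # "Payer" is the newly added column that the failing expectation demanded
      csv << ["Date", "Amount", "Payer"]
      @payments.each do |payment|
        csv << [payment[:date], payment[:amount], payment[:payer]]
      end
    end
  end
end

report = PaymentReport.new(
  [{ date: "2023-01-05", amount: "150.00", payer: "Acme Insurance" }]
)
puts report.to_csv
```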

The other case where I usually practice TDD is when the feature I’m writing is not a CRUD-type feature but rather more of a model-based feature, where the interesting work happens “under the hood”. In those cases I also find TDD to be easier and faster than not-TDD.
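As an illustration of that kind of “under the hood” feature, imagine a late-fee calculation that lives in a plain Ruby object. Everything here (the class name, the rate, the grace period) is invented; the TDD flow would be to write a spec asserting each rule first, then write the method body to satisfy it.

```ruby
# Hypothetical domain object; names and business rules are made up.
class LateFeeCalculator
  DAILY_RATE = 0.50
  GRACE_PERIOD_DAYS = 10

  def initialize(days_overdue:)
    @days_overdue = days_overdue
  end

  # No fee within the grace period; a flat daily rate after that.
  def fee
    billable_days = [@days_overdue - GRACE_PERIOD_DAYS, 0].max
    (billable_days * DAILY_RATE).round(2)
  end
end

LateFeeCalculator.new(days_overdue: 5).fee   # within the grace period
LateFeeCalculator.new(days_overdue: 15).fee  # 5 billable days
```

Because there’s no HTTP or UI involved, the feedback loop of spec-fail-code-pass is very fast here, which is a big part of why TDD wins in this situation.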

The kinds of tests I write and the kinds I don’t

I write more about this in my book, but I tend to mostly write model specs and feature specs. I find that most other types of tests tend to have very little value. By the way, I’m using the language of “spec” instead of “test” because I use RSpec instead of Minitest, but my high-level approach would be exactly the same under any testing framework.

When I use mocks and stubs and when I don’t

I almost never use mocks or stubs in Rails projects. In 8+ years of Rails development, I’ve hardly ever done it.

How I think about test coverage

I care about test coverage enough to measure it, but I don’t care about test coverage enough to set a target for it or impose any kind of rule on myself or anything like that. I’ve found that the natural consequence of me following my normal testing workflow is that I end up with a pretty decent test coverage level. Today I checked the test coverage on the project I’ve been working on for the last year or so and the measurement was 97%.
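The post doesn’t say which tool produced that number; in Ruby the usual choice is SimpleCov, so treat this setup as an assumption about tooling rather than the author’s actual configuration:

```ruby
# spec/spec_helper.rb
# SimpleCov must be required and started before any application code loads,
# or that code won't be counted in the coverage report.
require "simplecov"

SimpleCov.start "rails" do
  add_filter "/spec/" # don't count the tests themselves toward coverage
end
```

After a test run, SimpleCov writes an HTML report to coverage/index.html with the overall percentage.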

The test coverage metric isn’t my main barometer for the health of a project’s tests, though. To me it seems much more useful to pay attention to how many exceptions pop up in production and what it feels like to do maintenance coding on the project. “What it feels like to do maintenance coding” is obviously not a very quantifiable metric, but of course not everything that counts can be counted.