The steps I followed to explain what the project does, and how it does it, were:

telling some technical details about the project and its overall architecture

asking the new developer to explore the Storybook stories

asking the new developer to run and watch the Cypress tests

asking the new developer to run and read the output of the Jest tests

asking the new developer to read the code of the tests

measuring the new developer's understanding of the project by asking him to change and test some flows

The handover went so well that I decided to write this post 😊. The new developer (a quite smart one, I have to admit, hi Lorenzo 👋) showed me that he knew the user flows pretty well and that he could change big parts of the project with confidence (and the related tests too). More than that, he was able to implement new user flows and test them the same way I did with the rest of the project.

The good and the bad parts

Well, even if I am pretty satisfied with this experience, I must admit that something was not perfect. I am not speaking about the process itself or the idea of using the tests as a documentation tool, but about the way I wrote the tests. Some details misled the comprehension of the developer who had to read them. Let’s examine a quick list of the pros and cons of the tests I wrote:

What was good with my tests:

Testing everything from the UI perspective with Cypress: for a front-end flow, the UI speaks louder than anything else

Having a well-curated Storybook: writing stories in Storybook is already testing! It is visual testing that can be easily frozen with a plugin like Storyshots

Straightforward test code: the code of the tests must be super simple. Simple to read, condition-free, with a low abstraction level, with a good level of logging, etc. Always remember that the tests must reduce the cognitive load of reading and understanding the code, hence their complexity should be an order of magnitude lower than that of the code to be understood

Sharing some step “ids” between the code and the tests: if a user flow is quite long, it can be useful to share some “steps” between the code and the code of the tests (mine were comments like “/** #1 */”, “/** #2 */”, etc.)

Having more low-level tests for the parts of the code (like some sagas, as shown in the above screenshot) that could be hard to understand (and hence hard to update or refactor to a simpler version)
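The shared step “ids” idea can be sketched like this — a minimal, runnable sketch in plain JavaScript, where the flow function, the fake API, and the credentials are all hypothetical names; the only convention carried over from my tests is the “/** #N */” markers that appear in both the code and the tests:

```javascript
// login.js (sketch): the flow code, annotated with step markers
function loginFlow(api, credentials) {
  /** #1 */ // validate the credentials locally
  if (!credentials.email || !credentials.password) {
    return { ok: false, step: 1 };
  }
  /** #2 */ // call the authentication endpoint
  const session = api.authenticate(credentials);
  /** #3 */ // expose the session to the rest of the app
  return { ok: true, step: 3, session };
}

// login.test.js (sketch): the test repeats the same step markers,
// so the reader can jump back and forth between code and test
const fakeApi = { authenticate: () => ({ token: "abc" }) };

/** #1 */ // missing password: the flow must stop at step 1
console.assert(loginFlow(fakeApi, { email: "a@b.c" }).step === 1);

/** #2 */ /** #3 */ // valid credentials: the flow completes
const result = loginFlow(fakeApi, { email: "a@b.c", password: "x" });
console.assert(result.ok && result.session.token === "abc");
```

The markers cost nothing in the code but give the reader a shared map of the flow.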

What was bad with my tests:

Some test descriptions were not perfect: good storytelling skills are pretty important when writing the descriptions of the tests

Not leveraging Gherkin to write the tests themselves: I was not so experienced at the beginning and I decided not to consider writing BDD-style tests with Gherkin. Take a look at it to understand its storytelling advantages

Sharing some fixtures between different tests: the fixtures are the static versions of the server responses. There is nothing wrong with the idea of recycling them when they are identical, but I should have cared more about their names. Using a “registration-success.json” fixture for both the registration and the login flows (this is just an example, I made this mistake in more complex cases) leaves some doubts in the mind of the new developer. This is the kind of thing that stays frozen in the memory of the developer who wrote the code (why can the same fixture be used for two different cases?), a really bad thing from the company’s perspective
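For reference, this is roughly what a Gherkin scenario looks like — the flow and the wording here are hypothetical, not taken from my project, but note how the Given/When/Then structure forces exactly the user-perspective storytelling I am advocating:

```gherkin
Feature: Registration
  Scenario: A visitor registers with a valid email
    Given I am on the registration page
    When I fill the form with a valid email and password
    And I submit the form
    Then I see the confirmation message
```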
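The fixture-naming problem can be made concrete with a tiny sketch — the file names and payloads below are invented for illustration; the point is that each flow should own a fixture named after it, even when the payloads happen to be identical today:

```javascript
// Sketch: one fixture per flow, even when the payloads coincide.
// fixtures/registration-success.json
const registrationSuccess = { token: "abc", user: { id: 1 } };
// fixtures/login-success.json
const loginSuccess = { token: "abc", user: { id: 1 } };

// The duplication is cheap: the two payloads can diverge later without
// misleading the reader of either test, and neither test name lies.
console.assert(
  JSON.stringify(registrationSuccess) === JSON.stringify(loginSuccess)
);
```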

In the end, writing tests allows you to:

have well-descriptive documentation: the descriptions of the tests are always written from the user’s perspective, not from the developer’s

have an easy handover

avoid relying on the historical memory of some employees (you, for example)

document some choices that could sound “strange”, or simply complex, when reading the code, but that are perfectly reasonable from the user’s perspective

not to mention the obvious testing advantages, like working regression-free, leveraging some fast and automated tools, etc. 😊

I am eager to hear about your experiences on the same topic! Feel free to leave a comment about them 🤗