Take a look at the following code:

A code example representing a node module to flatten an array inside a file

It looks for a file that contains a JSON Array, flattens it, and then writes the result back to the file system. When everything is finished, it prints Done! to the console.
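Since the original snippet was shown as an image, here is a minimal sketch of what such a module might look like. The file name, variable names, and the assumption that the flatten-array module mentioned later exports a single flattening function are all illustrative, not taken from the article:

```js
// flattenArrayInsideFile.js — hypothetical reconstruction, everything in one place
const fs = require('fs');
const flatten = require('flatten-array'); // assumed API: flatten(nestedArray) -> flat array

const flattenArrayInsideFile = fileName =>
  new Promise((resolve, reject) => {
    fs.readFile(fileName, 'utf-8', (readError, content) => {
      if (readError) return reject(readError);

      const flattenedArray = flatten(JSON.parse(content));

      fs.writeFile(fileName, JSON.stringify(flattenedArray), writeError => {
        if (writeError) return reject(writeError);
        resolve();
      });
    });
  });

flattenArrayInsideFile('numbers.json').then(() => console.log('Done!'));
```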

Taking a close look at that code, it is possible to identify some patterns that would provide very good abstractions and reduce the cognitive load of this piece of functionality.

To understand what can be abstracted and what cannot, we need to stop thinking about the implementation details and start thinking about what needs to be done in an operation whose responsibility is to “flatten an array inside a file”:

1. Read the file
2. Convert String to Array
3. Flatten the Array
4. Convert the flattened Array to String
5. Write to the file

1. Read the file

The only thing we need here is readFile(fileName), and that is it. If we have a readFile function, we can abstract away all the concerns of how to read that file, including character decoding and the usage of Promises:

A code example representing a node module to read a file
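A minimal sketch of such a module, assuming the standard Node.js fs API and the UTF-8 decoding discussed later in the article:

```js
// readFile.js — a minimal sketch wrapping fs.readFile in a Promise
const fs = require('fs');

const readFile = fileName =>
  new Promise((resolve, reject) => {
    fs.readFile(fileName, 'utf-8', (error, content) => {
      if (error) return reject(error);
      resolve(content);
    });
  });

module.exports = readFile;
```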

On the calling side, it is as simple as a readFile(fileName) call. All concerns of how to implement Promises are gone. All concerns about character encoding are gone. And when you want to unit test it, you don’t even need to touch the file system; you can just mock the readFile dependency.

2. Convert String to Array

Now we need to convert the resulting String of the file content to an Array Literal. We could simply use the JSON.parse function to do the work for us inside the flattenArrayInsideFile(fileName) function. But taking a step back, is it really the responsibility of that function to convert the data type returned by the readFile(fileName) operation? What if the operation that reads the file could give us the Array Literal format we want?

Of course, we can’t change the readFile(fileName) function. It has one responsibility and one responsibility only, which is to read the file, decoding it with the UTF-8 character encoding. Also, if we changed readFile(fileName) to return a type other than String, it would be confusing, because not all files contain data that can be converted to JSON. That is functionality specific to files that contain something that can be converted to JSON.

The solution, in this case, would be to read the file as JSON, or more formally, readFileAsJSON(fileName):

A code example representing a node module to read a file as JSON
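A minimal sketch, assuming it builds on the readFile module from the previous step:

```js
// readFileAsJSON.js — delegates reading to readFile and parses the result
const readFile = require('./readFile');

const readFileAsJSON = fileName =>
  readFile(fileName).then(content => JSON.parse(content));

module.exports = readFileAsJSON;
```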

3. Flatten the Array

This is probably the core functionality of what we are trying to achieve: flattening the Array Literal inside the file. Because it is the core functionality, it makes sense for it to be done here, without any abstraction. But just because this function should know how to perform the flattening, it doesn’t mean that the whole algorithm should be implemented there.

Flattening an array is a pretty common task, so it makes sense to abstract this functionality out, and that is what the first example does by using a module called flatten-array, which is available on npm.
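For illustration only, assuming flatten-array exports a single function that receives a nested array and returns a flat one (the exact API of the package is not shown in the article):

```js
const flatten = require('flatten-array');

// Assumed behaviour: nested arrays become a single flat array
flatten([1, [2, [3, 4]], 5]); // -> [1, 2, 3, 4, 5]
```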

4. Convert the flattened Array to String

Now comes another interesting part. In the first example, we are using the following code to convert the flattened Array into a stringified representation:

A code example representing a call to JSON "stringify" function passing a "flattened array" variable as an argument
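Roughly, the call looks like this (the variable name is assumed):

```js
const stringifiedContent = JSON.stringify(flattenedArray);
```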

There is no big problem with that, except for the fact that it is not the responsibility of the main function to convert anything. It should only be concerned with flattening the content of the file, and that is it.

In this case, we should take the same approach we took for the readFileAsJSON(fileName) function and delegate the responsibility of formatting the content to an operation that is JSON-aware, so that it is written correctly to the output. We need a writeFileToJSON(fileName, content):

A code example representing a node module to write a file to JSON
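A sketch of what that first version might look like, with the stringification and the promisified write still living together:

```js
// writeFileToJSON.js — first version: stringifies AND promisifies the disk write
const fs = require('fs');

const writeFileToJSON = (fileName, content) =>
  new Promise((resolve, reject) => {
    fs.writeFile(fileName, JSON.stringify(content), error => {
      if (error) return reject(error);
      resolve();
    });
  });

module.exports = writeFileToJSON;
```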

5. Write to the file

In the previous example, the function is doing more than just converting the content to a stringified representation. It is also concerned with the "promisification" of the task that writes the file to disk. The solution to reducing its responsibility would be to create a function whose sole concern is to write the content to disk and use that instead, the same way we are doing for reading the file:

A code example representing a node module to write a file
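A minimal sketch of the dedicated write module, and of writeFileToJSON refactored to delegate to it:

```js
// writeFile.js — sole concern: write content to disk, returning a Promise
const fs = require('fs');

const writeFile = (fileName, content) =>
  new Promise((resolve, reject) => {
    fs.writeFile(fileName, content, error => {
      if (error) return reject(error);
      resolve();
    });
  });

module.exports = writeFile;
```

```js
// writeFileToJSON.js — refactored: only converts to JSON, delegates the write
const writeFile = require('./writeFile');

const writeFileToJSON = (fileName, content) =>
  writeFile(fileName, JSON.stringify(content));

module.exports = writeFileToJSON;
```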

This way, we can reuse the functionality without having to promisify it all the time. Also, it makes it easier to mock the underlying write operation in writeFileToJSON(fileName, content) for unit tests, because you don’t have to be concerned with the file system, which helps us follow the principle of not mocking objects you don’t own.
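As an illustration of that testing benefit, here is a hypothetical unit test, assuming Jest as the test runner (the article does not name one):

```js
// writeFileToJSON.test.js — mocks the writeFile module we own, not the file system
jest.mock('./writeFile', () => jest.fn(() => Promise.resolve()));

const writeFile = require('./writeFile');
const writeFileToJSON = require('./writeFileToJSON');

test('stringifies the content before delegating the write', () =>
  writeFileToJSON('numbers.json', [1, 2, 3]).then(() => {
    expect(writeFile).toHaveBeenCalledWith('numbers.json', '[1,2,3]');
  }));
```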

By separating the concerns and trying to create more efficient abstractions, we can build a few functions that are perfectly decoupled and reusable. This way we don't need to think too much when making changes to the domain.

Designing an application composed of small functional modules brings us one step closer to being able to change any part of the domain without unintended side effects.

This is the resulting code:

A code example representing a node module consuming the other modules created so far
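A sketch of how the refactored module might compose the pieces above (the flatten-array API remains an assumption):

```js
// flattenArrayInsideFile.js — refactored to compose the small modules
const flatten = require('flatten-array');
const readFileAsJSON = require('./readFileAsJSON');
const writeFileToJSON = require('./writeFileToJSON');

const flattenArrayInsideFile = fileName =>
  readFileAsJSON(fileName)
    .then(flatten)
    .then(flattenedArray => writeFileToJSON(fileName, flattenedArray));

module.exports = flattenArrayInsideFile;
```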

And the usage is as simple as this:

A code example showing how to use the refactored "flatten array inside file" function
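Something along these lines (the file name is illustrative):

```js
const flattenArrayInsideFile = require('./flattenArrayInsideFile');

flattenArrayInsideFile('numbers.json').then(() => console.log('Done!'));
```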

It is not about lines of code. It is not about one line modules. It is not even about reusability. It is all about building efficient and useful abstractions that make your code easier to reason about.