There are four ways of consuming readable streams. Choose one method per stream and stick with it: mixing the APIs while consuming data from a single stream can lead to unexpected behavior and should be avoided.

1. Using `readable.pause()`, `readable.resume()` and the `data` event:

`data` event

emitted whenever the stream passes a chunk of data to the consumer (attaching a listener automatically switches the stream to flowing mode)

`readable.pause()`

pauses the stream, switching it to paused mode

`readable.resume()`

switches the stream to flowing mode

An example of a readable stream being consumed, with its data written to stdout. Nothing very useful, but it serves well as a demonstration:

2. Using `readable.read()` and `readable` event:

`readable` event

emitted when there is data available to be read from the stream (attaching a listener to `readable` switches the stream to paused mode)

`readable.read([size])`

pulls some data out of the internal buffer and returns it, or returns `null` if there is no data left to read. If no encoding is specified, the data is returned as a `Buffer`.

This example is similar to the one above, but uses the second way of consuming a readable stream:

3. Using `readable.pipe()`:

`readable.pipe(writable[, options])`

attaches a writable stream to the readable, switching the readable to flowing mode and pushing all of its data to the attached writable. The flow of data (i.e. backpressure) is handled automatically.

This is the most convenient way to consume a readable stream: it is not verbose, and backpressure and ending the destination stream are handled automatically when the source finishes.

A simple example, copied from one of the previous code snippets:

One thing that is not automatically managed is error handling and propagation. For example, to close each stream when an error occurs, we have to attach `error` event listeners ourselves.

A complete version of consuming readable streams with `pipe()`, including proper error handling:

4. Using Async Iteration / Async Generators:

Readable streams implement the `[Symbol.asyncIterator]` method, so they can be iterated over with `for await...of`.

Async generators are officially available in Node v10+. They are a mix of async functions and generator functions: they implement the `[Symbol.asyncIterator]` method and can therefore be used for async iteration. Since streams are, in essence, a chunked collection of data spread across time, async generators fit them perfectly. Here’s an example:

Consuming Duplex and Transform Streams

Duplex streams implement both the readable and the writable interface. One kind of duplex stream is the `PassThrough` stream. This type of stream is useful when some API expects a readable stream as a parameter, but you also want to write some data to it manually.

To accomplish both needs:

Create an instance of a `PassThrough` stream

Send the stream to the API (the API will use the readable interface of the stream)

Add some data to the stream (using the writable interface of the stream)

This process is shown below:

Transform streams are Duplex streams: they have both a readable and a writable interface, but their main purpose is to transform the data passing through them.

The most common example is compressing data with a built-in transform stream from the `zlib` module:

Useful class methods (Node v10+)

`Stream.finished(stream, callback)`

notifies you when a stream is no longer readable or writable, or has experienced an error or a premature close.

This method is useful for error handling or performing further actions after the stream is consumed. An example:

`Stream.pipeline(...streams[, callback])`

pipes between streams, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete.

This is the cleanest and least verbose way of building stream pipelines. In contrast to `readable.pipe()`, everything is handled automatically, including error propagation and resource cleanup after the pipeline has finished. An example: