The handle_continue/2 callback prevents race conditions and allows for faster, asynchronous initialization.

Let’s start by looking at the problems that handle_continue solves. If you don’t care about the problems and just want the code, you can skip to the end or checkout (pun intended) the GitHub repo.

Here is a short, all-in-one example that shows an application which starts three instances of a process, each of which loads different data when it starts up:

An application with three processes which are started synchronously, one after another.
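Since the code sample itself isn't reproduced here, a minimal sketch of what such an application might look like (the module names MyApp.MyServer and MyApp.Application, the server names, and the fetch_data/1 helper are all assumptions, with Process.sleep/1 standing in for the HTTP request):

```elixir
defmodule MyApp.MyServer do
  use GenServer

  def start_link(name) do
    GenServer.start_link(__MODULE__, name, name: name)
  end

  @impl true
  def init(name) do
    # The (fake) HTTP request runs here, inside init/1, so the
    # supervisor blocks until it completes.
    data = fetch_data(name)
    {:ok, %{name: name, data: data}}
  end

  defp fetch_data(name) do
    Process.sleep(3_000) # pretend this is a slow HTTP call
    %{fetched_for: name, count: 0}
  end
end

defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      Supervisor.child_spec({MyApp.MyServer, :server_one}, id: :server_one),
      Supervisor.child_spec({MyApp.MyServer, :server_two}, id: :server_two),
      Supervisor.child_spec({MyApp.MyServer, :server_three}, id: :server_three)
    ]

    Supervisor.start_link(children, strategy: :one_for_one)
  end
end
```

With three children each sleeping ~3 seconds in init/1, the whole supervision tree takes roughly 9 seconds to come up.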

The supervisor iterates through its list of children, calling each child’s init/1 callback. This is done synchronously, one child after another. Since we are performing a (fake) HTTP request to fetch data for our processes’ state, this is kind of slow, and would become ever slower with every child process we add:

The child processes starting one-after-another

Since the processes don’t depend on each other, it would be nice if we could start them up all at the same time, instead of waiting ~9 seconds for them all to initialize sequentially.

A common “trick” that people use for asynchronously initializing a process is to have that process send itself a message using self/0 (which returns the process’s pid) and then Kernel.send/2, Process.send/3, or Process.send_after/4.

Let’s modify our init/1 callback to defer the HTTP call and perform it asynchronously, so that the init/1 function can return faster, and the supervisor can move on to the next child sooner:

An example of performing asynchronous initialization of a process. Complete code here.
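The deferred version might look something like this sketch (the :fetch_data message name and fetch_data/1 helper are assumptions): init/1 sends the process a message and returns right away, and the slow fetch happens later in handle_info/2:

```elixir
@impl true
def init(name) do
  # Return immediately so the supervisor can move on to the next
  # child; the actual fetch happens when :fetch_data is processed.
  send(self(), :fetch_data)
  {:ok, %{name: name, data: nil}}
end

@impl true
def handle_info(:fetch_data, state) do
  {:noreply, %{state | data: fetch_data(state.name)}}
end
```

Note that the state starts out with data: nil, which is exactly what comes back to bite us below.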

Now when we start our application, everything is initialized a lot faster because the HTTP calls are no longer being performed in the init/1 callback:

A faster overall startup time, since the init callbacks are now delegating their HTTP calls

This seems great: we have decreased our startup (or restart) time by taking the slow code out of our init/1 callback, and everything looks okay.

But there is a problem; let’s take a look at another example.

We will introduce a new process, Spammer, which is constantly trying to send messages to the MyServer processes. In this example it is using GenServer.cast/2 to represent any other messages that might be sent in a real application. The MyServer processes will process these messages via a new handle_cast/2 callback:

New Spammer process which sends messages to the other processes. Complete code here.
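A sketch of what the Spammer and the new handle_cast/2 callback could look like (the :increment message name comes from the error described below; the server names and the rest are assumptions):

```elixir
defmodule MyApp.Spammer do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, nil)

  @impl true
  def init(state) do
    send(self(), :spam)
    {:ok, state}
  end

  @impl true
  def handle_info(:spam, state) do
    # Cast :increment to the MyServer processes by name, over and over.
    for name <- [:server_one, :server_two, :server_three] do
      GenServer.cast(name, :increment)
    end

    Process.send_after(self(), :spam, 10)
    {:noreply, state}
  end
end
```

And in MyApp.MyServer, the callback that handles those casts:

```elixir
@impl true
def handle_cast(:increment, state) do
  # If data is still nil (fetch not finished), Map.update!/3
  # raises and the process crashes.
  {:noreply, %{state | data: Map.update!(state.data, :count, &(&1 + 1))}}
end
```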

In the application’s start/2 function we set up the supervisor. The Spammer child takes no arguments, so we just specify the module name. We place/start the Spammer before the MyServer processes because this illustrates what could happen in crash-restart situations.

The application’s start function. Complete code here.
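Sketched, with the Spammer first in the child list (child ids and names are assumptions, as before):

```elixir
@impl true
def start(_type, _args) do
  children = [
    # No arguments needed, so just the module name; deliberately
    # started before the MyServer processes.
    MyApp.Spammer,
    Supervisor.child_spec({MyApp.MyServer, :server_one}, id: :server_one),
    Supervisor.child_spec({MyApp.MyServer, :server_two}, id: :server_two),
    Supervisor.child_spec({MyApp.MyServer, :server_three}, id: :server_three)
  ]

  Supervisor.start_link(children, strategy: :one_for_one)
end
```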

When we run this, we get an error:

Race condition! The increment message arrived before the data was fetched

Looking at the logs, we can see that the increment message arrived before the data was fetched, and the process crashed because we were expecting data to be a map, but it was still nil.

Now is a good time to highlight something that we have just demonstrated: sending yourself a message in the init/1 callback does not mean that it will be the first message in the mailbox.

This means that it is pretty easy to introduce a race condition when, for example, you are sending messages by name (and not by pid). This can happen on startup (as we just demonstrated) but could also happen any time the MyServer process is restarted.
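This is the gap that handle_continue/2 closes: when init/1 returns {:ok, state, {:continue, term}}, OTP guarantees that handle_continue/2 runs before any message in the mailbox is handled, while still letting init/1 return immediately. A sketch of the fixed callbacks (fetch_data/1 is the same assumed helper as above):

```elixir
@impl true
def init(name) do
  # Returns right away, but handle_continue/2 is guaranteed to run
  # before any :increment cast is processed.
  {:ok, %{name: name, data: nil}, {:continue, :fetch_data}}
end

@impl true
def handle_continue(:fetch_data, state) do
  {:noreply, %{state | data: fetch_data(state.name)}}
end
```

We keep the fast startup of the send-yourself-a-message trick, without the window in which data is still nil.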