Illustration created for “A Journey With Go”, made from the original Go Gopher, created by Renee French.

ℹ️ This article is based on Go 1.13.

Creating goroutines in Go is easy and fast. However, Go can run at most one goroutine per core at the same time, so it needs a way to park the other goroutines and make sure the load is well balanced across the processors.

Goroutine queues

Go manages the waiting goroutines at two levels, thanks to local queues and a global one. The local queues are attached to each processor, while the global queue is unique and shared by all processors:

Global and local queues

Each local queue has a maximum capacity of 256 goroutines; beyond that, any new incoming goroutine is pushed to the global queue. Here is an example with a program that spawns thousands of goroutines:

package main

import "sync"

func main() {
	var wg sync.WaitGroup

	for i := 0; i < 2000; i++ {
		wg.Add(1)
		go func() {
			a := 0
			for i := 0; i < 1e6; i++ {
				a += 1
			}
			wg.Done()
		}()
	}

	wg.Wait()
}

Here are the traces of the scheduler obtained with two processors:

Details of the local and global queues

The traces show the number of goroutines in the global queue with the runqueue attribute, and the local queues of P0 and P1 respectively in brackets [3 256]. When a local queue is full with 256 awaiting goroutines, the next ones stack up in the global queue, as we can see with the growing runqueue attribute.
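These traces can be reproduced with the scheduler tracing built into the Go runtime; assuming the program above is saved as main.go:

```shell
# schedtrace=10 prints a scheduler summary every 10 ms, scheddetail=1 adds
# the per-P and per-goroutine details, and GOMAXPROCS=2 limits the run to
# two processors.
GOMAXPROCS=2 GODEBUG=schedtrace=10,scheddetail=1 go run main.go
```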

Goroutines do not go to the global queue only when a local queue is full; they are also pushed to it when Go injects a list of goroutines into the scheduler, e.g. from the network poller, or goroutines that were asleep during the garbage collection.

Here is the diagram of the previous example:

Local queues have up to 256 goroutines

However, we might wonder why the local queue of P0 was not empty in the previous example. Go uses different strategies to make sure each processor keeps busy.

Work-stealing

When a processor does not have any work, it applies the following rules until one can be satisfied:

pull work from the local queue

pull work from the global queue

pull work from the network poller

steal work from the other P’s local queues
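The rules above can be sketched with a toy model. The queue layout and names here are simplified assumptions, not the runtime's (the real scheduler uses a fixed-size ring per P plus a runnext slot, and occasionally checks the global queue even when local work exists):

```go
package main

import "fmt"

// goroutineID stands in for a runnable goroutine.
type goroutineID int

// proc models a P with a local run queue (a simplification of the
// runtime's fixed-size ring buffer).
type proc struct {
	id    int
	local []goroutineID
}

// findWork mirrors the order described above: local queue, global queue,
// network poller, then stealing half of another P's local queue.
func findWork(p *proc, global, netpoll *[]goroutineID, others []*proc) (goroutineID, bool) {
	if len(p.local) > 0 {
		g := p.local[0]
		p.local = p.local[1:]
		return g, true
	}
	if len(*global) > 0 {
		g := (*global)[0]
		*global = (*global)[1:]
		return g, true
	}
	if len(*netpoll) > 0 {
		g := (*netpoll)[0]
		*netpoll = (*netpoll)[1:]
		return g, true
	}
	for _, victim := range others {
		if n := len(victim.local); n > 0 {
			half := (n + 1) / 2
			stolen := victim.local[:half]
			victim.local = victim.local[half:]
			// run the first stolen goroutine, queue the rest locally
			p.local = append(p.local, stolen[1:]...)
			return stolen[0], true
		}
	}
	return 0, false
}

func main() {
	p0 := &proc{id: 0}
	p1 := &proc{id: 1, local: []goroutineID{1, 2, 3, 4, 5, 6, 7}}
	global, netpoll := []goroutineID{}, []goroutineID{}

	// P0 is idle: its local queue, the global queue, and the poller are
	// all empty, so it steals half of P1's seven goroutines.
	g, _ := findWork(p0, &global, &netpoll, []*proc{p1})
	fmt.Println("P0 runs", g, "and keeps", p0.local) // P0 runs 1 and keeps [2 3 4]
	fmt.Println("P1 keeps", p1.local)                // P1 keeps [5 6 7]
}
```

With seven goroutines on the victim, four are stolen: one runs immediately on the thief while the other three land in its local queue, matching the traces discussed below.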

In our previous example, the main function is running and creating goroutines on P1. While the first goroutines are enqueued in P1’s local queue, P0 is looking for work. However, its local queue, the global queue, and the network poller are all empty. Its last resort is to steal work from P1:

Work-stealing by P0

Here are the traces of the scheduler before and after the work-stealing:

Work-stealing by P0

The traces show how a processor steals work from another one. It takes half of the goroutines from the victim's local queue: out of the seven goroutines, four are stolen, one of which runs immediately on P0 while the other three go into its local queue. The work is now well balanced between the processors. This can be confirmed with the execution tracing:

The goroutines are well dispatched, and since there is no I/O, they run back to back without switching. Let’s now see what happens when some I/O, such as file operations, is involved.

I/O and global queue

Let’s take an example that involves file operations:

package main

import (
	"io/ioutil"
	"strconv"
	"sync"
)

func main() {
	var wg sync.WaitGroup

	for i := 0; i < 20; i++ {
		wg.Add(1)
		go func() {
			a := 0
			for i := 0; i < 1e6; i++ {
				a += 1
				if i == 1e6/2 {
					bytes, _ := ioutil.ReadFile(`add.txt`)
					inc, _ := strconv.Atoi(string(bytes))
					a += inc
				}
			}
			wg.Done()
		}()
	}

	wg.Wait()
}

The variable a is now incremented from time to time by the number written in a file. Here are the new traces:

With this example, we can see that each goroutine is no longer processed by a single processor. In the case of a system call, Go uses the network poller, which puts the goroutine back in the global queue when the call is done. Here is an illustration with goroutine #35:
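This hand-off can be sketched with the same kind of toy model; the names here are illustrative, not the runtime's (internally the goroutine is detached from its P during the call and made runnable again on the global queue once it completes):

```go
package main

import "fmt"

type goroutineID int

// scheduler is a toy model holding only the global queue.
type scheduler struct {
	global []goroutineID
}

// syscallDone models a goroutine returning from a blocking call: it is
// pushed on the *global* queue, so any P may pick it up, not necessarily
// the one it was running on before the call.
func (s *scheduler) syscallDone(g goroutineID) {
	s.global = append(s.global, g)
}

// schedule lets a processor take the next goroutine from the global queue.
func (s *scheduler) schedule(p int) (goroutineID, bool) {
	if len(s.global) == 0 {
		return 0, false
	}
	g := s.global[0]
	s.global = s.global[1:]
	return g, true
}

func main() {
	s := &scheduler{}
	// goroutine #35 was on P1 when it entered the file-read system call
	s.syscallDone(35)
	// after the call, P0 happens to look for work first and picks it up
	g, ok := s.schedule(0)
	fmt.Println(g, ok) // 35 true
}
```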

I/O operations put the work back to the global queue