Another intermediate pattern: long-running goroutines listening on multiple channels. Consider a scenario where a set of workers listens on two or more channels to do different kinds of work. This is particularly common in many codebases, so I think it is worth pointing out.

type (
	work1 struct {
		query string
	}
	work2 struct {
		command int
	}
	result1 struct {
		answer string
		err    error
	}
	result2 struct {
		output int
		err    error
	}
)

var (
	workc1   = make(chan work1, 4)
	workc2   = make(chan work2, 4)
	resultc1 = make(chan result1, 4)
	resultc2 = make(chan result2, 4)
)

// Result collector listens on the 2 result channels.
go func() {
	for {
		// No default case: block until a result arrives
		// instead of busy-spinning.
		select {
		case r := <-resultc1:
			if r.err != nil {
				log.Println(r.err)
				continue
			}
			processResultType1(r)
		case r := <-resultc2:
			if r.err != nil {
				log.Println(r.err)
				continue
			}
			processResultType2(r)
		}
	}
}()

// 4 workers listen on the 2 work channels.
for i := 0; i < 4; i++ {
	go func() {
		for {
			select {
			case w := <-workc1:
				resultc1 <- processWorkType1(w)
			case w := <-workc2:
				resultc2 <- processWorkType2(w)
			}
		}
	}()
}

Great… wait, that's already 4 channels, for only 2 types of work! And quite a lot of repetitive code. Imagine having to do this with 3, 4, or even 10 types of work (we would need twice as many channels!).

A channel is just a medium for communication between goroutines. A good medium is a stateless one: it shouldn't care what it carries. To reduce the number of channels down to a fixed two (in and out!), regardless of how many types of work there are, we can do this:

type work struct {
	typ     int
	query   string
	command int
	err     error
}

type result struct {
	typ    int
	answer string
	output int
	err    error
}

Or even better, consider that the in and out data types are just data!

type (
	work1 struct {
		query string
	}
	work2 struct {
		command int
	}
	result1 struct {
		answer string
	}
	result2 struct {
		output int
	}
	message struct {
		work1
		work2
		result1
		result2
		typ int
		err error
	}
)

The only caveat is that we should increase the buffer size as the message carries more types. For the example above, we can bump the buffer up to guarantee the same throughput (in the ideal scenario):