Dealing with memory in Go is relatively easy compared to C or C++, since there's a built-in garbage collector. But if you allocate and deallocate memory thousands of times per second, as one of our API services at Viki does, is there a large overhead?

For example, our server returns JSON in response to client requests. Normally, the straightforward approach would be to create a new JSON string for every request and let Go handle the memory allocation.

So our Platform Engineering team created a library called bytepool to tackle this in a more memory-friendly way. BytePool manages a thread-safe pool of `[]byte`. By using a pool of pre-allocated arrays, you reduce the number of allocations (and deallocations) as well as memory fragmentation.

If the pool is empty, new items are created on the fly, but the pool itself will not grow. Furthermore, the returned items are fixed-length `[]byte` slices: they will not grow as needed. The idea is for you to favor over-allocating upfront.
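The semantics described above can be sketched with a buffered channel. Note this is an illustrative sketch of the idea, not bytepool's actual implementation; the `pool`, `checkout`, and `release` names here are made up for the example:

```go
package main

import "fmt"

// pool is an illustrative fixed-capacity pool of []byte, built on a
// buffered channel. A sketch of the idea behind bytepool, not its code.
type pool struct {
	items chan []byte
	size  int
}

func newPool(count, size int) *pool {
	p := &pool{items: make(chan []byte, count), size: size}
	for i := 0; i < count; i++ {
		p.items <- make([]byte, size)
	}
	return p
}

// checkout returns a pooled buffer, or allocates a fresh one on the fly
// when the pool is empty; the pool itself never grows.
func (p *pool) checkout() []byte {
	select {
	case b := <-p.items:
		return b
	default:
		return make([]byte, p.size)
	}
}

// release returns a buffer to the pool, dropping it if the pool is full.
func (p *pool) release(b []byte) {
	select {
	case p.items <- b:
	default: // pool already full: let the GC reclaim b
	}
}

func main() {
	p := newPool(2, 4)
	a, b, c := p.checkout(), p.checkout(), p.checkout() // third buffer is allocated fresh
	fmt.Println(len(a), len(b), len(c))
	p.release(a)
	p.release(b)
	p.release(c) // pool is full again, so c is simply discarded
	fmt.Println(len(p.items))
}
```

Under load spikes this degrades gracefully: extra checkouts fall back to plain allocation, and the steady-state working set stays bounded by the pool's capacity.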

Check it out on GitHub. It's currently used in production.

Example

A common example is reading the body of an HTTP request. The memory-unfriendly approach is:

```go
body, _ := ioutil.ReadAll(req.Body)
```

A slightly better approach is to pre-size the array using the request's Content-Length:

```go
body := make([]byte, req.ContentLength)
io.ReadFull(req.Body, body)
```

While the second example avoids over-allocation as well as reallocation from a dynamically growing buffer, it still creates a new array on every request (an array which will then need to be garbage collected).
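The per-call garbage can be observed directly with `testing.AllocsPerRun`; this small sketch uses a 32K buffer as a stand-in for a request body:

```go
package main

import (
	"fmt"
	"testing"
)

// sink keeps the compiler from optimizing the allocation away.
var sink []byte

func main() {
	// testing.AllocsPerRun reports the average number of heap
	// allocations per call, making the cost of the
	// make-per-request pattern visible.
	allocs := testing.AllocsPerRun(100, func() {
		sink = make([]byte, 32*1024) // one fresh 32K array per "request"
	})
	fmt.Println(allocs >= 1)
}
```

Every run of the function body costs at least one heap allocation, and at a few thousand requests per second that becomes steady pressure on the garbage collector.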

This allocation can be avoided by using a pool of `[]byte`:

```go
// pre-allocates 256MB (8K arrays of 32K bytes each)
var pool = bytepool.New(8196, 32768)

func handler(res http.ResponseWriter, req *http.Request) {
	buffer := pool.Checkout()
	defer buffer.Close()
	buffer.ReadFrom(req.Body)
	body := buffer.Bytes()
	...
}
```

The above creates a pool of 8K `[]byte` arrays, each of which can hold 32K of data. An array is retrieved via the Checkout method and returned to the pool by calling Close.

Benchmarking

Here is a simple benchmark in which we build a JSON array of 20,000 consecutive integers in three different ways: string concatenation, array join, and bytepool (yes, bytepool supports JSON construction).

String concatenation:

```go
func BenchmarkNormalAllocation(b *testing.B) {
	for j := 0; j < b.N; j++ {
		str := `[`
		for i := 0; i < 20000; i++ {
			str += strconv.Itoa(i) + `,`
		}
		str += `]`
		fmt.Sprintf(str)
	}
}
```

Array join:

```go
// string array's join
func BenchmarkArrayJoin(b *testing.B) {
	var Count = 20000
	for j := 0; j < b.N; j++ {
		list := make([]string, Count, Count)
		for i := 0; i < Count; i++ {
			list[i] = strconv.Itoa(i)
		}
		str := `[` + strings.Join(list, `,`) + `]`
		fmt.Sprintf(str)
	}
}
```

bytepool:

```go
// bytepool
func BenchmarkBytepoolAllocation(b *testing.B) {
	for j := 0; j < b.N; j++ {
		buffer := pool.Checkout()
		buffer.BeginArray()
		for i := 0; i < 20000; i++ {
			buffer.WriteInt(i)
		}
		buffer.EndArray()
		fmt.Sprintf(buffer.String())
		buffer.Close() // return the buffer to the pool
	}
}
```

And the results:

String:   114,090,199 ns/op
Array:      3,114,562 ns/op
Bytepool:   2,314,697 ns/op

The string concatenation method is far too slow, as expected. Comparing the array-join and bytepool approaches, bytepool performs about 34% better.
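For reference, the ~34% figure follows directly from the two ns/op numbers above:

```go
package main

import "fmt"

func main() {
	// ns/op figures from the benchmark results above
	arrayJoin := 3114562.0
	bytepool := 2314697.0

	// How much longer the array-join approach takes, relative to bytepool.
	fmt.Printf("%.1f%%\n", (arrayJoin-bytepool)/bytepool*100)
}
```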

Check out bytepool on GitHub.