
Memory management is a way to control and organize memory. The basic goal of memory management algorithms is to dynamically allocate segments of memory to programs on demand, and to free that memory for reuse when the objects it holds will never be required again. Garbage collection, cache management, and space allocation algorithms are good examples of memory management techniques.

In this article, we will cover the garbage collection algorithm in Golang. We’ll look at garbage collection first, then look at the different algorithms related to garbage collection.

This article is taken from the book Learn Data Structures and Algorithms with Golang by Bhagvan Kommadi. In this book, you will explore Golang’s data structures and algorithms to design, implement, and analyze code in the professional setting.

Technical requirements

Install Go version 1.10 from the Go website, choosing the right version for your OS. The GitHub repository for the code in this article can be found here.

Garbage collection

Garbage collection is a form of automatic memory management in which memory currently occupied by objects that will never be used again is reclaimed. John McCarthy was the first person to come up with garbage collection, for managing the memory of Lisp. The technique identifies which objects need to be de-allocated and then releases their memory. Related strategies include stack allocation and region inference. Sockets, relational database handles, user window objects, and file resources are not overseen by garbage collectors.

Garbage collection algorithms help reduce dangling pointer defects, double-free defects, and memory leaks. These algorithms are compute-intensive and can cause decreased or uneven performance. According to Apple, one of the reasons for iOS not having garbage collection is that garbage collection needs five times the memory to match explicit memory management. In high-transactional systems, concurrent, incremental, and real-time garbage collectors help manage memory collection and release.

Garbage collection algorithms depend on various factors:

GC throughput

Heap overhead

Pause times

Pause frequency

Pause distribution

Allocation performance

Compaction

Concurrency

Scaling

Tuning

Warm-up time

Page release

Portability

Compatibility

The simple, deferred, one-bit, and weighted reference counting algorithms, along with the mark-and-sweep and generational collection algorithms, are discussed in the following sections.

The ReferenceCounter class

The following code snippet shows how references to created objects are maintained in the stack. The ReferenceCounter class has the number of references (num), a pool of references (pool), and the number of removed references (removed) as properties:

```go
// main package has examples shown
// in Hands-On Data Structures and Algorithms with Go book
package main

// importing fmt and sync packages
import (
    "fmt"
    "sync"
)

// ReferenceCounter type
type ReferenceCounter struct {
    num     *uint32
    pool    *sync.Pool
    removed *uint32
}
```

Let’s take a look at the method of the ReferenceCounter class.

The newReferenceCounter method

The newReferenceCounter method initializes a ReferenceCounter instance and returns a pointer to ReferenceCounter. This is shown in the following code:

```go
// newReferenceCounter method
func newReferenceCounter() *ReferenceCounter {
    return &ReferenceCounter{
        num:     new(uint32),
        pool:    &sync.Pool{},
        removed: new(uint32),
    }
}
```

The Stack class is described in the next section.

The Stack class

The Stack class consists of a references array and Count as properties. This is shown in the following code:

```go
// Stack class
type Stack struct {
    references []*ReferenceCounter
    Count      int
}
```

Let’s take a look at the methods of the Stack class.

The Stack class – a new method

Now, let's look at the methods implemented by the Stack class. The New method initializes the references array, while the Push and Pop methods take a reference parameter to push a reference onto, or pop one off, the stack. This is shown in the following code:

```go
// New method of Stack class
func (stack *Stack) New() {
    stack.references = make([]*ReferenceCounter, 0)
}

// Push method
func (stack *Stack) Push(reference *ReferenceCounter) {
    stack.references = append(stack.references[:stack.Count], reference)
    stack.Count = stack.Count + 1
}

// Pop method
func (stack *Stack) Pop() *ReferenceCounter {
    if stack.Count == 0 {
        return nil
    }
    var length int = len(stack.references)
    var reference *ReferenceCounter = stack.references[length-1]
    // truncate the slice so the popped reference is released;
    // this also works when only one element remains
    stack.references = stack.references[:length-1]
    stack.Count = len(stack.references)
    return reference
}
```

The main method

In the following code snippet, let’s see how Stack is used. A Stack instance is initialized, and references are added to the stack by invoking the Push method. The Pop method is invoked and the output is printed:

```go
// main method
func main() {
    var stack *Stack = &Stack{}
    stack.New()
    var reference1 *ReferenceCounter = newReferenceCounter()
    var reference2 *ReferenceCounter = newReferenceCounter()
    var reference3 *ReferenceCounter = newReferenceCounter()
    var reference4 *ReferenceCounter = newReferenceCounter()
    stack.Push(reference1)
    stack.Push(reference2)
    stack.Push(reference3)
    stack.Push(reference4)
    fmt.Println(stack.Pop(), stack.Pop(), stack.Pop(), stack.Pop())
}
```

Run the following commands to execute the stack_garbage_collection.go file:

go run stack_garbage_collection.go

The output prints the four popped ReferenceCounter pointers, with the last pushed reference popped first.

The reference counting, mark-and-sweep, and generational collection algorithms will be discussed in the following sections.

Reference counting

Reference counting is a technique that’s used for keeping the count of references, pointers, and handles to resources. Memory blocks, disk space, and objects are good examples of resources. This technique tracks each object as a resource. The metrics that are tracked are the number of references held by different objects. The objects are recovered when they can never be referenced again.

The number of references is used for runtime optimizations. Deutsch and Bobrow came up with a strategy of reference counting in which updates produced by references stored in local variables are ignored. Henry Baker came up with a method in which the counting of references in local variables is deferred until it is needed.

In the following subsections, the simple, deferred, one-bit, and weighted techniques of reference counting will be discussed.

Simple reference counting

Reference counting keeps track of the number of references, pointers, and handles to a resource, such as an object, a block of memory, or disk space. Objects whose reference count drops to zero can never be referenced again, and are de-allocated.

The collection technique tracks, for each object, a tally of the number of references held to it by other objects. The object is removed when the number of references to it reaches zero, at which point it has become inaccessible. The removal of a single reference can prompt a cascade of connected objects to be purged.

The algorithm is time-consuming because of the size of the object graph and slow access speed.
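The cascading release described above can be sketched with a toy object graph. The node type and release helper here are illustrative, not part of the book's code:

```go
package main

import "fmt"

// node is a toy object with a reference count and child references
type node struct {
    name     string
    count    int
    children []*node
}

// release decrements the count and, when it reaches zero,
// recursively releases every child reference
func release(n *node) {
    n.count--
    if n.count == 0 {
        fmt.Println("collecting", n.name)
        for _, child := range n.children {
            release(child)
        }
    }
}

func main() {
    leaf := &node{name: "leaf", count: 1}
    root := &node{name: "root", count: 1, children: []*node{leaf}}
    release(root) // dropping the last reference purges root, then leaf
}
```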

In the following code snippets, we can see a simple reference-counting algorithm being implemented. The ReferenceCounter class has the number of references (num), a pool of references (pool), and the number of removed references (removed) as properties:

```go
// main package has examples shown
// in Go Data Structures and algorithms book
package main

// importing fmt, sync, and sync/atomic packages
import (
    "fmt"
    "sync"
    "sync/atomic"
)

// ReferenceCounter type
type ReferenceCounter struct {
    num     *uint32
    pool    *sync.Pool
    removed *uint32
}
```

The newReferenceCounter , Add , and Subtract methods of the ReferenceCounter class are shown in the following snippet:

```go
// newReferenceCounter method
func newReferenceCounter() ReferenceCounter {
    return ReferenceCounter{
        num:     new(uint32),
        pool:    &sync.Pool{},
        removed: new(uint32),
    }
}

// Add method increments the reference count
func (referenceCounter ReferenceCounter) Add() {
    atomic.AddUint32(referenceCounter.num, 1)
}

// Subtract method decrements the reference count and records
// the removal when the count reaches zero
func (referenceCounter ReferenceCounter) Subtract() {
    if atomic.AddUint32(referenceCounter.num, ^uint32(0)) == 0 {
        atomic.AddUint32(referenceCounter.removed, 1)
    }
}
```

Let's look at the main method and see an example of simple reference counting. The newReferenceCounter method is invoked, and a reference is added by invoking the Add method. The reference count is printed at the end. This is shown in the following code snippet:

```go
// main method
func main() {
    var referenceCounter ReferenceCounter
    referenceCounter = newReferenceCounter()
    referenceCounter.Add()
    fmt.Println(*referenceCounter.num)
}
```

Run the following commands to execute the reference_counting.go file:

go run reference_counting.go

The output is 1, the value of the reference count after a single Add invocation.

The different types of reference counting techniques are described in the following sections.

Deferred reference counting

Deferred reference counting is a procedure in which references from other objects to a given object are counted, while references from program variables are overlooked. If the tally of counted references reaches zero, the object is not reclaimed immediately; reclamation is deferred until a later check. This algorithm helps reduce the overhead of keeping counts up to date. Deferred reference counting is supported by many compilers.
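The deferral idea can be sketched by buffering decrements and applying them in one batch at a collection point. All of the names below (obj, deferDecrement, processDeferred) are ours, for illustration only:

```go
package main

import "fmt"

// obj is a toy object whose count tracks references from other
// objects; references from local variables are ignored
type obj struct {
    name  string
    count int
}

// pending buffers decrements until a collection point
var pending []*obj

// deferDecrement records a decrement without applying it
func deferDecrement(o *obj) {
    pending = append(pending, o)
}

// processDeferred applies the buffered decrements in one batch,
// releasing objects whose count reaches zero
func processDeferred() {
    for _, o := range pending {
        o.count--
        if o.count == 0 {
            fmt.Println("releasing", o.name)
        }
    }
    pending = pending[:0]
}

func main() {
    a := &obj{name: "a", count: 1}
    deferDecrement(a) // no counter work at the call site
    processDeferred() // the count update happens here
}
```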

One-bit reference counting

The one-bit reference counting technique utilizes a solitary bit flag to show whether an object has one reference or more than one. The flag is stored as part of the object pointer, so there is no requirement to reserve extra space in the object itself. This technique is viable since the majority of objects have a reference count of 1.
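Go does not expose pointer tag bits directly, so the following sketch stores the flag in the low bit of an integer handle to illustrate the idea (a hypothetical representation, not how any real collector works):

```go
package main

import "fmt"

// sharedBit is the low bit of an aligned handle, free to use
// as a "has more than one reference" flag
const sharedBit = 1

// markShared sets the flag in the handle itself
func markShared(handle uintptr) uintptr { return handle | sharedBit }

// isShared reports whether the handle carries the shared flag
func isShared(handle uintptr) bool { return handle&sharedBit != 0 }

func main() {
    var handle uintptr = 0x1000 // aligned address, low bit is free
    fmt.Println(isShared(handle)) // prints false: unique reference
    handle = markShared(handle)
    fmt.Println(isShared(handle)) // prints true: shared reference
}
```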

Weighted reference counting

The weighted reference counting technique tallies the number of references to an object, and each reference is assigned a weight. This technique tracks the total weight of the references to an object. Weighted reference counting was invented by Bevan, Watson, and Watson in 1987. The following code snippet shows an implementation of the weighted reference counting technique:

```go
// ReferenceCounter type, extended with a weight
type ReferenceCounter struct {
    num     *uint32
    pool    *sync.Pool
    removed *uint32
    weight  int
}

// WeightedReference method sums the weights of the references
// to the root object (GetReferences and root are assumed to be
// defined elsewhere)
func WeightedReference() int {
    var references []ReferenceCounter
    references = GetReferences(root)
    var reference ReferenceCounter
    var sum int
    for _, reference = range references {
        sum = sum + reference.weight
    }
    return sum
}
```
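A key property of the weighted scheme is that copying a reference splits its weight, so the total weight over all references to an object stays constant and the copy needs no access to the object's own counter. A small sketch of that invariant, using an illustrative ref type and split helper of our own:

```go
package main

import "fmt"

// ref carries a weight; copying a reference halves its weight,
// so the total weight of all references stays constant
type ref struct{ weight int }

// split copies a reference by moving half of its weight
// into the new copy
func split(r *ref) *ref {
    half := r.weight / 2
    r.weight -= half
    return &ref{weight: half}
}

func main() {
    original := &ref{weight: 64}
    copy1 := split(original)
    copy2 := split(copy1)
    // 32 + 16 + 16: the total is conserved
    fmt.Println(original.weight + copy1.weight + copy2.weight) // prints 64
}
```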

The mark-and-sweep algorithm

The mark-and-sweep algorithm is based on the tri-color marking idea proposed by Dijkstra in 1978. In this style of garbage collection, the heap consists of a graph of connected objects, all initially white. The technique visits the objects and checks whether they are directly reachable by the application. Globals and objects on the stack are shaded gray. Every gray object is darkened to black and scanned for pointers to other objects; any white object found during the scan is turned gray. This process is repeated until there are no gray objects. The white objects that remain are inaccessible.

A mutator in this algorithm handles concurrency by changing pointers while the collector is running. It also maintains the invariant that no black object points to a white object. The mark algorithm has the following steps:

1. Mark the root object.
2. Set the root's mark bit to true if the value of the bit is false.
3. For every reference of root, mark the reference, as in the first step.

The following code snippet shows the marking algorithm. Let’s look at the implementation of the Mark method:

```go
// marked records the mark bit for each object
var marked map[*object]bool = make(map[*object]bool)

// Mark method marks root and, recursively, every object
// reachable from it (IfMarked and GetReferences are assumed
// to be defined elsewhere)
func Mark(root *object) {
    var markedAlready bool
    markedAlready = IfMarked(root)
    if !markedAlready {
        marked[root] = true
        var references []*object
        references = GetReferences(root)
        var reference *object
        for _, reference = range references {
            // recurse only from newly marked objects, so cycles
            // in the graph do not loop forever
            Mark(reference)
        }
    }
}
```

The sweep algorithm’s pseudocode is presented here:

For each object in the heap:

1. If the value of the mark bit is true, set the bit back to false; the object survives this cycle.
2. If the value of the bit is false, release the object from the heap.

The sweep algorithm releases the objects that were not marked as reachable during the mark phase, and resets the mark bit on the rest for the next cycle.

Now, let’s look at the implementation of the sweep algorithm:

```go
// Sweep method releases every object whose mark bit was not
// set during the mark phase (GetObjects, IfMarked, and Release
// are assumed to be defined elsewhere)
func Sweep() {
    var objects []*object
    objects = GetObjects()
    var current *object
    for _, current = range objects {
        var markedAlready bool
        markedAlready = IfMarked(current)
        if markedAlready {
            marked[current] = false // survives; reset for the next cycle
        } else {
            Release(current) // unreachable; release from the heap
        }
    }
}
```
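Since IfMarked, GetObjects, and Release are left undefined in the snippets above, here is a self-contained toy that wires a mark phase and a sweep phase together. The types and helper names are ours, for illustration only:

```go
package main

import "fmt"

// heapObject is a toy heap object holding child references
type heapObject struct {
    name     string
    children []*heapObject
}

// markedSet records the mark bit for each object
var markedSet = make(map[*heapObject]bool)

// mark shades root and everything reachable from it
func mark(root *heapObject) {
    if markedSet[root] {
        return // already visited; guards against cycles
    }
    markedSet[root] = true
    for _, child := range root.children {
        mark(child)
    }
}

// sweep returns the surviving objects and releases the rest
func sweep(heap []*heapObject) []*heapObject {
    var live []*heapObject
    for _, obj := range heap {
        if markedSet[obj] {
            markedSet[obj] = false // reset for the next cycle
            live = append(live, obj)
        } else {
            fmt.Println("released", obj.name)
        }
    }
    return live
}

func main() {
    a := &heapObject{name: "a"}
    root := &heapObject{name: "root", children: []*heapObject{a}}
    garbage := &heapObject{name: "garbage"}
    heap := []*heapObject{a, root, garbage}
    mark(root) // a is reachable from root; garbage is not
    heap = sweep(heap)
    fmt.Println(len(heap)) // prints 2
}
```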

The generational collection algorithm

The generational collection algorithm divides the heap of objects into generations. A generation of objects is expired and collected by the algorithm based on its age, and objects that survive a garbage collection cycle are promoted to older generations.

When a generation is collected, all of the younger generations are scavenged with it. Let's say generation 3 is collected; in this case, generations 0 to 2 are also scavenged. The generational collection algorithm is presented in the following code snippet:

```go
// GenerationCollect method marks the reachable objects of the
// current generation (GetObjectsFromOldGeneration and IfMarked
// are assumed to be defined elsewhere)
func GenerationCollect() {
    var currentGeneration int
    currentGeneration = 3
    var objects []*object
    objects = GetObjectsFromOldGeneration(currentGeneration)
    var current *object
    for _, current = range objects {
        var markedAlready bool
        markedAlready = IfMarked(current)
        if markedAlready {
            marked[current] = true
        }
    }
}
```
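The promotion behavior described above can be sketched with a two-generation toy heap. The genObject type, the live flag standing in for reachability, and collectYoung are all illustrative names of our own:

```go
package main

import "fmt"

// genObject is a toy object in a two-generation heap
type genObject struct {
    name string
    live bool // stands in for "reachable" in this sketch
}

// collectYoung collects the young generation: dead objects are
// dropped, and survivors are promoted to the old generation
func collectYoung(young []*genObject, old []*genObject) ([]*genObject, []*genObject) {
    remaining := make([]*genObject, 0)
    for _, obj := range young {
        if obj.live {
            old = append(old, obj) // survivor: promote to old
        }
    }
    return remaining, old
}

func main() {
    young := []*genObject{
        {name: "temp", live: false},
        {name: "cache", live: true},
    }
    var old []*genObject
    young, old = collectYoung(young, old)
    fmt.Println(len(young), len(old)) // prints 0 1
}
```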

This article covered the garbage collection algorithms in Golang. We looked at reference counting algorithms, including simple, deferred, one-bit, and weighted variants. The mark-and-sweep and generational collection algorithms were also presented with code examples. To learn other memory management techniques in Golang, like cache management and memory space allocation, read our book Learn Data Structures and Algorithms with Golang.
