Selecting a new programming language is an important decision for any organization.

In this article, I’m going to walk through some of the reasons American Express elected to use Go within our payment and rewards systems.

Background

Our journey with Go started in late 2016, when certain legacy platforms were due for a refresh.

These platforms were purpose-built for performance, concurrency, resiliency, and availability. Given advancements in the programming space, we wanted to see which languages best suited our reliability and performance needs. We also needed something that fit modern infrastructure and design patterns.

As such, we launched an effort to evaluate different languages.

Language Showdown

We called our evaluation the “Language Showdown.” To begin, we wrote the same application in multiple languages and then benchmarked them one by one. Since we were measuring performance, we optimized each version for the language used while ensuring the request-processing logic was consistent across versions. The way one writes code is as important as the language used; inefficient code can have a large effect on performance.

The application requirements were simple, but they covered our use cases well.

Write an HTTP/S service that converts ISO8583 messages to JSON.

The binary message format, ISO8583, is commonly used in credit card processing and other parts of the financial services industry.

By focusing our showdown on converting ISO8583 to JSON, we ensured that the winner had support for both older and newer message formats.

Choosing what to evaluate

With so many programming languages available, we knew it would be difficult to test every possible option; instead, we elected to test four, provided that each language:

was used in high performance platforms;

had good support for network programming;

was well suited for creating backend REST/gRPC APIs; and

offered an open source toolchain, libraries, and a large community.

Based on these criteria, we narrowed our testing languages to C++17, Java, Node.js, and Go. The first three languages were already in use at American Express, whereas Go was not.

Showdown Results

With performance as our primary evaluation criterion, we found some interesting results. Go achieved the second-best performance at 140,000 requests per second. This number wasn’t far off from the top performer, which clocked in at 167,000 requests per second.

From a performance perspective, we saw that Go lived up to its promise.

Even more compelling was that Go achieved this performance without any warm-up time. Warm-up time is a common challenge we see in languages that use Just-In-Time compilation. Being a compiled language, Go was ready to handle the brunt of our tests the second the HTTP listener started.

In our showdown, we included languages that have garbage collection and those that don’t. Even though Go does have garbage collection, the pause times were negligible. In production today, we see pause times in the range of 250µs to 1ms, all without any special garbage collection tuning (Go deliberately exposes very few tuning knobs).

What we like about Go

While performance is important to our use case, it wasn’t the only criterion we assessed. Here’s what else stood out about Go.

Simple and Straightforward

For the most part, Go is a straightforward language to learn. Those with basic programming experience can often pick it up quickly. This was an important factor. Since we have many engineers with experience in other languages, we sought a language that lent itself to approachable learning and teaching.

While that doesn’t take away the anxiety of learning a new language, once our engineers started practicing and learning Go, those nerves settled quickly. Expertise takes time to build, but our engineers were able to contribute to existing Go projects within one to two months.

The below example shows a very simple Hello World program in Go.

package main

import (
    "fmt"
    "math/rand"
)

func greeting() string {
    g := []string{"Hello", "Hi", "Howdy"}
    return g[rand.Intn(len(g))]
}

func main() {
    fmt.Printf("%s, World", greeting())
}

Overall, Go should look familiar to anyone who has experience with C-family languages.

Encourages Best Practices

Another aspect of Go that we like is that it encourages programming best practices. This can be seen in many ways, but one clear example is its handling of unused imports and variables.

Most languages will let you import a library or define a variable and never use them. In Go, the compiler refuses to build a program containing either. This keeps programs lean by eliminating unnecessary dependencies and dead code.
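The escape hatch Go provides is the blank identifier. The sketch below (with invented values, purely for illustration) compiles only because the otherwise-unused import and variable are explicitly discarded with `_`; remove either underscore and the compiler rejects the program:

```go
package main

import (
	"fmt"
	// A blank import keeps a package for illustration (or side effects);
	// without the underscore, an unused import is a compile error.
	_ "strings"
)

func main() {
	x := 42
	_ = x // assigning to the blank identifier marks x as used

	fmt.Println("compiles: unused names are explicitly discarded")
}
```

Because discarding is explicit, a reviewer can always see that an unused name was intentional rather than an oversight.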

Error handling

Error handling in Go is a bit controversial within the programming community. Yet it is another way that Go encourages best practices.

Go supports multiple return values on functions. If we write idiomatic Go, then the standard way to signal a problem is to return a value of the built-in error type. This leads to every function call looking like the below example.

x, err := something()
if err != nil {
    // do stuff
}

y, err := somethingElse()
if err != nil {
    // do stuff
}

You can pretty much look at any Go program and find the same err != nil checks throughout. Some would say that this violates the DRY (Don’t Repeat Yourself) principle. They wouldn’t be wrong.

In my opinion, though this goes against the DRY principle, it encourages a better practice. This style of error handling encourages engineers to act immediately. It’s also very simple to see when engineers are not handling errors.

Code like this is a big red flag.

v, _ := something()

The above example throws away the error returned from the something() function. This is very easy to spot during code reviews. But even if one sneaks by, Go provides tooling (such as the community’s errcheck linter) to catch unhandled errors. These tools can easily be integrated into a continuous integration pipeline.

Go makes it difficult to have unhandled errors.

Godocs

Commenting code is a best practice in any language. Go encourages this by using code comments as the source for package documentation.

Let’s take a look at an example.

// Greeting will return a random greeting from a pre-defined list of greetings.
func Greeting() string {
    g := []string{"Hello", "Hi", "Howdy"}
    return g[rand.Intn(len(g))]
}

When the godoc command is run against the code above, the Greeting() function documentation will read as follows.

Greeting will return a random greeting from a pre-defined list of greetings.

While generating package documentation from code comments is not unique to Go, the Go community emphasizes and helps enforce this practice.

Any open source Go package that seeks popularity must have quality Go Docs.

Concurrency in its DNA

For most languages, the standard answer to processing concurrent requests is threads. Applications will create operating system (OS) threads and distribute work across those threads. These OS threads, however, can be quite resource intensive, often consuming 1 MB or more of stack space per thread.

As such, applications have to control how many threads they create. This is typically done by creating a thread pool and managing the size and longevity of the pool.

Go’s approach is a bit different. In Go, work is distributed across Goroutines. These Goroutines are not OS threads and come at a much lower resource cost, starting at roughly 2 KB of stack space per Goroutine (about 0.2% of the memory an OS thread uses). They are managed by the Go runtime, which allocates OS threads as needed and distributes Goroutines across those threads as required.

This makes distributing concurrent work easier by reducing the need to create pools.

But Go doesn’t just make it easy to distribute concurrent work, Go also makes creating Goroutines straightforward. It’s as easy as adding the term go in front of a function.

go example()

Another core aspect of Go’s concurrency approach is channels. Channels act like internal queues for a Go program, providing a thread-safe way of passing information across Goroutines. The below example shows how to create a channel (c), read from it in a Goroutine, and write to it from the main routine.

// Creating a channel
c := make(chan string)

// Kick off an anonymous Goroutine
go func() {
    // Read from the channel
    msg := <-c
    // do stuff with msg
    _ = msg
}()

// Writing to the channel
c <- "This is a channel"

When building high volume, low latency platforms, concurrency is very important. With Goroutines and channels, it takes a lot less work to build for concurrency.
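To make that concrete, here is a small runnable sketch (illustrative names, not production code) combining the two ideas: a handful of Goroutines consume jobs from one channel and send results back on another, with no thread pool to size or manage:

```go
package main

import (
	"fmt"
	"sync"
)

// sum squares each job concurrently across the given number of
// Goroutines and returns the total of the squares.
func sum(jobs []int, workers int) int {
	in := make(chan int)
	out := make(chan int)
	var wg sync.WaitGroup

	// Workers: read jobs until the in channel is closed.
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range in {
				out <- n * n
			}
		}()
	}

	// Close out once every worker has finished.
	go func() {
		wg.Wait()
		close(out)
	}()

	// Feed the jobs, then signal there are no more.
	go func() {
		for _, n := range jobs {
			in <- n
		}
		close(in)
	}()

	total := 0
	for sq := range out {
		total += sq
	}
	return total
}

func main() {
	fmt.Println(sum([]int{1, 2, 3, 4}, 3)) // 1 + 4 + 9 + 16 = 30
}
```

Closing the channels is what lets every Goroutine exit cleanly; the WaitGroup only exists to know when the results channel can be closed.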

Great Tooling

Tooling is an area where we feel Go especially excels. Go comes with an assortment of tooling that makes developing in Go easier. This can be seen in its powerful built-in testing and benchmarking framework, and in the simple, but all-important, gofmt command, which auto-formats code so all Go code looks the same. Even its profiling tool, pprof, lets you really dig into performance hot spots.

Build times are one specific area where we have seen significant productivity gains. Our Go projects tend to build in half the time of comparable projects in other languages, or better. Though it may be only a few minutes of savings per build, the benefit scales quickly because engineers run many builds per day.

Dependency Management needs some work

While there are many good things about Go, it has some areas that are less refined. One example is dependency management. A lot of work is going into how Go manages dependencies, most recently with Go modules, but the Go community is still adopting the new tools and adapting to the changes they bring.

I’m confident this will settle over time.

Convincing Leadership

In summary, there is a lot to like about Go for our use case of backend and network-based services. It’s fast, built for concurrency, easy to learn, and encourages best practices.

When all the showdown results were in, we chose Go as the recommended language. However, this by itself wasn’t enough to start writing in Go. As with many engineering decisions, Go started as a grassroots effort.

Before we could start re-writing critical platforms in Go, we had to first convince our leadership. We did this in a couple of ways.

One was via an internal position paper. It went in-depth into the syntax and benefits of Go and discussed how we could apply Go to specific systems. This paper helped add context around where Go could be used and where it shouldn’t be used.

Beyond the paper and the showdown proof of concept, we also picked several pilot applications. These were not test applications but real-world services that had to run in production. Being able to show that Go reduced our memory overhead by roughly 90% spoke volumes.

Thanks to the combination of the paper, proof of concept, pilot applications, and some long conversations, we were able to make a convincing case about the enterprise readiness of Go.

To help other enterprises determine if Go is right for them we’ve published a Go Case Study.