Using the HMAC algorithm with a pre-shared secret between a client and a server is an excellent approach for an HTTP server to verify the authenticity of a request made by a client. The various pros and cons of this are well discussed here. I came to the conclusion that the Amazon S3 approach of using the Authorization HTTP header is the least-problematic location for the signature in an HTTP request.

This article is based on the lessons I learned while implementing schnellburger.

The Goal

My goal is to add HMAC-based authentication to requests made against a server developed using net/http. The specific requirements are outlined below.

1. The server-side code should extract the signature from the HTTP Authorization header.
2. The server should verify the signature against a pre-shared key.
3. The business logic in the method handling the request should have little to no involvement in the HMAC verification.
4. The impact on the performance of the HTTP server should be minimized.

The first point is easy: the existing net/http package in Go allows you to extract HTTP headers. The second point is also easy: for my tests I use a static pre-shared key that is used by both the client and the server to sign and verify requests. In a real example you would generally have separate keys for each unique client stored somewhere in a database.

The third point can be solved with the concept of middleware. While the net/http package does not explicitly support middleware, it is easy enough to implement your own solution.

As for the fourth point, it depends on your definition of the phrase "minimized". My final goal is to implement a zero-copy solution.

For an HTTP GET request this is relatively easy: all the information uniquely identifying the request is in the HTTP headers, and there is no body to verify. Verifying an HTTP POST request is significantly more complex, because the middleware was never intended to read the request. But to verify the signature of an HTTP POST request, the entire body must be read. Reading the request from the middleware is easy. However, if the middleware reads the request, there is no data left to read when the final handler is invoked. Verifying the authenticity of the request is completely useless if the final handler cannot access the data.

Defining Middleware

The first question I asked myself is: "What is middleware?"

If you have a software component A that calls software component B, you always have some contract of behavior between the two components. Even if this contract is not explicitly stated, it still exists. Middleware is any software that can sit between component A and component B without requiring a refactor of either component. In short, middleware speaks the same abstraction on both ends.

image[net_http_contract]

In Go, a behavior contract can be elegantly expressed using an interface declaration. The net/http package expects all HTTP handlers to implement the http.Handler interface. Combining this with my previous definition of middleware, whatever I come up with needs to both implement the http.Handler interface and call another implementation of the http.Handler interface.

This approach is neither novel nor unique, particularly in the realm of web development. The interceptor pattern captures this idea very well; in web development it is usually called the intercepting filter pattern.

The First Approach

The first solution I came up with is to read the entire request body into memory in the middleware. This is incredibly easy using the io and bytes packages in Go. If you never intend to process PUT or POST requests, which are the requests that have a body, this is all moot anyway: there is no body to process, thus there is no body to buffer into memory.

If the body of all your requests is small, then this approach also works. However, there can easily be requests with bodies that are exceptionally large. For example, if I upload a video from my GoPro to YouTube I am uploading several gigabytes of data for just 20 minutes of video.

Handing off responsibility

The reason for the middleware reading the entire body of an HTTP request is that I originally defined my middleware as completely responsible for verifying the authenticity of a request. If you can pass that responsibility along to the next handler, then it is possible to avoid the need to buffer the entire request into memory. With this decision made, I decided the responsibility of my middleware was to set up everything needed to actually verify the authenticity, but stop short of the actual verification.

In order to convey this new responsibility, I defined my own Handler interface that has a third argument. This argument is named verify and is a function that returns true or false. It is the responsibility of each Handler implementation to call verify after reading the body completely and before taking any action based on the contents of the request. If the return value is false, the implementation simply returns immediately. In that case, my middleware takes care of sending the necessary status code and headers to the client to indicate the request has been refused.

The interface definition for my middleware package looks like this.

type Handler interface {
    ServeHTTP(rw http.ResponseWriter, req *http.Request, verify func() bool)
}

The final interaction between an implementation of Handler , the middleware and the existing http.Server is depicted here.

Enforcing this responsibility

I am an extremely paranoid programmer. If an implementation of Handler does not follow the contract above, it might go unnoticed. It could also mean that an application is actually insecure while appearing to be implemented in a secure manner.

To guard against this, I created an implementation of http.ResponseWriter. This interface type is the first parameter to the ServeHTTP function of the http.Handler interface, and it is the only way an implementation can communicate data back to the client. Whenever a function on this interface is called, the implementation checks to see if the verify function has been called. If it has, it delegates the call to the actual http.ResponseWriter provided by the net/http package. If it has not, it logs a message and responds with a bogus object.

The great thing about this is that it is completely transparent. It doesn't require expanding the definition of http.ResponseWriter at all.

This isn't completely fool-proof, however. An implementation could still take some action, such as updating a database, without ever calling verify. As a result, my middleware checks after calling the implementation that verify was called. If it was not, it logs a message.

Adapting to existing implementations

By expanding the responsibility of the interface called by my middleware, I violated the original definition of middleware: it no longer speaks the same abstraction on both ends.

In order to rectify this, I wrote an adapter that does exactly what my implementation requires. It reads the whole request before calling an implementation of http.Handler . This is done in exactly the manner I proposed originally: buffering the whole request into memory. The buffer is then used to replace the Body member of the http.Request structure.

This allows existing implementations of http.Handler to be used unchanged if they can tolerate the performance penalty of buffering the whole request into memory.

Proving authenticity

The actual question of how a client proves a request is authentic still hasn't been resolved.

In order to prove the authenticity of a request, a client needs three pieces of information:

1. The cryptographic hash algorithm to use.
2. A secret key. This is the pre-shared secret between the client and server.
3. The key index. This is an integer that uniquely identifies the key in use by the client.

Once the client has computed the signature of the request, it needs to send both the signature and key index to the server. This is done by base64 encoding both and placing them in the Authorization header.
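Assembling the header value might look like the sketch below. The "Schnellburger" scheme name and the big-endian encoding of the key index are assumptions for illustration, not a fixed format:

```go
package main

import (
	"encoding/base64"
	"encoding/binary"
	"fmt"
)

// buildAuthorization base64-encodes the key index and the signature and
// joins them into the Authorization header value. The scheme name and the
// big-endian index encoding are assumptions for this sketch.
func buildAuthorization(keyIndex uint64, signature []byte) string {
	idx := make([]byte, 8)
	binary.BigEndian.PutUint64(idx, keyIndex)
	return "Schnellburger " +
		base64.StdEncoding.EncodeToString(idx) + " " +
		base64.StdEncoding.EncodeToString(signature)
}

func main() {
	fmt.Println(buildAuthorization(7, []byte{0xde, 0xad, 0xbe, 0xef}))
}
```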

Computing the signature

To compute the signature, the standard HMAC algorithm is used. This is implemented in Go by the crypto/hmac package. There are three inputs to this algorithm: the secret key, the cryptographic hash algorithm, and some data.

The data in this case needs to uniquely identify this request. Two requests for different resources should not share the same signature. In order to make the signature unique, the following data is signed:

1. The request method.
2. The request path. There is no concept of an empty path; the root resource is always at the path "/".
3. The raw query with the leading "?".
4. The request body, if any.

Drawbacks

This definition of the signature has the drawback that requests are replayable. Once a signature has been computed for a specific request, it never changes. This is commonly addressed by using a nonce between the server and client or requiring the client to include the time as part of the signature.

If a third party is able to intercept a request, it can replay that request at any future point in time and appear authentic. I tolerate this drawback because I am always able to use HTTPS. This means that a third party would need to be able to decrypt the TLS connection between the client and server before obtaining the plaintext of a request.

Unsolved problems

There is of course one large problem with this whole scheme: it assumes that the key shared between the client and server is indeed secret. If a third party can obtain the key, it is trivial to forge a request and make it appear as if it came from that client. This problem is basically the question of determining whether a communications channel is secure. I am not seeking to solve that problem.

In most of my use cases, the HTTP requests being secured are machine-to-machine. So, the secret key is stored in a configuration file on both machines. This means that someone would need to have privileged access to the machines or be able to intercept and decrypt the SSH connection I use to configure them. If this happens, I have much larger problems than forged HTTP requests.

In other cases, you may have some service that a client registers with. If the secret key exchange is done at registration time over HTTPS this is reasonably secure. The actual key never needs to be shared again between the client and server.