Building a Secure API - Part 1

This post is part 1 in the Building a Secure API series.

An API Driven World

There's no doubt about it: the world we live in is becoming more and more connected as each day passes. New services are popping up seeking to make our lives simpler and (supposedly) more enjoyable. Behind all of these changes, these combinations of services, is a technology that's really come into its own: API-driven architectures. These APIs allow services to talk to each other at a programmatic level and provide predictable responses that other services can consume and use in their own useful ways.

If you've been doing any kind of development over the last seven or eight years, chances are you've encountered an API in some form or fashion. Maybe you created an API for a product your company has, or maybe you're a startup and the API is your product. These days the joke is that if you don't have an API, your service has already fallen behind the times.

There's a certain kind of hesitation that comes with API creation, however. Developers that are more comfortable with front end, customer-facing development and features may feel a bit out of their depth when wading out into the world of APIs. While the basics are still there (like HTTP request/response and input handling), there are different concerns for how they're implemented. The presentation layer is completely dropped and the emphasis is put on the data being shared rather than how it looks. You're tasked with making something that puts functionality and usefulness over presentation and user experience.

One quick note here - I'm going to be focusing on REST APIs in this article, not SOAP (or RPC). REST has clearly pulled ahead in the API world and if you're planning on implementing a new API it's definitely the direction of choice.

If a developer is relatively new to the world of APIs the natural inclination is to make something that "just works" - receiving a request for data on a certain endpoint and spitting back the requested matches. While this is the ultimate goal, there's something equally as important when creating an API: making the choices and doing the work to adequately protect it. It's easy to think that, because it's an API and only other "machines" will be talking to it, the security on it can be more lax than a customer-facing application. Obviously, nothing could be further from the truth. There are, however, considerations that need to be made when securing APIs that aren't required in customer-facing tools. Let's start off by looking at some of these concepts and considerations so everyone's on a level field to start.


The Concepts

First off, let's talk about some of the basics around securing APIs. To start off, I'll say one thing here - the way that I'm going to show later of how to secure the API is not the only way to do it. This is just a method I've put together to illustrate some of the basic security concepts for APIs. There's a whole world of options for API protection out there and they should definitely be evaluated before you settle on one. The method I'll show is great for an API that wants to be secure but doesn't want a lot of the overhead that can come with other systems.

The method we'll be using makes use of a "shared key" system where both the API and the client have a secret piece of information (a token in this case) which is used in authentication and sending of messages. I've seen some systems that implement a similar setup using a single static token for each request. This, however, can lead to serious issues if that token were ever found out. An attacker could forge a user's messages and potentially bypass auth mechanisms.

Instead of a single token system, I've opted to go with a multiple token, revocable system that lets the user create the tokens they want, describe their use and - another major benefit - revoke them at any time. If you're a GitHub user you're probably familiar with their token system. It works in a similar way, just adding in a bit more metadata around it.
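To make that concrete, here's a tiny sketch (my own illustration, not this series' code) of the kind of metadata such a revocable token might carry:

```php
<?php
// A minimal sketch of a revocable token record: the secret value, a
// user-supplied label describing its use, and a revoked flag.
class ApiToken
{
    public $value;            // the secret token string
    public $label;            // description of what this token is for
    public $revoked = false;  // once true, lookups treat the token as invalid

    public function __construct(string $value, string $label)
    {
        $this->value = $value;
        $this->label = $label;
    }

    // Revoking just flags the token rather than deleting it, so the
    // user can still see it listed in their administration section
    public function revoke(): void
    {
        $this->revoked = true;
    }
}

$token = new ApiToken('abc123', 'CI deploy script');
$token->revoke();
var_dump($token->revoked); // bool(true)
```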

If you're familiar with OAuth v2, this setup will sound familiar. There's a similar secret token and back-and-forth handoff that happens in an OAuth transaction, but there's a bit more involved there. Also remember, OAuth isn't primarily designed for authentication; its primary purpose is authorization. For example, think about all of the "Log in with Twitter" features you've seen on sites all over the web. When you click that button the application drops you over to the Twitter site and asks for approval. You click the "Allow" button and the flow continues on. That's OAuth handling things behind the scenes. Using that flow you've authorized the application to use your Twitter information to identify you; however, you authenticated to Twitter to be able to sign into your account. The other application has no idea if you're actually who you say you are - they've accepted the risk and offloaded that decision to Twitter.

One word of warning here: when using a token based system like this, be sure that none of the tokens involved end up in the requested URL. URL requests are, by default, logged to the web server logs. If you have tokens - especially long-lived ones - in the URL all an attacker has to do is breach where your logs are stored and have access to all the tokens they'd want.

In the system we'll be creating there'll also be another aspect of the tokens to help improve their security - limiting the time they're valid for. The tokens I'm talking about here are the ones used in the requests following the authentication, not the ones used for the authentication themselves. By restricting the time that these tokens "live" we're able to reduce the possibility of them being intercepted and re-used by an attacker. In our system the timeout will be limited to 1 hour blocks. This is usually more than enough for users to be able to operate successfully with the API and yet still provide enough protection for the requests.
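As a sketch of how that lifetime check might work (the tokenIsValid() helper is my own illustration, not code from this series):

```php
<?php
// Check whether a session token is still inside its validity window.
// $createdAt is the timestamp stored when the token was issued;
// $lifetime defaults to one hour (3600 seconds), as described above.
function tokenIsValid(string $createdAt, int $lifetime = 3600): bool
{
    $created = strtotime($createdAt);
    // Valid while fewer than $lifetime seconds have passed since creation
    return (time() - $created) < $lifetime;
}

// A token created 30 minutes ago is still valid...
var_dump(tokenIsValid(date('Y-m-d H:i:s', time() - 1800))); // bool(true)
// ...but one created two hours ago has expired
var_dump(tokenIsValid(date('Y-m-d H:i:s', time() - 7200))); // bool(false)
```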

When the token expires, the client will be sent a message about failed authentication and they'll just have to request a new token. In the world of OAuth they send a "refresh token" that's specifically used for this kind of token "refetch" request but since we're shooting for a simpler, lighter version here we'll just stick with requiring the new token to be fetched manually.

The Basic Flow

I've hinted at how this API will function but let me take some time to walk through the flow of a normal session including both the authentication piece and the remainder of the request.

1. Token Creation

We'll start the process with the user going into their administration section of the application and generating a new token for use in the authentication process. This token will be a randomized string of letters, numbers and symbols that is used during (and only for) the authentication request.
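Generating that random string is simple with PHP 7's random_bytes() function. Here's a quick sketch - the generateToken() name is my own, and this version produces letters and numbers only (hex characters) rather than symbols:

```php
<?php
// Generate a cryptographically secure random token string.
// random_bytes() pulls from a CSPRNG; bin2hex() turns each byte
// into two hex characters, so we request half the desired length.
function generateToken(int $length = 32): string
{
    return bin2hex(random_bytes(intdiv($length, 2)));
}

echo generateToken(); // e.g. "9f86d081884c7d659a2feaa0c55ad015"
```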

2. Authentication

Once we have the token, the user will make a request to an endpoint on the API and POST their credentials: username and the token for the "password". As mentioned previously, you don't want this request to be a GET as the credential information would show up in server logs...and that's a bad thing just waiting to happen.

A lookup is done on the token and username provided to ensure they match. If everything's good, the response will contain our randomly generated, time-restricted token for the current session. This token is used as an identifier in following requests and to ensure that the message being sent hasn't been tampered with (more on this in a bit). If the user performs another authentication request while this token is still valid, they'll be given a new token each time to further reduce the risk of token interception and reuse by potential attackers.
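Here's a rough sketch of what that endpoint might look like, using the Slim style introduced later in this article. The findUserByUsername(), userOwnsToken() and issueSessionToken() helpers are placeholders for your own lookup logic, not real library calls:

```php
<?php
// Authentication endpoint: the client POSTs their username and auth token,
// and on success receives a fresh, time-limited session token back.
$app->post('/authenticate', function ($request, $response) {
    $body = $request->getParsedBody();

    // Look up the user and verify the token they POSTed belongs to them
    $user = findUserByUsername($body['username'] ?? '');
    if ($user === null || !userOwnsToken($user, $body['token'] ?? '')) {
        return $response->withStatus(401);
    }

    // Success: issue a randomly generated session token valid for one hour
    $session = issueSessionToken($user, 3600);
    return $response->withJson(['token' => $session]);
});
```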

3. Following Requests

With this token in hand, the client can then make requests to the remainder of the endpoints in the API. Our system isn't going to enforce any permissions at this time, only the gateway of authentication. In a real API there would need to be a more complex system put in place to help protect individual endpoints and ensure only the users that should use them can.

To make the requests, the client has to perform a few tasks:

- Send the token resulting from the authentication in a header called X-Token
- Create an HMAC hash of the contents of the message being sent and include that as an X-Token-Hash header

With PHP, this second step is pretty easy using the hash_hmac function:

<?php
$body = json_encode(['foo' => 'bar']);
$messageHash = hash_hmac('SHA512', $body, $hash.time());
?>

In the code above we're creating a SHA512 HMAC hash of the body contents and using the $hash from the authentication request along with the current time in seconds as the key. When the message gets back to the server this hash is recreated based off of the same pieces of information and compared against the value sent in the X-Token-Hash header. If there's a mismatch the system knows that the contents of the message have been altered and can reject it completely.
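On the server side, that comparison might look something like this sketch (the messageIsValid() helper is my own; I've also passed the time in explicitly rather than calling time() inside, to keep the example deterministic). Note the use of hash_equals(), which does a constant-time comparison so the check doesn't leak information through timing:

```php
<?php
// Rebuild the HMAC from the raw request body and the stored session
// token, then compare it against the X-Token-Hash header value.
function messageIsValid(string $body, string $sessionToken, string $headerValue, int $time): bool
{
    $expected = hash_hmac('SHA512', $body, $sessionToken . $time);
    // hash_equals() compares in constant time to avoid timing attacks
    return hash_equals($expected, $headerValue);
}

$body = json_encode(['foo' => 'bar']);
$sessionToken = 'token-from-authentication';
$time = time();

// A hash the client computed from the same body and token passes...
$clientHash = hash_hmac('SHA512', $body, $sessionToken . $time);
var_dump(messageIsValid($body, $sessionToken, $clientHash, $time)); // bool(true)

// ...while a tampered body produces a mismatch and gets rejected
var_dump(messageIsValid('{"foo":"baz"}', $sessionToken, $clientHash, $time)); // bool(false)
```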

The astute readers might have noticed a small issue with this request handling - the use of the current time in seconds. I've worked with APIs in the past where this became an issue simply because their users were spread out widely across the world and sometimes the clocks on their systems just weren't synchronized correctly. As a result the requests were all failing with "Bad Request" messages no matter what was sent. To help resolve this situation, you can modify the time value being used to be a little bit "longer" by using the time down to the minute instead of seconds. The point of including the time is to have something that provides randomness to generation of the HMAC hash so it's not the same with every request. If taking it down to the second is causing too many issues, use something like this for the minute:

<?php
$body = json_encode(['foo' => 'bar']);
$time = date('ymdHi');
$messageHash = hash_hmac('SHA512', $body, $hash.$time);
?>

You're basically only adding up to 59 seconds of lifetime to the request. This is a little less secure, but if it means that more user requests are making it through, that's a fair tradeoff. Just remember to mark it down as an accepted risk on your side so it's not forgotten.

The Tools

While the concepts presented here could apply in just about any language and whatever framework you happen to choose, I have to start with something. Since I'm going to focus on PHP examples in this series I decided on one of the simpler PHP frameworks that I know of, the Slim Framework. This framework, technically a "microframework", aims to provide the least amount of functionality the developer needs for making web applications. At its base level it's really just a simple front controller with a router attached and request/response handling. There's not much else that comes with it. There are a few niceties that are bundled in, but if you're looking for a framework that you can just drop in and have everything there for you, Slim's not it.

However, this minimalistic approach makes it perfect for our examples especially when APIs are even more about just the request and response cycle than a web application with a frontend. Slim provides everything we'll need to set up some basic routes and link them to the functional pieces of our API.

Just to give you an idea of how simple we're talking, once it's installed via Composer all it takes to make an application that responds to an index page request is:

require_once 'vendor/autoload.php';

$app = new \Slim\App();
$app->get('/', function() {
    echo 'Hello world!';
});
$app->run();

That's all...and within this simplified structure we'll build out our API and integrate a few other pieces to help secure it and the data it protects. Speaking of other pieces, let's look at the next one that will provide us with some reusable logic in our request/response cycle: middleware.

If you're not familiar with the concept of middleware, it's a pretty easy one to get a handle on. I'm more of a visual learner so I've always found this image (borrowed from the Slim framework site) helpful:

As the diagram shows, the basic idea of middleware is as a sort of "wrapper" around the main part of your application. It is designed to provide additional functionality that's centered around the request and response handling specifically. Sure, it can do other things too but most middleware excels at working with the flow of data across the HTTP request. The request comes into the application, passing through the layers of middleware and, once the internal processing is complete, it passes back out those same middleware layers in the reverse order.

This middleware layer is where we'll be doing some of the authorization logic in our sample application. Since with an API access levels need to be checked on every request, it just makes sense to wrap it in a middleware and check the incoming request for the data we need. This approach also allows us to kick the client back out if there's an authorization issue long before it gets to the controller and the logic living inside.
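Here's a rough sketch of what that might look like as Slim 3 middleware. The lookupSessionToken() function is a placeholder for whatever token storage the application uses, not a real library call:

```php
<?php
// Auth middleware: runs on every request before any route logic.
// If the X-Token header is missing or doesn't match a known session
// token, the client is kicked back out with a 401 immediately.
$app->add(function ($request, $response, $next) {
    $token = $request->getHeaderLine('X-Token');

    if ($token === '' || lookupSessionToken($token) === null) {
        return $response->withStatus(401)
            ->withJson(['error' => 'Unauthorized']);
    }

    // Token checks out - pass the request down to the next layer
    return $next($request, $response);
});
```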

Next up are two packages that we'll be using to work with the database in our API examples: the Laravel Eloquent database layer and the Phinx migration tool. If you're a PHP developer these days chances are you've heard of the Laravel framework. This framework has seen a huge rise in popularity over the last several years and has gained quite a following due to its ease of use and "simple" feel. While the framework itself comes with a lot of features - more than we need for these examples - the Eloquent package is just for working with databases.

Fortunately this package can be used outside of the main Laravel framework thanks to a "capsule". With this we can pull Eloquent and its functionality into our Slim-based application and use it just as you would in a Laravel application. You can check out the Eloquent manual for more information but here's an example of what using it will look like:

$links = Link::all();
$users = User::where(['active' => 1])->get();
$userLinks = User::find(1)->links;

It allows for not only direct fetching of records but also searching database information and creating relations between the models, making it easier to cross-reference data in your PHP without a lot of messing around with SQL.
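For reference, here's roughly what those model classes might look like once the capsule is booted. The table names and the relation are my assumptions for illustration, not code from this series:

```php
<?php
use Illuminate\Database\Eloquent\Model;

// Eloquent maps each model class to a database table; the relation
// method is what makes a call like User::find(1)->links work.
class User extends Model
{
    protected $table = 'users';

    // One user has many links
    public function links()
    {
        return $this->hasMany(Link::class);
    }
}

class Link extends Model
{
    protected $table = 'links';
}
```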

The Phinx tool provides us with the ability to make reusable database migrations. A migration is basically an automated way to execute SQL commands. The real benefit of migrations is pretty apparent when you think about some of the common database issues developers face: consistency in initial database setup and keeping it synced across the entire team. Phinx is a PHP-based tool that lets you create migrations, also written in PHP, that define the "up" and "down" logic of the migration: up for when things are changing or getting added, and down for when things are removed. The Phinx tool will also keep track of which migrations have been run and allows for rollbacks if a change has unintended consequences.

Here's an example of what a Phinx migration might look like to create a table:

<?php
use Phinx\Migration\AbstractMigration;

class CreateSources extends AbstractMigration
{
    public function change()
    {
        $table = $this->table('sources');
        $table->addColumn('name', 'string')
            ->addColumn('type', 'string')
            ->addColumn('user_id', 'string')
            ->addColumn('source', 'string')
            ->addColumn('last_update', 'datetime', ['default' => 'CURRENT_TIMESTAMP'])
            ->addColumn('created_at', 'datetime')
            ->addColumn('updated_at', 'datetime')
            ->create();
    }
}

In this case we're creating a sources table that contains columns for user references, a type, name and source values. You'll notice that there's not a specific up or down method in this example. More recent versions of Phinx implement the change method with some magic behind it. When you write a change method, Phinx will do its best to figure out, based on which action is being performed (apply or rollback), what to do with the migration. In the case of this example it's relatively simple: on the up the table is created and on the down the table is removed.

There are also a few other random pieces of functionality that'll be included along the way, like random number generation functions and custom exception handling, but don't worry - those will all be covered in good time.

Next Up

That's the end of this first part of the series. I've given you an overview of the current state of the API ecosystem, outlined the basic flow of the application and listed out some of the tools we'll be using in the series to make the magic happen. In the next part of the series we'll spend a little bit of time getting things set up and talking through the planning of some of our basic API features.

Resources