Before we get started, if you want to follow along first install Up:

$ curl -sf https://up.apex.sh/install | sh

Then create a new directory with the following main.go file:

Edge-optimized APIs

First let’s take a look at how the “edge-optimized” API Gateway endpoint type works, as it’s the default in Up.

Edge-optimized endpoints leverage the CloudFront CDN network—when a client sends a request to your domain, it is routed to the nearest CloudFront PoP location, which in turn forwards it over the AWS network to your origin.

Utilizing a CDN in this manner decreases latency compared to sending traffic directly to a single origin. If we run the endpoint through the Apex Latency tool, we can see that physics cannot be beaten here: the request and response still have to travel overseas to the origin in Oregon, so latency still favors the west in this case.

If you have a single origin database & server this can be a good solution, but if you can make your data globally available there’s a better way to go. Let’s look at that next.

Regional APIs

The second type of endpoint supported by API Gateway is the “regional” endpoint. These are effectively “bare” endpoints which bypass CloudFront. Using CloudFront can actually increase latency when your customers are close to the given region, as it introduces an additional hop and TCP/SSL handshake.

To use regional endpoints in Up all you need to do is change lambda.endpoint to “regional”. Note that this is an Up Pro feature.
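In your project’s up.json that looks something like this (the name field here is illustrative):

```json
{
  "name": "up-example",
  "lambda": {
    "endpoint": "regional"
  }
}
```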

First let’s deploy the application to us-west-2 Oregon:

$ up env set --region us-west-2 MESSAGE="Hello from us-west-2"

$ up --region us-west-2

When curling the app from my place in London, I get a response from us-west-2, because that’s the only deployment we have.



$ curl https://up-example.com/
Hello from us-west-2

As shown by the latency check for this setup, we only have a reasonable response latency in the west.

Let’s deploy an additional endpoint to the eu-west-2 region in London:

$ up env set --region eu-west-2 MESSAGE="Hello from eu-west-2"

$ up --region eu-west-2

Up utilizes a Route 53 Latency routing policy to select the quickest endpoint for your customer. If we check the response again we now get back eu-west-2 instead of us-west-2 from here in London, so it’s working correctly!



$ curl https://up-example.com/
Hello from eu-west-2

Checking with the latency tool again you can see that regions close to London are now much more responsive:

Depending on your situation you may want to deploy to all of the regions, but without globally available data it won’t be effective. Let’s look into that next!

Global Databases

So we have respectable API latencies now, but what about when our database is introduced?

If you have a single database located in one region, you’re going to have to send requests overseas to that location, effectively replicating the “edge” endpoint behavior. To properly utilize regional endpoints you should have a replicated or globally available database, such as DynamoDB Global Tables.

This is where Up’s environment variables come in handy: they’re encrypted, so you can use them to store region-specific settings such as database connection strings. Here we’re going to tweak DB to point at the us-west-2 and eu-west-2 RDS instances:

# Set env vars

$ up env set --region us-west-2 DB='postgresql://...us-west-2.rds.amazonaws.com'

$ up env set --region eu-west-2 DB='postgresql://...eu-west-2.rds.amazonaws.com'

# Deploy new env vars

$ up production --region us-west-2

$ up production --region eu-west-2

After redeploying these changes, your European customers will connect to the RDS instance in London, while customers in the west will connect to Oregon.