Hey folks,

First of all, we wish you a merry Christmas and a happy new year. We hope you got awesome presents and even better food.

In this post, I’d like to show you how we configured NGINX to act as a reverse proxy with load balancing in front of a high-availability HashiCorp Vault cluster. The main problem we wanted to solve was to prevent direct access to the Vault cluster and to route requests automatically between two Vault instances.

We used an NGINX webserver because it already serves a compiled Angular app that is responsible for logging our employees in to Vault. I will explain some lessons I learned while implementing the solution.

I created a GitHub repository containing all examples. You can find it HERE.

Introduction to Vault

Manage Secrets and Protect Sensitive Data https://www.vaultproject.io/

HashiCorp Vault is used to store secrets centrally and provides a high grade of data protection. In a high-availability cluster, it can scale seamlessly when HashiCorp Consul is used as its backend.

Luckily, HashiCorp has already created a very good tutorial for building a Vault high-availability cluster.

https://learn.hashicorp.com/vault/operations/ops-vault-ha-consul

Our setup is nearly the same but with three Consul instances and two Vault instances.

Vault clusters generally run as a hot-standby setup, which means there is always one leader (or primary) node that is able to answer all requests. All other nodes are either performance standby nodes or simply not able to answer your request. This behaviour depends on your Vault subscription.

NGINX – very easy, isn’t it?

Well, NGINX on its own is straightforward and well documented. Sadly, using NGINX as a reverse proxy for Vault doesn’t seem to be an out-of-the-box solution. HashiCorp has published an example configuration for HAProxy but not for NGINX.

Problems

We came across two problems while trying to use NGINX as a reverse proxy for Vault.

If you’re using the free version of Vault, all non-primary nodes answer with an HTTP 307 Temporary Redirect pointing to the current primary node.

We wanted NGINX to intercept this redirect and fulfil the original request, which means following the redirect on the server side.

[email protected]:[~]: curl -XGET --header "X-Vault-Token: TOKEN" vault1:8200/v1/sys/config/cors -I
HTTP/1.1 307 Temporary Redirect
Cache-Control: no-store
Location: http://vault2:8200/v1/sys/config/cors
Date: Wed, 26 Dec 2018 15:22:52 GMT
Content-Length: 0
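To make the interception concrete, here is a small sketch that extracts the Location header from a captured response like the one above; it is the same value NGINX captures in its $upstream_http_location variable.

```shell
# A 307 response captured from a standby node (same shape as the curl output above);
# extract the Location header that must be followed on the server side.
response='HTTP/1.1 307 Temporary Redirect
Cache-Control: no-store
Location: http://vault2:8200/v1/sys/config/cors'

printf '%s\n' "$response" | awk '/^Location:/ {print $2}'
# prints: http://vault2:8200/v1/sys/config/cors
```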

The second problem was that it’s not possible to configure active health checks with the free version of NGINX.

This means we need to react to several errors received from Vault.
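These errors are well defined: Vault’s /v1/sys/health endpoint signals a node’s state through its HTTP status code. As a quick reference, here is a small sketch of the default mapping taken from Vault’s documentation (the status codes 429 and 503 are the ones the proxy has to react to):

```shell
# Map Vault's default /v1/sys/health status codes to node states
# (default codes as documented by Vault)
vault_state() {
  case "$1" in
    200) echo "initialized, unsealed and active" ;;
    429) echo "unsealed but standby" ;;
    501) echo "not initialized" ;;
    503) echo "sealed" ;;
    *)   echo "unknown" ;;
  esac
}

vault_state 429
# prints: unsealed but standby
```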

Intercept received HTTP Redirects

To intercept a received HTTP redirect we need to define a named location, which is normally used for internal request handling. For example, you can define a named location for error handling when the error document is located on a different server that is reverse proxied by NGINX.

I created a very simple example configuration to explain named locations. You can also take a look at the example repository.

server {
    location / {
        ## This location always responds with HTTP 404 for demo purposes
        proxy_pass http://localhost/client_error;

        ## proxy_intercept_errors enables error handling within NGINX when proxy_pass serves the request
        proxy_intercept_errors on;

        ## Catch HTTP 404 errors and answer them with the named location @handling_client_error
        error_page 404 = @handling_client_error;
    }

    location /client_error {
        ## This location always responds with HTTP 404
        return 404;
    }

    location @handling_client_error {
        ## Set Content-Type to text/plain; the default application/octet-stream would result in a download
        add_header Content-Type text/plain;

        ## Simply return HTTP 200 with text
        return 200 "Handled Client Error";
    }
}

You can start the NGINX server with Docker to test named locations. Save the configuration above in a file called 'nginx_named_locations.conf' and run:

[email protected]:[~]: docker run -it -p 80:80 -v $(pwd)/nginx_named_locations.conf:/etc/nginx/conf.d/default.conf nginx

Once the container is running, open http://localhost/ in your browser and you should see the response "Handled Client Error".

http://localhost/



You can also open the client error page directly. As expected, you receive an HTTP 404 response.

http://localhost/client_error

Defining Options to Use the Next Upstream

Due to NGINX’s limitation regarding health checks, we need to define several options that instruct NGINX to send the request to a different upstream.

I created a simple configuration to show you how to use upstreams and handle errors properly. You can also take a look at the example repository.

## An upstream defines a usable backend which allows sending requests to several servers
upstream backends {
    server localhost:81;
    server localhost:82;
}

server {
    ## This server listens on port 80
    listen 80;
    location / {
        ## All requests to "http://localhost:80/" will be sent to the upstream named "backends"
        proxy_pass "http://backends/";

        ## proxy_next_upstream takes several arguments telling NGINX when to send the request to the next upstream
        proxy_next_upstream http_500;
    }
}

server {
    ## This server listens on port 81
    listen 81;
    location / {
        ## The location "http://localhost:81/" always returns HTTP 500
        return 500;
    }
}

server {
    ## This server listens on port 82
    listen 82;
    location / {
        ## Set Content-Type to text/plain; the default application/octet-stream would result in a download
        add_header Content-Type text/plain;

        ## Simply return HTTP 200 with text
        return 200 "Upstream 2 Responded";
    }
}

You can start the NGINX server with Docker to test several upstreams and see how NGINX reacts to an error. Save the configuration above in a file called 'nginx_upstreams.conf' and run:

[email protected]:[~]: docker run -it -p 80:80 -p 81:81 -p 82:82 -v $(pwd)/nginx_upstreams.conf:/etc/nginx/conf.d/default.conf nginx

Once the container is running, open http://localhost/ in your browser and you should see the response "Upstream 2 Responded".

http://localhost:80/



When you open the first upstream directly, you receive an HTTP 500.

http://localhost:81/

The second upstream responds with the same message as http://localhost:80/ because it is used whenever the first upstream answers with HTTP 500.

http://localhost:82
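The failover behaviour can be sketched as a small shell function (a simplified model, not how NGINX works internally): try the upstreams in their configured order and take the first answer that is not an HTTP 500.

```shell
# Simplified model of "proxy_next_upstream http_500": the arguments are the
# status codes the upstreams would answer with, in configured order.
try_upstreams() {
  for code in "$@"; do
    if [ "$code" != "500" ]; then
      echo "served with HTTP $code"
      return 0
    fi
  done
  echo "all upstreams failed"
  return 1
}

try_upstreams 500 200
# prints: served with HTTP 200
```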

Actual Config for NGINX

The following config can be used to run a Vault cluster behind an NGINX server. We deployed one NGINX container on each Vault node and created a DNS entry with two CNAMEs as destinations (DNS load balancing).

# This upstream is used to load balance between two Vault instances
upstream vault_backend {
    server vault1.tld:8200;
    server vault2.tld:8200;
}

server {
    listen 80;
    server_name vault.tld;

    # This location handles redirects sent by the HA Vault cluster
    location @handle_vault_standby {
        set $saved_vault_endpoint '$upstream_http_location';
        proxy_pass $saved_vault_endpoint;
    }

    # This location is a failover load balancer for all Vault instances
    location ~* ^/(.+)$ {
        proxy_pass "http://vault_backend/$1";
        proxy_next_upstream error timeout invalid_header http_500 http_429 http_503;
        proxy_connect_timeout 2;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_intercept_errors on;
        error_page 301 302 307 = @handle_vault_standby;
    }
}

If everything is set up correctly and your Vault cluster is unsealed, you should get valid responses when interacting with http://vault.tld/.*.

Conclusion

Well, I hope you enjoyed this article. It’s quite easy to put a Vault cluster behind NGINX as a load-balancing reverse proxy, e.g. to prevent direct access to Vault or to automatically reach the current active node.

If you have any questions or suggestions for improvement, please leave a comment or send me an e-mail.