Why IPv6?

We have apparently run out of IPv4 addresses, according to the Internet Architecture Board (IAB):

“the pool of unassigned IPv4 addresses has been exhausted”

It is the future. According to the IAB:

“Preparation for this transition requires ensuring that many different environments are capable of operating completely on IPv6 without being dependent on IPv4 [see RFC 6540]. We recommend that all networking standards assume the use of IPv6, and be written so they do not require IPv4. We recommend that existing standards be reviewed to ensure they will work with IPv6, and use IPv6 examples. Backward connectivity to IPv4, via dual-stack or a transition technology, will be needed for some time. The key issue for SDOs is to remove any obstacles in their standards which prevent or slow down the transition in different environments.”

As far as I can tell, this means that eventually IPv4 support will wane in favor of IPv6.

It is also supposed to improve security: IPv6 was designed with native support for end-to-end encryption (IPsec), Secure Neighbor Discovery, and a far larger address space (which makes brute-forcing through addresses much harder). Many also argue, however, that misconfigured firewalls, inadequate firewall support, and untrained network admins and security specialists could result in less security in its current state.

Where are we at with adoption of IPv6?

As of September 24th, 2018, Google measures worldwide IPv6 adoption at around 20–24% (depending on which day you look).

IPv6 Worldwide Adoption Trend (Jan 2009 — Sept 2018)

Some countries are doing better than others (Go Belgium!). Here is the breakdown:

IPv6 Adoption Rates by Country — Source from Google’s IPv6

Setting up IPv6 TCP Load Balancer on Google Cloud

Out of the box, Google’s Cloud Platform (GCP) offers a great solution for supporting IPv6 Termination with a TCP load balancer. Here is an illustration of this below.

I am not going to cover all the details of setting up the TCP load balancer (Google has documented that pretty well). Google allows you to use a TCP load balancer for the following ports: 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1883, 5222.

The key here is to set up your TCP load balancer with proxy protocol turned on and with an IPv6 frontend. The GCP load balancer will terminate the IPv6 connection, add the original client address to the proxy protocol header, and forward the traffic as IPv4. Proxy protocol looks like this across the wire (these examples come from the AWS docs):

PROXY_STRING + single space + INET_PROTOCOL + single space + CLIENT_IP + single space + PROXY_IP + single space + CLIENT_PORT + single space + PROXY_PORT + "\r\n"

Here is an example: PROXY TCP6 2001:DB8::21f:5bff:febf:ce22:8a2e 2001:DB8::12f:8baa:eafc:ce29:6b2e 35646 80\r\n
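To make the header format concrete, here is a rough sketch in Python of how a backend might parse a version 1 proxy protocol header like the one above. This is illustrative only, not a complete implementation of the spec (it skips the UNKNOWN-with-addresses variant and length limits):

```python
def parse_proxy_v1(header: bytes) -> dict:
    """Parse a PROXY protocol v1 header line, e.g.
    b'PROXY TCP6 <client_ip> <proxy_ip> <client_port> <proxy_port>\r\n'."""
    line, sep, _rest = header.partition(b"\r\n")
    if not sep:
        raise ValueError("header not terminated with CRLF")
    parts = line.decode("ascii").split(" ")
    if parts[0] != "PROXY":
        raise ValueError("not a proxy protocol v1 header")
    if parts[1] == "UNKNOWN":
        return {"protocol": "UNKNOWN"}
    proto, client_ip, proxy_ip, client_port, proxy_port = parts[1:6]
    return {
        "protocol": proto,          # TCP4 or TCP6
        "client_ip": client_ip,     # the original client address
        "proxy_ip": proxy_ip,       # the load balancer address
        "client_port": int(client_port),
        "proxy_port": int(proxy_port),
    }

hdr = b"PROXY TCP6 2001:DB8::21f:5bff:febf:ce22:8a2e 2001:DB8::12f:8baa:eafc:ce29:6b2e 35646 80\r\n"
print(parse_proxy_v1(hdr)["client_ip"])
```

This is exactly the information HAProxy recovers for you when you use accept-proxy, as shown later.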



Configuring HaProxy to Support Rate Limiting — in TCP mode and with accept-proxy

Unfortunately, many straight TCP load balancers don't offer rate limiting on a per-IP basis. It is up to you to protect yourself from bad apples, and this is where HAProxy comes to the rescue. You want to keep all the good apples.

In this example, let’s assume that we have this awesome app listening on port 5222 on our internal network and we are able to hit this app via external public dns/ip.

Configure HaProxy To Forward To Your App

frontend awesome_app_fe
    mode tcp
    bind *:5222 accept-proxy
    default_backend awesome_app_be

backend awesome_app_be
    balance roundrobin
    server awesome_app_server1 mydnsnameformyapp.com:5222

Probably the biggest takeaway of this very basic setup is the accept-proxy parameter after the bind. This is key because it tells HAProxy to expect the proxy protocol header (described above).

Configure Rate Limiting For HaProxy With Stick Tables

What are stick tables you ask? Let’s let the experts at haproxy.com explain:

“Released in 2010, stick tables were created to solve the problem of server persistence. However, StackExchange, the network of Q&A communities that includes Stack Overflow, saw the potential to use them for rate limiting of abusive clients, aid in bot protection, and tracking data transferred on a per client basis.”

This is where things got a little fuzzy. I found many people using stick tables, but few using them with HAProxy in mode tcp, fewer still with accept-proxy, and once you add IPv6 to the mix, hardly anyone was doing two of these things, let alone all three. That said, I did find a couple of great resources: one was a GitHub repo and the other was an article by haproxy.com.

frontend awesome_app_fe
    mode tcp
    bind *:5222 accept-proxy

    # stick table definition for storing rates
    stick-table type ipv6 size 500k expire 3m store conn_cur,conn_rate(60s)

    # Only allow 10 connections per IP opened
    tcp-request content reject if { src_conn_cur ge 10 }

    # Only allow 50 connections per 60s
    tcp-request content reject if { src_conn_rate ge 50 }

    tcp-request content track-sc1 src

    default_backend awesome_app_be
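To make the two thresholds above concrete, here is a small Python sketch of the same logic: a table keyed by source IP tracking currently open connections (conn_cur) and connections opened within a sliding 60-second window (conn_rate). This only illustrates the semantics; HAProxy's stick tables do this in-process and far more efficiently, and the limits below are just the ones from the config.

```python
import time
from collections import defaultdict, deque

CONN_CUR_LIMIT = 10    # mirrors: src_conn_cur ge 10
CONN_RATE_LIMIT = 50   # mirrors: src_conn_rate ge 50
RATE_WINDOW = 60.0     # mirrors: conn_rate(60s)

conn_cur = defaultdict(int)      # open connections per source IP
conn_times = defaultdict(deque)  # open timestamps per source IP

def allow_connection(src_ip, now=None):
    """Return True if a new connection from src_ip should be accepted."""
    now = time.monotonic() if now is None else now
    times = conn_times[src_ip]
    # Drop timestamps that have fallen out of the sliding window.
    while times and now - times[0] > RATE_WINDOW:
        times.popleft()
    if conn_cur[src_ip] >= CONN_CUR_LIMIT or len(times) >= CONN_RATE_LIMIT:
        return False  # like: tcp-request content reject
    conn_cur[src_ip] += 1
    times.append(now)
    return True

def close_connection(src_ip):
    conn_cur[src_ip] = max(0, conn_cur[src_ip] - 1)
```

Note that, as in the config, a rejected connection is never tracked, so abusive clients don't inflate their own counters further.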

Now, for testing: the easiest way is to hit your app through your GCP load balancer's frontend IP. Once you can connect, take a look at the stick table by using the following command (note: the show table argument will vary with your HAProxy frontend name):

echo "show table awesome_app_fe" | sudo socat unix-connect:/var/run/haproxy.sock stdio
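If you want to watch the counters programmatically, the entries can be pulled out of that output with a small script. The sample output below is only an approximation of the general shape of show table output (key, expiry, then the stored data types); the exact format can vary between HAProxy versions, so treat this as a sketch:

```python
import re

# Assumed sample of `show table` output; illustrative only.
SAMPLE = """# table: awesome_app_fe, type: ipv6, size:512000, used:2
0x5601: key=2001:db8::1 use=0 exp=175000 conn_rate(60000)=3 conn_cur=1
0x5602: key=203.0.113.7 use=0 exp=120000 conn_rate(60000)=51 conn_cur=2
"""

ENTRY = re.compile(
    r"key=(?P<key>\S+) .*conn_rate\(\d+\)=(?P<rate>\d+) conn_cur=(?P<cur>\d+)"
)

def parse_show_table(output):
    """Extract (key, conn_rate, conn_cur) tuples from show table output."""
    entries = []
    for line in output.splitlines():
        m = ENTRY.search(line)
        if m:
            entries.append((m["key"], int(m["rate"]), int(m["cur"])))
    return entries

for key, rate, cur in parse_show_table(SAMPLE):
    print(f"{key}: conn_rate={rate} conn_cur={cur}")
```

The key column is what matters for the next two gotchas: it should contain the client address, not the load balancer address.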

When I initially reviewed the table, I noticed that the GCP load balancer address was being stored, not the client address. I also noticed that any time an IPv6 address came in, it was stored as 0.0.0.0. Two things fixed this:

On the stick-table line, be sure to use type ipv6, not ip; the ipv6 type works for both IPv6 and IPv4 addresses.

Many instructions point you to tcp-request connection instead of tcp-request content, but if you don't use the latter, you will see the GCP load balancer address in the stick table instead of the client address. That isn't going to help.

Hopefully this article saves someone else some pain. The HaProxy documentation can be daunting at times.

Please reach out if you have any feedback or comments!