In this writeup, I will be covering techniques which can be used to influence web systems and applications in unexpected ways, by abusing HTTP/1.1 hop-by-hop headers. Systems affected by these techniques are likely ones with multiple caches/proxies handling requests before reaching the backend application.

What is a hop-by-hop header?

A hop-by-hop header is a header which is designed to be processed and consumed by the proxy currently handling the request, as opposed to an end-to-end header, which is designed to travel with the request all the way to its final recipient. According to RFC 2616, HTTP/1.1 treats the following headers as hop-by-hop by default: Keep-Alive, Transfer-Encoding, TE, Connection, Trailer, Upgrade, Proxy-Authorization and Proxy-Authenticate. When encountering these headers in a request, a compliant proxy should process or action whatever these headers indicate, and not forward them on to the next hop.

Further to these defaults, a request may also define a custom set of headers to be treated as hop-by-hop by adding them to the Connection header, like so:

Connection: close, X-Foo, X-Bar

In this example, we're asking the proxy to treat X-Foo and X-Bar as hop-by-hop, meaning we want a proxy to remove them from the request before passing the request on. This HTTP/1.1 ability to define custom hop-by-hop headers is key to the techniques and findings in this writeup.
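As a rough sketch of what a compliant proxy is expected to do here, the following Python function (the function name and header dictionary are illustrative, not taken from any real proxy) drops both the default hop-by-hop headers and any extras named in the Connection header before a request is forwarded:

```python
# Illustrative sketch of hop-by-hop header consumption by a compliant proxy.
DEFAULT_HOP_BY_HOP = {
    "keep-alive", "transfer-encoding", "te", "connection",
    "trailer", "upgrade", "proxy-authorization", "proxy-authenticate",
}

def strip_hop_by_hop(headers):
    """Return only the headers that should be forwarded to the next hop."""
    # The Connection header may name extra headers to treat as hop-by-hop,
    # e.g. "Connection: close, X-Foo, X-Bar".
    extra = {
        token.strip().lower()
        for token in headers.get("Connection", "").split(",")
        if token.strip()
    }
    to_remove = DEFAULT_HOP_BY_HOP | extra
    return {k: v for k, v in headers.items() if k.lower() not in to_remove}
```

With the example above, X-Foo and X-Bar are dropped along with the Connection header itself, and only the end-to-end headers survive.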

The theory on abusing hop-by-hop headers

The act of removing a header from an HTTP request is not necessarily going to cause issues; however, being able to remove headers that were not in the original request but were added by either the frontend or another proxy in the chain could create unpredictable results. It's basically like turning off a switch - it could do nothing at all, or it could be catastrophic.

For example, perhaps a header is added somewhere in the chain which instructs the backend of an access control decision or of the fact the request came from an Internet user, and its absence in the request triggers a logic change in the application. Perhaps application logic is assuming a header will be present because a proxy unconditionally adds the header, and dumps out juicy debug error information when it is not there. It is the act of a frontend proxy forwarding the hop-by-hop header list that can create issues, as any header it adds to the request may be up for removal by the next hop, and each time a hop in the chain forwards the hop-by-hop list instead of consuming the headers, the opportunity for impact is increased.

You may have noticed that the Connection header itself is listed above as a default hop-by-hop header. This would suggest a compliant proxy should not be forwarding a request's list of custom hop-by-hop headers to the next server in the chain in its Connection header when it forwards the request - that is, a compliant proxy should consume the request's Connection header entirely. However, my research suggests this may not always be occurring as expected - some systems appear to either forward the entire Connection header, or copy the hop-by-hop list and append it to their own Connection header. For example, HAProxy appears to pass the Connection header through untouched, as does Nginx when acting as a proxy.

The following graphic shows how abusing hop-by-hop headers may create issues, if the backend is expecting X-Important-Header and incorporates its presence in a logical decision:

Below I will cover some examples I have encountered of hop-by-hop header abuse causing effects in web applications, as well as some ideas on where impact could be found. The potential outcomes, however, are going to be very specific to the application and infrastructure being targeted, the header(s) being targeted, and what those headers mean to the backend.

Testing for hop-by-hop header abuse potential

Before we get into real world examples, it may be handy to know whether a system is vulnerable to some form of hop-by-hop header abuse before diving in too deep. Luckily, this is pretty quick to test: identify a request header that creates a noticeable difference in the response depending on whether it is present, and see what happens when it is also listed as a hop-by-hop header. If the system is removing the header, then the response when the header is both in the request and listed in the Connection header should be the same as when it isn't present in the request at all, but different to when it is present in the request and not listed as hop-by-hop.
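That three-way comparison can be captured in a small helper. This is a hypothetical sketch: each probe response is reduced to a (status code, body length) tuple, and the function reports whether listing the header as hop-by-hop made the response collapse into the header-absent case:

```python
# Hypothetical helper: decide whether a proxy chain respected our custom
# hop-by-hop list, given three probe responses summarised as
# (status_code, body_length) tuples.
def hop_by_hop_respected(with_header, without_header, header_as_hop_by_hop):
    # The header must matter in the first place, and naming it in the
    # Connection header must make the response match the header-absent case.
    return (with_header != without_header
            and header_as_hop_by_hop == without_header)
```

For instance, if an authenticated request normally returns a 200 but returns a 401 both without the Cookie header and with Cookie listed as hop-by-hop, the helper returns True - something in the chain is stripping it.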

A quick and easy test is the Cookie header, against an endpoint which requires authentication (assuming the target system uses cookie auth). Take for instance the following request:

GET /api/me HTTP/1.1
Host: foo.bar
Cookie: session=xxx
Connection: close, Cookie

If we say that /api/me is supposed to return an HTTP 200 with user details when the request is authenticated, and session=xxx is a valid authenticated cookie session value, then the above request may return something other than the anticipated response if the system is allowing hop-by-hop headers defined in the original request to modify which headers get sent to the backend.

In this example, the Cookie header was provided in the original request, so a proxy is not doing anything wrong by removing it before sending the request onwards, and as such this test is only a very basic indication that a proxy (either the frontend or another in the chain) is respecting our custom hop-by-hop headers list and actioning its removal - it doesn't confirm our custom hop-by-hop list is being forwarded along the chain to another proxy, which is where things get more interesting (and is what I'll refer to as "forwarded hop-by-hop" from here on out). To test for this, you'll likely need to employ the help of a tool like Burp's Intruder, or the following script I wrote (which also tests for cache poisoning, covered below):

https://gist.github.com/ndavison/298d11b3a77b97c908d63a345d3c624d

If you pass in a list of known headers, such as this one, you can observe which headers are causing effects despite not being in your original request:

for HEADER in $(cat headers.txt); do python poison-test.py -u "https://target" -x "$HEADER"; sleep 1; done

This will cycle through the entire header list and print out if its presence in the hop-by-hop list created a different status code or response body size. If its presence caused a different response and the header wasn't in your original request (in my script's case, very few are), you may have found an issue worth exploring, as this suggests the hop-by-hop list is being forwarded at least one hop.

So now that the general theory and testing are covered, let's move on to some use cases where this technique may be useful.

Masking the originating IP address by hiding X-Forwarded-For

When a frontend proxy accepts a user request, it may add the IP address of this user to the X-Forwarded-For (XFF) header, so infrastructure and apps in the backend can know the IP address of the requesting user. However, by instructing proxies that this header is hop-by-hop, we may end up removing this header from the request, and the backend app will either never receive it, or it will receive an IP address value that is not of the original user, but of a server elsewhere in the chain.

For example, gorouter in CloudFoundry will set the IP address of the device before it as the XFF header value if the header isn't already present in the request, before it forwards the request to the backend app. So if the device before gorouter forwards the request's hop-by-hop list and this list contains X-Forwarded-For, gorouter will strip out X-Forwarded-For, set it to the previous device's IP address, and forward that to the backend app - effectively scrubbing the original requestor's IP address from the header, at least as far as the backend app can tell.

Other than disguising originating IP from some components of a system's infrastructure, this technique may offer a way to influence authentication or access control decisions. Imagine an application behind a load balancer that then forwards to a proxy that finally forwards to the app. When encountering a request originating from a local IP range (e.g. 10.1.2.0/24 ), this app treats the request as trusted in some way, perhaps granting unfettered access to /admin . Because it is behind a reliable proxy, the app may trust that, even if an attacker attempts a traditional X-Forwarded-For spoof, the load balancer will still append the real originating IP to the header, such that it looks like <attacker spoofed ip>, <real attacker ip> , so the app can safely handle spoof attempts. However, if the XFF header is being stripped out before reaching the app, which may be the case if an attacker adds XFF as a hop-by-hop header, then a proxy (like gorouter) will react to the absence of X-Forwarded-For by taking the IP address of the load balancer before it as the requesting IP (e.g. 10.1.2.3 ), and the final X-Forwarded-For value to reach the application will be 10.1.2.3 , with nothing else appended. In such an application, this request would grant access to /admin because that's a local address.
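To make that logic concrete, here is a hypothetical sketch of the kind of trust check described above (the network range and function name are assumptions for illustration): the app trusts the last entry in X-Forwarded-For because it assumes its own load balancer appended it, so spoofed prefixes are harmless - but a lone local-range value, which is what arrives once the real header has been stripped and regenerated, passes the check:

```python
import ipaddress

# Hypothetical internal range the app treats as trusted.
TRUSTED_NET = ipaddress.ip_network("10.1.2.0/24")

def is_request_trusted(xff_value):
    """Trust the last XFF entry, assumed appended by our load balancer."""
    last_hop = xff_value.split(",")[-1].strip()
    try:
        return ipaddress.ip_address(last_hop) in TRUSTED_NET
    except ValueError:
        return False
```

A traditional spoof like "10.1.2.3, 203.0.113.50" fails because the load balancer appended the attacker's real address last, but if the chain strips XFF and a proxy like gorouter regenerates it as just "10.1.2.3", the check passes.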

Another thing to keep in mind is XFF is only one header used for passing on the real IP address of a user - depending on the system being targeted, you may also have Forwarded , X-Real-IP , and a bunch of others that are less common.

Fingerprinting services

Using the technique to find forwarded hop-by-hop headers as outlined in the testing instructions above, one could potentially gather more information about a system based on the headers that, when removed from a request via this technique, cause an error or an otherwise noticeable difference. Obviously, the more specific a header is to a particular technology or stack, the more telling this sort of outcome may be - removing X-Forwarded-Proto causing an issue is perhaps less informative than something like X-BLUECOAT-VIA doing so, for instance.

If it appears the frontend itself is erroring because it dislikes your attempt to add a blacklisted hop-by-hop header (which you may be able to conclude based on how quickly the response comes back, relative to the target system's response times for other errors, cache hits etc), then this itself could be useful - whether the error is generated by a fault in the system or because the frontend is hardened against exactly this technique doesn't really matter for the purpose of information gathering. For example, setting headers like Age, Host, X-Forwarded-For, Server and a fair few other common ones as hop-by-hop returns a very simple zero body length HTTP/1.1 400 Bad Request from Varnish, which may be an alternative way to detect such a cache if it was otherwise not revealing its presence in response headers etc.

Cache poisoning DoS

This one is more theory than practice as I haven't encountered a real world example when testing against systems in scope for bug bounty programs, but the impact here is very similar to the outcomes covered in this cache poisoning DoS research, and the Responsible denial of service with web cache poisoning research on PortSwigger, however the technique is slightly different - rather than directly using or modifying request headers which create an undesired application state that poisons web caches, we're abusing hop-by-hop headers to create the undesired application state, by removing a header which the application relies on to function normally, in this case a header added to the request by a proxy.

For this to be exploitable, what we'd need to happen is: a system's frontend cache forwards the hop-by-hop header list instead of consuming it, an intermediate proxy processes the hop-by-hop list and removes a header either it, another proxy further in the chain, or the backend app requires, and the act of removing such a header results in an undesirable response such as a HTTP 400 or 501 being returned by something after the web cache. Like the above research covers, this might result in the web cache in front of the app choosing to accept this undesirable response as the copy to serve other users, and hence we have cache poisoning denial of service.

While I have not yet found a real world instance of cache poisoning DoS using hop-by-hop header abuse, I have encountered a Varnish configuration which allows hop-by-hop forwarding. By default Varnish appears to consume the Connection header in a request and doesn't add it to the backend request, however the official repo suggests the following config for Websocket support:

sub vcl_recv {
    if (req.http.upgrade ~ "(?i)websocket") {
        return (pipe);
    }
}

sub vcl_pipe {
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
        set bereq.http.connection = req.http.connection;
    }
}

This is found at https://github.com/varnishcache/varnish-cache/blob/78c243894bed86b3c0637fda49d47f3d33fc72b0/doc/sphinx/users-guide/vcl-example-websockets.rst.

What this config is doing is sending any Upgrade: websocket request into the vcl_pipe subroutine, which according to the official docs means "Varnish stops inspecting each request and just shuffles bytes to the backend". Once a request is inside the pipe routine, this config will reflect the request's Connection header into the backend request's Connection header, meaning we've achieved hop-by-hop forwarding to the backend, albeit with the Upgrade: websocket header also in the request. However, we can just send something like Upgrade: websocketz to try and prevent the backend from actually treating the connection as a websocket - the value will probably be considered invalid by the backend and ignored, but it still passes the if (req.http.upgrade ~ "(?i)websocket") condition in the above config and triggers the pipe behavior.
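The websocketz trick works because the VCL condition is a case-insensitive substring regex match rather than an exact comparison. A quick check (using Python's re module here to stand in for Varnish's regex matching, purely for illustration) shows an invalid Upgrade value still matches the pattern:

```python
import re

# The same pattern as the VCL condition: case-insensitive substring match.
pattern = r"(?i)websocket"

# An invalid Upgrade value still triggers the pipe branch...
assert re.search(pattern, "websocketz")
# ...as does the legitimate value, of course.
assert re.search(pattern, "WebSocket")
# ...while unrelated Upgrade values do not.
assert re.search(pattern, "h2c") is None
```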

The following request was tested against Varnish 6.3 with the above Websocket config:

GET / HTTP/1.1
Host: foo.bar
Upgrade: websocketz
Connection: keep-alive, xxx

And this is what Varnish sent to the backend:

GET / HTTP/1.1
Host: foo.bar
X-Forwarded-For: 192.168.176.1
xxx: yyy
X-Varnish: 11
upgrade: websocketz
connection: keep-alive, xxx

And here's the relevant raw Varnish log:

24 Begin          b bereq 23 pipe
24 BereqMethod    b GET
24 BereqURL       b /
24 BereqProtocol  b HTTP/1.1
24 BereqHeader    b Host: foo.bar
24 BereqHeader    b Connection: keep-alive, xxx
24 BereqHeader    b X-Forwarded-For: 192.168.176.1
24 BereqHeader    b xxx: yyy
24 BereqHeader    b X-Varnish: 23
24 BereqUnset     b Connection: keep-alive, xxx
24 BereqHeader    b Connection: close
24 VCL_call       b PIPE
24 BereqHeader    b upgrade: websocketz
24 BereqUnset     b Connection: close
24 BereqHeader    b connection: keep-alive, xxx
24 VCL_return     b pipe

FYI, the xxx header was added in the Varnish config's vcl_recv routine, i.e. set req.http.xxx = "yyy";. When the backend receives this request, if it actions hop-by-hop header removals, then we should see that xxx is gone.

My understanding of Varnish's pipe mode means there is no cache in play here, so this is unlikely to offer any cache poisoning DoS potential, however setups with this config may at least continue to offer a way to abuse hop-by-hop headers despite being behind an otherwise robust cache layer like Varnish.

So, what would it take to make Varnish vulnerable to a form of cache poisoning DoS via hop-by-hop header abuse? I eventually came to the following config, which forwards the request's hop-by-hop headers and should also be compatible with caching:

vcl 4.0;

import var;

backend default {
    .host = "foo.bar";
    .port = "80";
}

sub vcl_recv {
    set req.http.xxx = "yyy";
    var.global_set("conn_string", req.http.connection);
}

sub vcl_backend_fetch {
    set bereq.http.connection = var.global_get("conn_string");
}

This requires the variables VMOD extension for Varnish to be installed. What this is doing is creating a global var out of the request's Connection header, and applying it to the backend request. This is similar to the above Websocket config, but this pattern does not prevent caching. I have no idea why anyone would want to legitimately copy the Connection value from the user request to the backend request like this, but if they did, and the backend app is sensitive to the removal of the xxx header used in this example, it could result in cache poisoning DoS.

For a demo of cache poisoning DoS using this config, see the following:

The xxx header added by Varnish is expected by the app, and without it the app errors with a (cacheable) 404. The full setup used was: Varnish 6.3 (using the above config) in front of Apache HTTPD (I couldn't get nginx to remove the headers in the hop-by-hop list), which uses ProxyPass to a Flask app running on gunicorn.
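To illustrate the kind of backend behaviour this demo relies on, here is a minimal WSGI sketch (entirely hypothetical - the actual demo used a Flask app) that returns a cacheable 404 whenever the proxy-added xxx header is missing:

```python
# Hypothetical WSGI backend: depends on a header the proxy normally adds,
# and serves a cacheable error when it is absent.
def app(environ, start_response):
    cacheable = [("Cache-Control", "public, max-age=300"),
                 ("Content-Type", "text/plain")]
    if environ.get("HTTP_XXX") == "yyy":
        start_response("200 OK", cacheable)
        return [b"hello"]
    # Header stripped somewhere in the chain: this 404 is what a frontend
    # cache could store and then serve to everyone else.
    start_response("404 Not Found", cacheable)
    return [b"not found"]
```

Run behind a forwarding config like the Varnish one above, a single request listing xxx as hop-by-hop could cause this 404 to be cached and served to all users until the cache entry expires.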

In server-side requests (by design or forged)

This one is a little out there, but bear with me. Some systems give users the ability to define requests that will be performed by the server side, such as when adding webhooks or similar. While it isn't normal for these to include the ability to define the connection's hop-by-hop header list directly, if you're able to add custom headers, you could try adding a Connection header and seeing if it is accepted along with your hop-by-hop headers.

Otherwise, if you have an exploitable SSRF vulnerability in a system, adding this technique could reveal more information or help make the SSRF more impactful. You would likely need the ability to inject headers along with the SSRF though, which is fairly rare.

This one isn't really a different type of impact, but more of a different launching point for the same uses already covered - that is, modifying the request coming from the target's systems rather than your client.

WAF rule bypass

If a system has a WAF rule which requires the presence of a header in the request, then this may be bypassed using hop-by-hop. In this case, the WAF would presumably have to not strip the hop-by-hop headers itself - otherwise it may do so before inspecting the request for conformance, which likely wouldn't bypass the rule check. If the WAF does happen to ignore the Connection hop-by-hop list and, better still, forwards it to the next hop, then you should be able to include the header in the request to pass the WAF rule, but also have the header listed as a hop-by-hop, so the next proxy strips it out, effectively sending a request through to the backend without the header.

Further research needed

It feels like there is more here to discover, with regard to common headers that can be abused by this technique more reliably than, say, a custom header, and other uses and impacts the technique may offer. It may also be useful to test various proxies and caches to see how vulnerable they are to this, and whether there is config that may make them vulnerable if they're not out of the box - as outlined above, my testing suggests HAProxy and Nginx (configured as a proxy) will forward the hop-by-hop list, whereas Apache HTTPD does not, and Varnish doesn't unless you really go out of your way to configure it to do so; however, my research didn't drill down into changes across versions and common configuration patterns. I also didn't do much in the way of testing popular caching/proxy services like Cloudflare, Cloudfront etc, although quick tests suggest they are not forwarding hop-by-hop lists.