Using HTTP Pipelining to hide requests

In this post I'm going to discuss using HTTP pipelining to hide malicious HTTP requests. This is not domain fronting, but it uses similar techniques to get the same result: an observer who is not able to perform TLS interception can only see the "good" request, which conceals the "bad" request.

As before, let's start with some background. When the web was young, each object requested by a client was fetched over its own TCP connection: if a page had two images and one JavaScript library, there would be four connections, the first for the page itself, then three more for the extra elements. This was considered to be adding excessive traffic, especially over HTTPS, as the secure connection has to be negotiated for each element. To get around this, pipelining was introduced.

Pipelining allows multiple requests to be grouped together and sent over a single TCP connection. This eliminates the need to set up and tear down individual connections for each page element and so should improve performance. The concept worked well for a while and support was built into most modern browsers and web servers. However, by 2018, too many issues had been found in the implementations and the feature was disabled in the browsers. Despite this, support still exists in most servers and, more importantly, most CDNs.
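To see the mechanics, here is a minimal sketch in Python: two requests written back-to-back down one TCP connection before any response is read. It uses a throwaway local server so it is self-contained; the paths and "localhost" Host values are made up for the demo, not taken from the post.

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # persistent connections, required for pipelining

    def do_GET(self):
        body = f"hello from {self.path}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Both requests are sent before any response is read; each request
# ends with a blank line (CRLF CRLF).
pipe = (b"GET /page1 HTTP/1.1\r\nHost: localhost\r\n\r\n"
        b"GET /page2 HTTP/1.1\r\nHost: localhost\r\n\r\n")

sock = socket.create_connection(server.server_address)
sock.sendall(pipe)

data = b""
while data.count(b"HTTP/1.1 200 OK") < 2:  # wait for both responses
    chunk = sock.recv(4096)
    if not chunk:
        break
    data += chunk
sock.close()
server.shutdown()

print(data.count(b"HTTP/1.1 200 OK"))  # prints 2: both answered on one connection
```

Python's stock `BaseHTTPRequestHandler` happens to cope with pipelined requests because, on a persistent connection, it simply loops reading the next request from the same socket.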

For more information, the Wikipedia page on HTTP pipelining is a good place to start.

Let's start with an example request to my site. Here are the contents of the pipe file I'm going to send:

GET /pipeline/page1.php HTTP/1.1
Host: vuln-demo.com

GET /pipeline/page2.php HTTP/1.1
Host: vuln-demo.com

As you can see, there are two requests, the first for page1.php and the second for page2.php. Both will be sent down the same connection, one after the other, processed by the server, and the responses sent back in order. One of the reasons this concept has now been rejected is that if the first request takes a long time to return, all subsequent requests are blocked waiting for it, which can end up slowing things down rather than speeding them up as designed.
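This head-of-line blocking is easy to demonstrate with another local-server sketch (the /slow and /fast paths and the 0.5 second delay are invented for the demo): the second request is cheap, but its response still has to queue behind the slow first one.

```python
import socket
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        if self.path == "/slow":
            time.sleep(0.5)  # simulate a slow first page
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

sock = socket.create_connection(server.server_address)
sock.sendall(b"GET /slow HTTP/1.1\r\nHost: x\r\n\r\n"
             b"GET /fast HTTP/1.1\r\nHost: x\r\n\r\n")

start, data = time.time(), b""
while b"/fast" not in data:  # wait until the second body arrives
    chunk = sock.recv(4096)
    if not chunk:
        break
    data += chunk
elapsed = time.time() - start
sock.close()
server.shutdown()

# /fast is cheap, but its response still waited behind /slow.
print(data.index(b"/slow") < data.index(b"/fast"))  # prints True
```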

As an aside, before I started researching this, I assumed that pipelining was activated by sending the Connection: keep-alive header. I was wrong: "keep-alive" enables persistent connections, which are a different thing from pipelining. Persistent connections keep the TCP connection open between requests but enforce the original rule of waiting for each previous request to return before making a new one; pipelining removes that rule and lets the client send multiple requests without having to wait. You have to have a persistent connection to pipeline, but having a persistent connection doesn't necessarily mean you can pipeline. In HTTP/1.0, persistence had to be activated with the "keep-alive" header; in HTTP/1.1, persistence is assumed unless the connection is requested to be closed with the Connection: close header. For more about persistent connections see the Wikipedia page on HTTP persistent connections.

Now, let's send the requests:

$ (cat pipe ; sleep 5) | openssl s_client -connect vuln-demo.com:443 -servername vuln-demo.com
<verbose setup stuff>
---
HTTP/1.1 200 OK
Date: Fri, 08 Mar 2019 20:42:47 GMT
Server: Apache
Strict-Transport-Security: max-age=63072000
Upgrade: h2,h2c
Connection: Upgrade, Keep-Alive
X-Content-Type-Options: nosniff
Cache-Control: max-age=0, no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: Wed, 11 Jan 1984 05:00:00 GMT
X-XSS-Protection: 0;
Access-Control-Allow-Origin: https://vuln-demo.com
Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Access-Control-Allow-Origin
Content-Length: 14
Keep-Alive: timeout=5, max=100
Content-Type: text/html; charset=UTF-8

This is page 1HTTP/1.1 200 OK
Date: Fri, 08 Mar 2019 20:42:47 GMT
Server: Apache
Strict-Transport-Security: max-age=63072000
X-Content-Type-Options: nosniff
Cache-Control: max-age=0, no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: Wed, 11 Jan 1984 05:00:00 GMT
X-XSS-Protection: 0;
Access-Control-Allow-Origin: https://vuln-demo.com
Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Access-Control-Allow-Origin
Content-Length: 14
Content-Type: text/html; charset=UTF-8

This is page 2DONE

As can be seen, there are two responses, page 1 and page 2, showing that the pipeline worked.
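The same test can be sketched with Python's ssl module instead of openssl s_client: one TLS connection, the SNI set via server_hostname, and the pre-built pipelined payload written in one go. The hostnames and paths come from the post; actually calling `pipeline_over_tls` (the commented-out line) needs network access, so only the payload is built here.

```python
import socket
import ssl

def pipeline_over_tls(server: str, sni: str, payload: bytes, timeout: float = 5) -> bytes:
    """Send pre-built pipelined requests over one TLS connection."""
    ctx = ssl.create_default_context()
    with socket.create_connection((server, 443), timeout=timeout) as raw:
        with ctx.wrap_socket(raw, server_hostname=sni) as tls:  # sets the SNI
            tls.sendall(payload)
            chunks = []
            try:
                while (chunk := tls.recv(4096)):
                    chunks.append(chunk)
            except socket.timeout:
                pass  # server kept the connection open; we have our data
            return b"".join(chunks)

def build_pipe(*hosts_paths) -> bytes:
    # Each request must end with a blank line (CRLF CRLF).
    return b"".join(
        f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n".encode()
        for host, path in hosts_paths
    )

pipe = build_pipe(("vuln-demo.com", "/pipeline/page1.php"),
                  ("vuln-demo.com", "/pipeline/page2.php"))
# responses = pipeline_over_tls("vuln-demo.com", "vuln-demo.com", pipe)
```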

As I already have a setup with AWS CloudFront, I'll reuse that to test against AWS. For full details of the setup, see the description at the start of my previous post on CloudFront. The TL;DR version is: "fronted.digi.ninja" is the "good" domain, which sits in front of my main site "digi.ninja", and "d1sdh26o090vk5.cloudfront.net" is the "bad" domain, which points at the site "frontme.vuln-demo.com". Here is the request:

GET / HTTP/1.1
Host: fronted.digi.ninja

GET / HTTP/1.1
Host: d1sdh26o090vk5.cloudfront.net

And the results:

$ (cat pipe2 ; sleep 5) | openssl s_client -connect fronted.digi.ninja:443 -servername fronted.digi.ninja | \
grep "<title>"
depth=2 C = US, O = Amazon, CN = Amazon Root CA 1
verify return:1
depth=1 C = US, O = Amazon, OU = Server CA 1B, CN = Amazon
verify return:1
depth=0 CN = fronted.digi.ninja
verify return:1
<title>DigiNinja - DigiNinja</title>
<title>Fronted Vuln Demo</title>
DONE

I've grepped the results for just the page titles, but as you can see, the first result is from my site and the second is from the vuln-demo site, showing that CloudFront supports pipelining and the requests worked as expected. Anyone watching the traffic will see the DNS lookup for the "good" domain, "fronted.digi.ninja", and, if they are looking for it, the SNI field in the TLS setup, which is also for "fronted.digi.ninja", but will see nothing of the request to "d1sdh26o090vk5.cloudfront.net".

As I said at the start, this is not true domain fronting, but the results are the same: "bad" requests are hidden behind the "good" ones and, without TLS interception, observers will not see anything you do not want them to.

Let's see if the SNI protections that Cloudflare have in place help protect them against pipelining. Here is the request that will be sent:

GET / HTTP/1.1
Host: www.cloudflare.com

GET /index.php HTTP/1.1
Host: digininja.org.uk

And here is the reply:

$ (cat pipe3 ; sleep 5) | openssl s_client -connect cloudflare.com:443 -servername www.cloudflare.com | \
grep "<title>"
depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert High Assurance EV Root CA
verify return:1
depth=1 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert ECC Extended Validation Server CA
verify return:1
depth=0 businessCategory = Private Organization, jurisdictionC = US, jurisdictionST = Delaware, serialNumber = 4710875, C = US, ST = California, L = San Francisco, O = "Cloudflare, Inc.", CN = cloudflare.com
verify return:1
<title>Cloudflare - The Web Performance & Security Company
<head><title>403 Forbidden</title></head>
DONE

As the SNI used in the request is for "www.cloudflare.com", the connection is tied to that host, so the request for "digininja.org.uk" is rejected with a 403, showing they are protected.

So, there we have another way to hide your HTTP traffic, one that works with AWS but not with Cloudflare. At some point in the future, I'll give all these approaches a go with Azure CDN and Google CDN; till then, you've got plenty to play with, so go experiment.
