An attacker in a privileged network position, such as an ISP or the owner of a malicious hotspot, can cause an HTTPS request to be repeated by disrupting the TLS connection to the client browser at the right moment. Modern browsers usually retry failed requests automatically, which makes this attack invisible to the end user.

Thai Duong, Thiago Valverde, Quan Nguyen

Google Security Team

{thaidn, valverde, quannguyen}@google.com

Never, never, never, never give up. (Winston Churchill)

One of the authors was once advised by a self-help book that he should never give up, be confident in himself, and keep trying. The secret to success is failure, wrote the book. Said author had always believed this to be great wisdom, until he realized that it could lead to replay attacks.

Replay attacks against HTTPS

When a browser wants to send an HTTPS request, it passes the plaintext HTTP payload to the TLS (Transport Layer Security) stack, which divides the payload into records. Each record is then further compressed (just kidding!), encrypted, and delivered to the other side. TLS guarantees that the encrypted stream is non-replayable, by deriving a set of new keys for each connection and assigning a unique sequence number to each record. This prevents an attacker from copying these records and replaying them on another connection, because the encryption keys would not match. Replaying them on the same connection would not work either, because the sequence numbers would not match, and the records would be rejected.
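As a toy illustration of why replaying raw records fails, the sketch below derives a distinct key per connection and binds each record's integrity tag to a sequence number. HMAC stands in for TLS's actual AEAD record protection and key schedule; all function names here are our own, not TLS's.

```python
import hashlib
import hmac
import os

def derive_connection_key(master_secret: bytes, connection_id: bytes) -> bytes:
    """Toy per-connection key derivation (stands in for the TLS key schedule)."""
    return hmac.new(master_secret, b"key-expansion" + connection_id,
                    hashlib.sha256).digest()

def protect_record(key: bytes, seq: int, payload: bytes) -> bytes:
    """Toy record protection: the tag binds the payload to both the
    connection key and the record's sequence number."""
    tag = hmac.new(key, seq.to_bytes(8, "big") + payload, hashlib.sha256).digest()
    return tag + payload

def accept_record(key: bytes, seq: int, record: bytes) -> bool:
    tag, payload = record[:32], record[32:]
    expected = hmac.new(key, seq.to_bytes(8, "big") + payload,
                        hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

master = os.urandom(32)
key_a = derive_connection_key(master, b"connection-A")
key_b = derive_connection_key(master, b"connection-B")

record = protect_record(key_a, seq=0, payload=b"GET / HTTP/1.1")

print(accept_record(key_a, 0, record))  # True: right key, right sequence number
print(accept_record(key_b, 0, record))  # False: replayed on another connection
print(accept_record(key_a, 1, record))  # False: replayed on the same connection
```

The point of the sketch is that a copied record is only valid for one (key, sequence number) pair, which is exactly why the attack below does not bother replaying records across connections.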

We are not interested in replaying TLS records, however. We would like to replay HTTP requests performed over a TLS connection. The attack is trivial — you are not alone if you feel that we have cheated somewhere — but it works like a charm. We discovered that browsers would automatically retry requests, regardless of their methods, if their first attempt failed due to a network failure. Hence a man-in-the-middle adversary can replay HTTPS requests without any indication to the user, as follows:

1. The adversary sets itself up as a TCP layer relay for the targeted TLS connection to, say, example.com.

2. When the adversary detects a request that it wants to replay (using traffic analysis), it copies all relevant TLS records, and closes the socket to the browser instead of relaying the HTTP response from the server. It keeps the connection to example.com open.

3. Over a fresh socket, the browser automatically retries the (presumed failed) request. The adversary then forwards it normally to example.com.

4. The adversary sends the records copied in step 2 to example.com, which happily accepts them. Thus, the request sent in step 3 has been duplicated and replayed.
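The steps above can be simulated without any real network. In the hypothetical model below, a toy server debits an account once per request it receives; the adversary drops the response to the browser, the browser retries, and the adversary then delivers the buffered copy on the still-open connection, so one user action is executed three times. All names and amounts are invented for illustration.

```python
class ToyServer:
    """Hypothetical server that debits an account once per request received."""
    def __init__(self) -> None:
        self.balance = 100

    def handle(self, request: bytes) -> bytes:
        if request == b"POST /transfer?amount=10":
            self.balance -= 10
        return b"200 OK"

server = ToyServer()
captured = b"POST /transfer?amount=10"

# Step 2: the original request reaches the server; the adversary copies it
# and drops the response, so the browser sees a network error.
server.handle(captured)

# Step 3: the browser retries over a fresh connection; the adversary
# relays the retry normally.
server.handle(b"POST /transfer?amount=10")

# Step 4: the adversary replays the captured copy on the connection it
# kept open in step 2.
server.handle(captured)

print(server.balance)  # 70: one user action, three debits
```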

We successfully mounted this attack against a sample victim application, as well as an internal website at Google. We could duplicate HTTP POST requests sent by the latest version of Chrome and Firefox (we did not test any other browsers). As soon as the socket is closed, both browsers would automatically retry the request once, as long as there was an open, idle socket, which is often the case in a common navigation session. When there were no open sockets, the browsers would just display an error, at which point we speculate that most users would likely hit refresh to resubmit the request themselves.

In addition to browsers, we believe, but did not verify, that most SMTP or IMAP clients would also retry automatically when faced with a network error.

Correlating HTTP requests and TLS records

We discovered that TLS records and HTTP requests are highly correlated. For example, an HTTP GET request usually maps to a single TLS record, whereas Chrome splits an HTTP POST request's headers and body into two or more TLS records. This behavior makes POST requests easy to identify within the TLS stream.
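Because TLS record headers (content type, version, length) travel in the clear, an on-path observer can count records without decrypting anything. Below is a minimal sketch of such a record counter; the payload sizes are made up and stand in for captured traffic.

```python
import struct

APPLICATION_DATA = 0x17  # TLS content type for application data

def count_records(stream: bytes) -> int:
    """Count TLS records in a captured byte stream by walking the
    unencrypted 5-byte record headers."""
    count, offset = 0, 0
    while offset + 5 <= len(stream):
        content_type, version, length = struct.unpack(
            "!BHH", stream[offset:offset + 5])
        offset += 5 + length  # skip over the (encrypted) record body
        count += 1
    return count

def make_record(payload: bytes) -> bytes:
    # TLS 1.2-style record: type 0x17, version 0x0303, 2-byte length, body
    return struct.pack("!BHH", APPLICATION_DATA, 0x0303, len(payload)) + payload

get_like = make_record(b"x" * 120)                              # one record
post_like = make_record(b"x" * 200) + make_record(b"y" * 512)   # headers + body

print(count_records(get_like))   # 1
print(count_records(post_like))  # 2
```

A real classifier would combine record counts with record sizes and timing, but even this much is enough to tell a lone GET apart from a Chrome POST.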

By having prior knowledge of the target website and the ability to discern individual HTTP requests within a TLS stream, an adversary can selectively intercept requests and make them seem to have failed to the browser, even if they succeeded from the perspective of the server. We assume that traffic analysis would allow us to pinpoint precisely the requests we want to replay, so we will not discuss it further in this paper.

Countermeasures

Give up after the first failure and stop reading self-help books. Seriously.

Of course, browsers would not follow our advice, for a good reason. Transient network problems are frequent on the Internet, and browsers that give up on first failure would eventually frustrate their users. Thus, browsers are unlikely to change their behavior, so websites must assume that attackers can force browsers to resend requests, and must be able to detect and reject duplicated state-changing ones.

When we tried to mount the attack against PayPal, it did not work, because PayPal assigns a unique identifier (ID) to each transaction and rejects any other transaction having the same ID. If the PayPal model does not work for you, another possible mitigation is to embed in each request a signed token with a short TTL, to act as a nonce.
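A minimal sketch of the signed-token mitigation, assuming an HMAC over a random nonce and an issue timestamp, plus a server-side cache of tokens already seen. The names and token layout are our own invention, not PayPal's or any particular framework's.

```python
import hashlib
import hmac
import os
import time

SECRET = os.urandom(32)   # server-side signing key
TTL_SECONDS = 60
seen_tokens = set()       # in production: a shared store with expiry

def issue_token() -> str:
    """Embed a random nonce and an issue time, signed so the server
    can later verify both without storing anything at issue time."""
    body = f"{os.urandom(8).hex()}:{int(time.time())}"
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}:{sig}"

def accept_token(token: str) -> bool:
    body, _, sig = token.rpartition(":")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                              # forged or corrupted
    nonce, _, issued = body.partition(":")
    if time.time() - int(issued) > TTL_SECONDS:
        return False                              # expired
    if token in seen_tokens:
        return False                              # duplicate: a replay
    seen_tokens.add(token)
    return True

token = issue_token()
print(accept_token(token))  # True on first use
print(accept_token(token))  # False on replay
```

The short TTL bounds how long the server must remember used tokens, which is what makes the duplicate cache practical.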

If none of these options work for you, just ignore this attack and move on. This is perhaps acceptable, since users would likely resubmit their requests anyway, and we think serious attackers might never bother mounting it against you or your websites. We would be happy to be proven wrong, though.

Acknowledgements

We are grateful to many of our colleagues at Google, including but not limited to Krzysztof Kotowicz, Eduardo Vela Nava, Chris Palmer, Bill Cox, Ryan Hamilton, and Adam Langley, who shared with us their thoughts on this attack and its countermeasures.