The requests library is arguably the most widely used HTTP library for Python. However, most of its users are probably not aware that its current stable version happily accepts responses whose body is shorter than what the Content-Length header states. If you are not careful enough to check this yourself, you may end up using corrupted data without even noticing. I have witnessed this first-hand, which is the reason for the present blog post. Let’s see why the current requests version does not perform this check (spoiler: it is a feature, not a bug) and how to perform it manually in your scripts.



What Is the Content-Length Header?

Just to refresh your memory, in the HTTP protocol, the Content-Length header indicates the size of the body of a request or response. It is given in octets, where one octet is 8 bits. For simplicity, I will use the term byte instead of octet throughout the post. Generally, the Content-Length header is used to inform the receiving party when the current request (or response) has finished. Without it, you would not know whether you have received all the data (and so should stop reading) or whether more data is on the way. Of course, the server could close the connection after every request/response (which is what HTTP 1.0 did), but since HTTP 1.1, all connections are considered persistent unless declared otherwise. This significantly speeds up the communication, as you do not have to open a separate connection for each request.
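To make the byte counting concrete, here is a small illustration of my own (not taken from any specification): Content-Length must count encoded bytes (octets), not characters, which matters as soon as the body contains non-ASCII text.

```python
# Content-Length counts octets (bytes), not characters. For non-ASCII
# text, the two differ once the body is encoded, e.g. as UTF-8.
body = 'Žluťoučký kůň'          # 13 characters
encoded = body.encode('utf-8')  # 19 bytes: each accented letter takes 2

# A correct Content-Length header value for this body:
headers = {'Content-Length': str(len(encoded))}
print(len(body), len(encoded))
```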

After reading the above paragraph, the following question may have popped into your head:

What If I Receive Fewer Bytes Than Stated In Content-Length?

Under certain circumstances (network or server-side errors), the server may abruptly close the connection before sending the complete message. The HTTP 1.1 RFC specifies:

When a Content-Length is given in a message where a message-body is allowed, its field value MUST exactly match the number of OCTETs in the message-body. HTTP/1.1 user agents MUST notify the user when an invalid length is received and detected.

So, upon receiving fewer bytes than stated in the Content-Length header, one may rightly expect to be informed about it. To check this, I have put together a simple HTTP server that always answers with the following response and then closes the connection:

HTTP/1.1 200 OK\r
Content-Length: 10\r
\r
123456
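For reference, a misbehaving server of this kind can be sketched in a few lines of plain sockets. This is a simplified stand-in for my test server, not its exact implementation:

```python
import socket

RESPONSE = (
    b'HTTP/1.1 200 OK\r\n'
    b'Content-Length: 10\r\n'
    b'\r\n'
    b'123456'  # only 6 of the promised 10 bytes
)

def serve_once(port=8080):
    """Accept a single connection, send the short response, and hang up."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(('localhost', port))
        server.listen(1)
        conn, _ = server.accept()
        with conn:
            conn.recv(4096)  # read (and ignore) the request
            conn.sendall(RESPONSE)
        # closing the connection here cuts the body short
```

Calling serve_once() in a loop gives you a server that always answers with the truncated response above.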

Then, I wrote a Python script that sends a GET request to the server, checks whether it succeeded, and prints the received data:

import requests
import sys

response = requests.get('http://localhost:8080/')
if not response.ok:
    sys.exit('error: HTTP {}'.format(response.status_code))
print(response.headers)
print(response.content)
print(len(response.content))

When you run it, it succeeds, without raising an exception:

$ python client.py
{'Content-Length': '10'}
b'123456'
6

This is unsettling. Well, maybe this is how all clients behave? To verify, I tried to use curl:

$ curl http://localhost:8080
curl: (18) transfer closed with 4 bytes remaining to read
$ echo $?
18

Hmm. So maybe this is because requests is a library, while curl is a tool? To find out, I used reqwest, an HTTP library for Rust. The full implementation of my testing client is available here. When I ran it, it also notified me about the discrepancy:

error: failed to read the contents of the response cause: end of file before message length reached

There is something fishy going on here with requests …

Why Does the Requests Library Not Warn Me?

When you search the requests repository, you find numerous reports of this surprising behavior (#1855, #1938, #2275, #2833, #3459, #4415). Basically, the reason for not incorporating such a check into requests is threefold:

Firstly, I’d argue that Requests is not technically a user-agent, it’s a library. This frees us from some of the constraints of user-agent behaviour (and in fact we take that liberty elsewhere in the library, like with our behaviour on redirects).



Well, if it is not a user agent, why does it send the following User-Agent header by default?

User-Agent: python-requests/2.18.4

Secondly, if we throw an exception we irrevocably destroy the data we read. It becomes impossible to access. This means that situations where the user might want to ‘muddle through’, taking as much of the data as they were able to read and keeping hold of it, becomes a little bit harder.



This is understandable. However, should this really be the default behavior? I would argue that it should be opt-in: requests would warn you by default, but you could suppress the warning and use the data that you managed to read.

Finally, even if we did want this logic we’d need to implement it in urllib3. Content-Length refers to the number of bytes on the wire, not the decoded length, so if we get a gzipped (or DEFLATEd) response, we’d need to know how many bytes there were before decoding. This is not typically information we have at the Requests level. So if you’re still interested in having this behaviour, I suggest you open an issue over on shazow/urllib3.



urllib3 is the underlying HTTP library used by requests. The original poster submitted an issue there (#311). It was closed with “I’m personally happy to leave this as-is too”, although there was willingness to review a PR adding such a check. And luckily, a year and a half later, such a PR was submitted and accepted (#949)!

After reading the third point above, you may start to rejoice. Unfortunately, even though the urllib3 PR was merged on 2016-08-29, the current stable version of requests (2.18.4 as of this writing, 2018-04-22) still uses an older version of urllib3 that does not provide this functionality. On the bright side, there is a merged requests PR that brings a newer version of urllib3 into requests (#3563). The only problem with it is that it was merged into the requests:proposed/3.0.0 branch, which represents proposed changes for the 3.0 version of requests that is currently under development.
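If you are unsure whether the urllib3 version installed in your environment already carries this fix, you can probe for the parameter with a quick introspection trick (my own; not an official API):

```python
import inspect

import urllib3

# The fix from urllib3 PR #949 added an enforce_content_length parameter
# to HTTPResponse. Probe whether the installed version has it:
params = inspect.signature(urllib3.response.HTTPResponse.__init__).parameters
supported = 'enforce_content_length' in params
print('enforce_content_length supported:', supported)
```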

So, What Can I Do To Detect Incomplete Reads In My Scripts?

requests 3.x

If you come here from the future, just use requests 3.x. It should provide the enforce_content_length parameter, whose default value should be True. That is, if the requests library receives incomplete content, it should raise an exception:

urllib3.exceptions.IncompleteRead: IncompleteRead(6 bytes read, 4 more expected)

requests 2.x

If you come here before the release of requests 3.0, you will have to perform the check by yourself. You can use the following piece of code:

response = requests.get(...)

# Check that we have read all the data as the requests library does not
# currently enforce this.
expected_length = response.headers.get('Content-Length')
if expected_length is not None:
    actual_length = response.raw.tell()
    expected_length = int(expected_length)
    if actual_length < expected_length:
        raise IOError(
            'incomplete read ({} bytes read, {} more expected)'.format(
                actual_length, expected_length - actual_length
            )
        )

The check works as follows. First, we ensure that the response has the Content-Length header. If not, the check is meaningless (more on that later). Then, we get the number of bytes that were actually read and compare it with the expected value. If we have read fewer bytes, we signal an error. Of course, instead of raising an exception, you can do whatever you want (retry, print an error message and quit, complain to a friend, etc.).
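If you need the check in more than one place, it can be factored into a small helper (the helper name is my own). Note that it only relies on the response’s headers mapping and raw.tell(), so it is easy to exercise even without a real server:

```python
def check_complete_read(response):
    """Raise IOError if fewer bytes were read from the wire than the
    Content-Length header promised. A no-op when the header is absent."""
    expected = response.headers.get('Content-Length')
    if expected is None:
        return  # nothing to compare against
    actual = response.raw.tell()
    missing = int(expected) - actual
    if missing > 0:
        raise IOError(
            'incomplete read ({} bytes read, {} more expected)'.format(
                actual, missing
            )
        )
```

You can then call check_complete_read(response) right after every requests.get(...) and decide how to react (retry, abort, etc.) in one place.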

To verify, you can run the content-length.py HTTP server and send a request via client-with-check.py. The server is written so that it returns fewer bytes than stated in the Content-Length header of the response.

What About Compressed Responses?

Responses can be compressed. For example, a server may return a response with the Content-Encoding header set to gzip. This means that the body of the response is compressed with the DEFLATE algorithm (a combination of LZ77 and Huffman coding). When the requests library receives such a response, it automatically decompresses it. When you then check the length of response.content (the decompressed body of the response, in bytes), it will most probably differ from the length specified in the Content-Length header. This is why we did not use len(response.content) to obtain the actual length of the response in the above check. Instead, we have to use response.raw.tell(), which returns the actual number of bytes that were read from the wire (prior to decompression).
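You can see the size difference for yourself with the standard gzip module: the compressed length is what Content-Length would describe on the wire, while the decompressed length is what len(response.content) reports.

```python
import gzip

body = b'all work and no play makes jack a dull boy\n' * 100

# What would travel over the wire (and be counted by Content-Length):
compressed = gzip.compress(body)

# Highly repetitive data compresses well, so the two lengths differ a lot.
print(len(compressed), len(body))
```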

To verify, you can run the content-encoding-gzip.py HTTP server and send a request via client-with-check.py. The server is written so that it returns fewer bytes than stated in the Content-Length header of the response.

What About Responses With Transfer-Encoding: chunked?

Alternatively, the Content-Length header can be omitted and the Transfer-Encoding header set to chunked. This streaming transfer mechanism, available since HTTP 1.1, works by splitting the response into chunks. The body of the response then has the following form:

size of the first chunk
data of the first chunk
size of the second chunk
data of the second chunk
...

This has several advantages over Content-Length , including the ability to maintain a persistent HTTP connection for dynamically generated content whose complete size is not known in advance.
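The framing above can be sketched with a small encoder (my own illustration): chunk sizes are written in hexadecimal, each size and each data chunk is followed by CRLF, and a zero-length chunk terminates the body.

```python
def encode_chunked(chunks):
    """Encode a sequence of byte strings using HTTP/1.1 chunked transfer
    encoding: hex size, CRLF, data, CRLF per chunk, terminated by a
    zero-length chunk."""
    parts = []
    for chunk in chunks:
        parts.append('{:x}\r\n'.format(len(chunk)).encode('ascii'))
        parts.append(chunk + b'\r\n')
    parts.append(b'0\r\n\r\n')  # zero-length chunk ends the body
    return b''.join(parts)

print(encode_chunked([b'123456', b'7890']))
```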

How should we check whether we have received all the data when we are dealing with a chunked transfer without a Content-Length header? Luckily, in this case, the requests library works as expected. That is, if the server sends incomplete data, the library raises an exception:

http.client.IncompleteRead: IncompleteRead(6 bytes read, 4 more expected)

To verify, you can run the transfer-encoding-chunked.py HTTP server and send a request via client.py. The server is written so that it sends fewer bytes than the declared chunk size.

Final Recommendation

Always verify that the data that you receive are correct. Verifying that you have read the expected number of bytes is just the first step. For example, when downloading a file whose hash (e.g. SHA-256) is known, you should check that the hash of the downloaded file matches. Otherwise, you risk working with corrupted data, which may lead to nasty bugs.
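As an illustration, verifying a known SHA-256 digest with the standard hashlib module might look like this (the helper name is my own):

```python
import hashlib

def verify_sha256(data, expected_hex):
    """Return True if the SHA-256 digest of data matches expected_hex."""
    return hashlib.sha256(data).hexdigest() == expected_hex

# Compute the digest of some downloaded bytes:
digest = hashlib.sha256(b'123456').hexdigest()
print(digest)
```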

Complete Source Code

The complete source code of all the servers and clients is available on GitHub.

Discussion

Apart from comments below, you can also discuss this post at /r/Python and Hacker News.