The web has changed a lot since the ’90s, so in May 2015 the Internet Engineering Task Force (IETF) published a new version of the HTTP protocol. With Java 9, the developers also updated the HTTP API in the JDK and came up with an entirely new API for HTTP/2 and WebSocket. This new API is intended to replace the old HttpURLConnection API, which is as old as HTTP/1.1 itself.

Problems with HTTP/1.1

With HTTP/1.1, loading a page required many request-response cycles, which increased latency and the loading times of web pages. Additionally, there was the problem of head-of-line blocking, which means that the whole line of packets is held up by the first packet. Because of that, a common optimization was to open several TCP connections to the server, distribute the requests evenly across those connections, and use workarounds like image sprites. But these multiple TCP connections put additional load on the web servers and are quite inefficient, because TCP connections are expensive. The goal of HTTP/2 is to lower latency, loading times and server load while remaining backwards compatible with HTTP/1.1.

What is HTTP/2 about?

HTTP/2 introduces the concepts of streams and binary frames. HTTP/2 is no longer text-based like HTTP/1.1, but uses a binary format instead, which reduces the effort needed to parse messages. A stream can be thought of as a channel on a TCP connection, and you can have multiple streams on one TCP connection.

A binary frame, on the other hand, is the smallest unit of communication in HTTP/2. Request and response messages are split up and packed into several of these frames. A frame is assigned to a stream via a field in the frame's header that contains the id of a stream.

This makes it possible to multiplex multiple streams asynchronously over one TCP connection, which solves head-of-line blocking and is the main reason for the big improvement in performance. You won't need domain sharding anymore; you can send everything over one TCP connection to one server.

In our example, a GET request from the client is split into 3 pieces that are put into the frames G0, G1 and G2. These frames are assigned to stream 1 and sent over the TCP connection to the server. The server reads the 3 frames from the stream and puts them back together into the GET request. It then splits the response into 2 parts and sends them back via stream 1 as R0 and R1. While all this happens, unrelated data is exchanged in stream 2 over the same TCP connection.
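The reassembly step described above can be sketched in a few lines of plain Java. This is a conceptual model only: real HTTP/2 frames are binary and carry more header fields than just a stream id.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class MultiplexDemo {
    // Toy frame: just a stream id plus a chunk of the message.
    record Frame(int streamId, String chunk) {}

    public static void main(String[] args) {
        // Frames from two streams arrive interleaved on one "connection".
        List<Frame> wire = List.of(
                new Frame(1, "GET /he"), new Frame(2, "other"),
                new Frame(1, "llo HTT"), new Frame(1, "P/2"));

        // The receiver groups frames by stream id and reassembles
        // each message independently — no stream blocks another.
        Map<Integer, String> messages = wire.stream().collect(
                Collectors.groupingBy(Frame::streamId, TreeMap::new,
                        Collectors.mapping(Frame::chunk, Collectors.joining())));

        System.out.println(messages.get(1)); // prints: GET /hello HTTP/2
    }
}
```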

The second big new feature of HTTP/2 is server push, which makes workarounds like resource inlining obsolete. With this feature, the web server can place data in the client's browser cache before the client even requests it. The server then tells the client to fetch the data, but (of course) it is already in the browser cache. For example, if a client requests index.html, the server can now push style.css and script.js along with the response. This greatly reduces the number of request-response cycles, which in turn reduces latency and the loading time of a web page.

A big portion of the HTTP messages sent across the internet are very small, like header requests that check whether a resource has changed. The payload of these messages is so small that the HTTP header makes up a very large part of the package. To account for this, the third big feature of HTTP/2 is HTTP header compression (HPACK). It is based on the observation that the headers within one stream are very similar, so why send redundant information every time? Server and client now both cache a table with the header information and only send the information that has changed since the last message. Additionally, the header fields that are sent are Huffman-encoded.
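The idea behind the cached header table can be sketched like this. Note this is a toy model of the "send only what changed" principle; the real mechanism is HPACK, with indexed static/dynamic tables and Huffman coding, not a simple diff of maps.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class HeaderDeltaDemo {
    // Emit only the header fields that differ from what the peer
    // already has cached, then update the cache.
    static Map<String, String> emitDelta(Map<String, String> cached,
                                         Map<String, String> next) {
        Map<String, String> delta = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : next.entrySet()) {
            if (!e.getValue().equals(cached.get(e.getKey()))) {
                delta.put(e.getKey(), e.getValue());
            }
        }
        cached.putAll(next);
        return delta;
    }

    public static void main(String[] args) {
        Map<String, String> cache = new HashMap<>();
        Map<String, String> req1 = Map.of(":method", "GET",
                ":path", "/index.html", "user-agent", "demo");
        Map<String, String> req2 = Map.of(":method", "GET",
                ":path", "/style.css", "user-agent", "demo");

        System.out.println(emitDelta(cache, req1).size()); // 3: everything is new
        System.out.println(emitDelta(cache, req2).size()); // 1: only :path changed
    }
}
```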

Another feature of HTTP/2 is stream prioritization, which means you can now prioritize your messages by sending them over a stream with a higher priority. A prime example would be giving images a lower priority, so that HTML, styles and scripts are loaded first and the images afterwards.

You are also now able to cancel a request from the client side, even if the server has already started working on it.

To ensure that critical data can be encrypted, HTTP/2 comes, as expected, with full HTTPS/TLS support.

Despite all these changes, backwards-compatibility with HTTP/1.1 will be preserved.

What is the new HTTP/2 client coming with Java 9 about?

The new HTTP/2 client not only aims to provide full HTTP/2 support in the JDK, but also renews the API for HTTP/1.1. The developers stated that, given how much has changed since HTTP/1.1, it was easier to implement an entirely new API than to cram the new HTTP/2 features into the old API somehow. A main reason for this is that the old HTTP/1.1 API is as old as HTTP/1.1 itself.

At that time, nobody knew how the web would develop, so the old API was designed with multiple protocols in mind (HTTP, FTP, …); the new API focuses on HTTP only. That, and the occurrence of undocumented behavior under certain circumstances, made the old API hard to use, so many developers chose Netty or Jetty instead, which perform much better anyway. One of the goals of the new HTTP/2 API is therefore to be on par with or better than Netty/Jetty in performance and ease of use. To increase ease of use, the new API aims for a small API footprint while being less abstract. It is designed to cover about 80-90% of the daily use cases, including basic authentication.

Of course, the new API supports all the new features of HTTP/2, but tries to expose only those that are relevant for the user. It provides an event-based system for notifications, for example when a header or body is received, when an error occurs or when the server initiates a server push. This system runs asynchronously and uses the CompletableFuture class.
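The asynchronous, event-driven style is plain CompletableFuture chaining. Here is a minimal sketch that simulates a "response received" event and chains follow-up work onto it, without any networking:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncStyleDemo {
    public static void main(String[] args) {
        // Simulate an asynchronous "response received" event...
        CompletableFuture<String> response =
                CompletableFuture.supplyAsync(() -> "[200] Hello world.");

        // ...and register follow-up work that runs when it completes,
        // the same pattern the new API uses for its notifications.
        CompletableFuture<Integer> statusCode =
                response.thenApply(s -> Integer.parseInt(s.substring(1, 4)));

        System.out.println(statusCode.join()); // prints: 200
    }
}
```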

To lower the initial learning curve, the API also provides a simple HTTP client with blocking behavior.

Despite all these changes, the developers stated that there will be backwards compatibility with existing WARs.

What would it look like?

Okay, we have two examples for you: a simple, blocking HTTP client and a second, slightly more complex but asynchronous one. We used the server from the Vert.x article as our web server. You might know the first one from the article about JShell. Anyway, let's see it:

```java
HttpResponse resp = HttpRequest
    .create(new URI("http://127.0.0.1:8080/hello"))
    .GET()
    .response();
int statusCode = resp.statusCode();
String body = resp.body(HttpResponse.asString());
System.out.println("[" + statusCode + "] " + body);
```

As you can see, an HTTP request is created with the new fluent API by calling the static factory method create(URI) on the HttpRequest class. After that, we specify the method type by calling GET() and send the request with response(). This call blocks until the response has been received and stored in resp. From the response we can retrieve the HTTP status code as well as the body, the headers and other information. We are only interested in the body, which we get by calling the body() method on the HttpResponse object and passing it a body processor (here one that converts the body to a String). In our case, the output looks like:



[200] Hello world.
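For readers on a newer JDK: this API was later standardized as java.net.http in Java 11, where the same blocking call looks as follows. The tiny com.sun.net.httpserver server in this sketch merely stands in for the Vert.x server used in the article:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BlockingDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for the article's Vert.x server: answers GET /hello.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/hello", exchange -> {
            byte[] body = "Hello world.".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        try {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://127.0.0.1:8080/hello"))
                    .GET()
                    .build();
            // send() blocks until the response arrives, like response() above.
            HttpResponse<String> resp =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("[" + resp.statusCode() + "] " + resp.body());
        } finally {
            server.stop(0);
        }
    }
}
```

The finalized API split the request (HttpRequest) from the client that sends it (HttpClient), which the early Java 9 draft shown above still combined in one fluent chain.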



Now the slightly more complex example.

```java
CompletableFuture<HttpResponse> cResp = HttpRequest
    .create(new URI("http://127.0.0.1:8080/hello"))
    .GET()
    .responseAsync();
Thread.sleep(5);
if (cResp.isDone()) {
    HttpResponse resp = cResp.get();
    System.out.println("[" + resp.statusCode() + "] "
        + resp.body(HttpResponse.asString()));
} else {
    cResp.cancel(true);
    System.out.println("Too slow.");
}
```

As you can see, this time we call responseAsync() instead of response(). This returns a CompletableFuture.

After that, we wait 5 ms and check whether the future has completed yet. If so, we print the response; if not, we cancel the request.
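In the standardized java.net.http API (Java 11), the asynchronous variant uses sendAsync(). This sketch replaces the sleep-and-poll with a bounded get(timeout) on the CompletableFuture, again with a local stand-in server instead of the article's Vert.x server:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class AsyncDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for the article's Vert.x server.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/hello", exchange -> {
            byte[] body = "Hello world.".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        try {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://127.0.0.1:8080/hello"))
                    .GET()
                    .build();
            // sendAsync() returns immediately with a CompletableFuture.
            CompletableFuture<HttpResponse<String>> cResp =
                    client.sendAsync(request, HttpResponse.BodyHandlers.ofString());
            try {
                // Wait at most 500 ms for the response, then give up.
                HttpResponse<String> resp = cResp.get(500, TimeUnit.MILLISECONDS);
                System.out.println("[" + resp.statusCode() + "] " + resp.body());
            } catch (TimeoutException e) {
                cResp.cancel(true);
                System.out.println("Too slow.");
            }
        } finally {
            server.stop(0);
        }
    }
}
```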

