As part of the Windows 10 Technical Preview, Internet Explorer will offer HTTP/2 support, performance improvements to the Chakra JavaScript engine, and a top-level domains parsing algorithm based on publicsuffix.org.

HTTP/2 is a new standard from the Internet Engineering Task Force. It has two components: Hypertext Transfer Protocol version 2 and HPACK - Header Compression for HTTP/2.

HTTP/2

HTTP/2 differs from the original HTTP standard in five key ways:

- is binary, instead of textual
- is fully multiplexed, instead of ordered and blocking
- can therefore use one connection for parallelism
- uses header compression to reduce overhead
- allows servers to “push” responses proactively into client caches

By switching to a binary format, HTTP/2 is expected to significantly reduce the parsing complexity. The current standard requires “four different ways to parse a message” while the new version has only one.

While binary is usually more efficient than text, the real performance gains are expected to come from multiplexing. This is where multiple requests can share the same TCP connection. With this, one stalled request won’t block other requests from being honored.

For example, let’s say the page makes a costly REST call for some data and then requests a set of static images. Under the current model, the connection will be blocked while the web server looks up the data for the REST call. With the new model, the images can start streaming in immediately.
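The difference can be sketched with a toy timeline. This is not a real HTTP implementation; the request names and millisecond costs are illustrative assumptions, and the multiplexed model ignores bandwidth sharing between streams:

```python
# A slow REST call followed by three fast image requests, all on one connection.
requests = [("rest-call", 500), ("img1", 50), ("img2", 50), ("img3", 50)]

# HTTP/1.1-style ordered responses: each response waits for the previous one.
finish, blocking = 0, {}
for name, cost in requests:
    finish += cost
    blocking[name] = finish

# HTTP/2-style multiplexing: streams are independent, so each response
# arrives as soon as its own work is done.
multiplexed = {name: cost for name, cost in requests}

print(blocking["img3"])     # 650 - the image is stuck behind the REST call
print(multiplexed["img3"])  # 50  - the image streams in immediately
```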

Push responses are designed to reduce response times. If the web server can predict that the browser will need a specific set of data, it can proactively push it to the browser’s cache. An example of this would be sending the images, CSS, and JavaScript for a page even before the browser has fully rendered it.

HPACK

Header compression is another important performance concern for HTTP. According to the FAQ,

If you assume that a page has about 80 assets (which is conservative in today’s Web), and each request has 1400 bytes of headers (again, not uncommon, thanks to Cookies, Referer, etc.), it takes at least 7-8 round trips to get the headers out “on the wire.” That’s not counting response time - that’s just to get them out of the client.
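The FAQ’s arithmetic can be reproduced in a few lines. The initial congestion window and segment size below are assumptions (roughly the ten-segment window suggested by RFC 6928), not figures from the FAQ itself:

```python
ASSETS = 80              # requests per page (from the FAQ)
HEADER_BYTES = 1400      # uncompressed header bytes per request (from the FAQ)
INIT_CWND_SEGMENTS = 10  # typical initial congestion window (assumed)
MSS = 1460               # typical TCP maximum segment size (assumed)

total_header_bytes = ASSETS * HEADER_BYTES  # 112,000 bytes of headers
window_bytes = INIT_CWND_SEGMENTS * MSS     # 14,600 bytes per round trip

# Ignoring slow-start growth, roughly how many windows (round trips)
# it takes just to transmit the request headers:
round_trips = total_header_bytes / window_bytes
print(round(round_trips, 1))  # 7.7 - in line with the "7-8 round trips" claim
```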

Original proposals called for using the industry standard GZIP. However, that was found to be susceptible to attack (most notably the CRIME attack), even with TLS-protected communications. Continuing from the FAQ,

As a result, we could not use GZIP compression. Finding no other algorithms that were suitable for this use case as well as safe to use, we created a new, header-specific compression scheme that operates at a coarse granularity; since HTTP headers often don’t change between messages, this still gives reasonable compression efficiency, and is much safer.
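The intuition behind that coarse granularity can be shown with a toy delta encoder. This is not the real HPACK wire format (which uses static and dynamic index tables plus Huffman coding); it is a minimal sketch of the underlying idea that unchanged headers need not be resent:

```python
# Each side keeps a table of previously sent headers; a request only
# transmits the headers that differ from that table.
def encode(headers, table):
    delta = {k: v for k, v in headers.items() if table.get(k) != v}
    table.update(headers)
    return delta

table = {}
req1 = {":method": "GET", ":path": "/", "cookie": "id=42", "user-agent": "IE"}
req2 = {":method": "GET", ":path": "/app.css", "cookie": "id=42", "user-agent": "IE"}

delta1 = encode(req1, table)
delta2 = encode(req2, table)
print(len(delta1))  # 4 - the first request sends every header
print(delta2)       # {':path': '/app.css'} - only the changed header goes out
```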

Chakra JavaScript engine

The changes to Chakra include streamlining the execution pipeline, optimizations in the Just-in-Time compiler, and “enhancements to Chakra’s Garbage Collection subsystem”. No specifics on these changes are available at this time.

Top-Level Domain Parsing

The handling of top-level domains is surprisingly difficult. Browsers need to prevent websites from creating “supercookies” that span multiple domains owned by different organizations. But the rules are not straightforward. For example, you are allowed to set a cookie for domains such as “academy.university” but not for “academy.museum”. The first example is considered to be a registrable domain while the second is a “public suffix” that is shared by multiple registrable domains.

Some cases are even more complex. In this example from Japan, you can’t set cookies for any domain following this pattern “[domain].kawasaki.jp” except for “city.kawasaki.jp”.

In order to bring some sanity to the situation, an organization called the Public Suffix List has taken it upon itself to maintain a list of top-level domains and the registration rules that are associated with them.

In order to ensure all browsers and tools interpret the list in the same way, they also provide an algorithm and a matching set of test cases.
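The core of that matching can be sketched over a tiny hand-picked rule set. A real implementation would load the full list from publicsuffix.org and handle more cases; here, “*” matches exactly one label and “!” marks an exception that overrides a wildcard, as with “city.kawasaki.jp” above:

```python
# A tiny excerpt of public-suffix-style rules (illustrative subset only).
RULES = {"university", "museum", "academy.museum", "jp",
         "*.kawasaki.jp", "!city.kawasaki.jp"}

def public_suffix(domain):
    labels = domain.split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if "!" + candidate in RULES:
            # Exception rule: the public suffix is one label shorter.
            return ".".join(labels[i + 1:])
        if candidate in RULES or ".".join(["*"] + labels[i + 1:]) in RULES:
            return candidate  # longest (first) matching rule prevails
    return labels[-1]  # default rule: the rightmost label is a public suffix

def registrable(domain):
    # A site may set cookies for a domain only if it is strictly
    # below a public suffix, not a public suffix itself.
    suffix = public_suffix(domain)
    return domain != suffix and domain.endswith("." + suffix)

print(registrable("academy.university"))   # True  - cookies allowed
print(registrable("academy.museum"))       # False - itself a public suffix
print(registrable("foo.kawasaki.jp"))      # False - matched by *.kawasaki.jp
print(registrable("city.kawasaki.jp"))     # True  - the exception rule applies
```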