The Hypertext Transfer Protocol (HTTP), a simple, constrained, and ultimately boring application layer protocol, forms the foundation of the World Wide Web. In essence, HTTP enables the retrieval of network-connected resources across the web, and it has evolved over the decades to deliver a fast, secure, and rich medium for digital communication.

At Kinsta we’re addicted to optimizing the load times of the websites hosted on our platform, and we’ve released numerous guides on the topic before; just take a look at A Beginner’s Guide to Website Speed Optimization.

Since we’re always at the forefront of new technologies, we made sure the whole Kinsta website and admin area run on HTTP/2, and our new Google Cloud infrastructure supports HTTP/2 for all our clients. This extensive resource explains HTTP/2 for end users, developers, and businesses pursuing innovation. From the basics to more advanced topics, you’ll learn everything you need to know about HTTP/2.


What is HTTP/2?

HTTP was originally proposed by Tim Berners-Lee, the pioneer of the World Wide Web, who designed the application protocol with simplicity in mind to perform high-level data communication functions between web servers and clients.

The first documented version of HTTP was released in 1991 as HTTP/0.9, which later led to the official introduction and recognition of HTTP/1.0 in 1996. HTTP/1.1 followed in 1997 and has since received only minor iterative improvements.

In February 2015, the Internet Engineering Task Force (IETF) HTTP Working Group revised HTTP and developed the second major version of the application protocol in the form of HTTP/2. In May 2015, the HTTP/2 specification was officially standardized, building on Google’s HTTP-compatible SPDY protocol. The HTTP/2 vs SPDY comparison continues throughout this guide.

What is a Protocol?

Before the HTTP/2 vs HTTP/1.x debate, we need a short primer on the term protocol, used frequently throughout this resource. A protocol is a set of rules that govern data communication between clients (for example, web browsers used by internet users to request information) and servers (the machines containing the requested information).

Protocols usually consist of three main parts: header, payload, and footer. The header, placed before the payload, contains information such as source and destination addresses as well as other details (such as size and type) about the payload. The payload is the actual information transmitted using the protocol. The footer follows the payload and works as a control field that, along with the header, routes client-server requests to the intended recipients and ensures the payload data is transmitted free of errors.

The system is similar to the postal mail service. The letter (payload) is inserted into an envelope (header) with the destination address written on it, then sealed with glue and a postage stamp (footer) before it is dispatched. Transmitting digital information in the form of 0s and 1s isn’t quite that simple, though, and has demanded continual innovation to keep pace with the explosive growth of internet usage.
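As an illustration, the envelope analogy can be sketched as a toy framing scheme. This is not a real protocol; the field layout and the CRC “footer” are invented for this example:

```python
import json
import zlib

def frame_message(payload: bytes, dest: str) -> bytes:
    # Header: routing metadata (destination, payload size), length-prefixed.
    header = json.dumps({"dest": dest, "size": len(payload)}).encode()
    # Footer: a CRC-32 checksum acting as the error-detection control field.
    footer = zlib.crc32(payload).to_bytes(4, "big")
    return len(header).to_bytes(2, "big") + header + payload + footer

def unframe_message(frame: bytes):
    hlen = int.from_bytes(frame[:2], "big")
    header = json.loads(frame[2:2 + hlen])
    payload = frame[2 + hlen:2 + hlen + header["size"]]
    footer = frame[2 + hlen + header["size"]:]
    # Recompute the checksum to verify the payload arrived intact.
    assert zlib.crc32(payload).to_bytes(4, "big") == footer, "corrupted payload"
    return header, payload
```

A receiver that gets the framed bytes can recover both the metadata and the original payload, and detect corruption along the way.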

The HTTP protocol originally comprised two basic methods: GET, to request information from the server, and POST, to send data to the server. This simple, seemingly boring pair of commands to GET data and POST a response essentially formed the foundation for other network protocols as well. HTTP/2 is yet another step toward improving internet user experience and effectiveness, and implementing it enhances your online presence.
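For reference, this is roughly what a text-based HTTP/1.x GET request looks like on the wire; the host and path here are placeholders:

```python
# A complete HTTP/1.x request is plain text: a request line, headers,
# and a blank line marking the end of the header section.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"       # placeholder host
    "Connection: close\r\n"
    "\r\n"
)
```

The server parses this text line by line, which is exactly the kind of parsing HTTP/2 replaces with binary framing, as covered later in this guide.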

Goal of Creating HTTP/2

Since its inception in the early 1990s, HTTP has seen only a few major overhauls. The most recent version, HTTP/1.1, has served the web for over 15 years. In the current era of dynamic information updates, resource-intensive multimedia content, and an intense focus on web performance, older protocol technologies have fallen into the legacy category. These trends necessitated the significant changes HTTP/2 brings to improve the internet experience.

The primary goal of research and development for a new version of HTTP centered on three qualities rarely associated with a single network protocol: simplicity, high performance, and robustness, all without necessitating additional networking technologies. These goals are achieved by introducing capabilities that reduce latency in processing browser requests, such as multiplexing, compression, request prioritization, and server push.

Mechanisms such as flow control, upgrade, and error handling work as enhancements to the HTTP protocol, helping developers ensure high performance and resilience of web-based applications.

The collective system allows servers to respond efficiently with more content than the client originally requested, sparing the client from continuously requesting information until the website is fully loaded in the browser. For instance, the Server Push capability in HTTP/2 allows servers to respond with a page’s full contents, apart from information already available in the browser cache. Efficient compression of HTTP headers minimizes protocol overhead to improve performance with each browser request and server response.

HTTP/2 changes are designed to maintain interoperability and compatibility with HTTP/1.1. The advantages of HTTP/2 are expected to grow over time based on real-world experience, and its ability to address performance issues in real-world comparisons with HTTP/1.1 will greatly shape its evolution over the long term.

“…we are not replacing all of HTTP – the methods, status codes, and most of the headers you use today will be the same. Instead, we’re re-defining how it gets used “on the wire” so it’s more efficient, and so that it is more gentle to the internet itself…” – Mark Nottingham, Chair of the IETF HTTP Working Group and member of the W3C TAG. (Source)

It is important to note that the new HTTP version is an extension of its predecessor and is not expected to replace HTTP/1.1 anytime soon. HTTP/2 implementation will not enable automatic support for all encryption types available with HTTP/1.1, but it definitely opens the door to better alternatives and additional encryption compatibility updates in the near future. In any case, feature comparisons such as HTTP/2 vs HTTP/1.x and SPDY vs HTTP/2 show the latest application protocol as the winner in performance, security, and reliability alike.

What Was Wrong With HTTP1.1?

HTTP/1.1 was limited to processing only one outstanding request per TCP connection (pipelining was specified but rarely deployed in practice), forcing browsers to open multiple TCP connections to process multiple requests simultaneously.

However, opening too many TCP connections in parallel leads to TCP congestion and an unfair monopolization of network resources. Web browsers using multiple connections to process additional requests occupy a greater share of the available network resources, degrading network performance for other users.

Issuing multiple requests from the browser also causes data duplication on the wire, which in turn requires additional protocol overhead to extract the desired information free of errors at the end nodes.

The internet industry was naturally forced to work around these constraints with practices such as domain sharding, concatenation, data inlining, and spriting, among others. Ineffective use of the underlying TCP connections with HTTP/1.1 also leads to poor resource prioritization, causing steep performance degradation as web applications grow in complexity, functionality, and scope.

The web has evolved well beyond the capacity of legacy HTTP-based networking technologies. The core qualities of HTTP/1.1, developed over a decade ago, have opened the door to several embarrassing performance and security loopholes.

The cookie hack, for instance, allows cybercriminals to reuse a previous working session to hijack an account, because HTTP/1.1 provides no session endpoint-identity facilities. While similar security concerns will continue to haunt HTTP/2, the new application protocol is designed with better security capabilities, such as the improved implementation of new TLS features.

HTTP/2 Feature Upgrades

Multiplexed streams

A bidirectional sequence of frames exchanged between the server and client over the HTTP/2 protocol is known as a “stream.” Earlier iterations of the HTTP protocol were capable of transmitting only one stream at a time, with some delay between each stream transmission.

Receiving tons of media content via individual streams sent one by one is both inefficient and resource-consuming. HTTP/2 introduces a new binary framing layer to address these concerns.

This layer allows the client and server to break the HTTP payload into small, independent, and manageable frames that can be interleaved. This information is then reassembled at the other end.

Binary frame formats enable the exchange of multiple, concurrently open, independent bidirectional streams without latency between successive streams. This approach presents an array of HTTP/2 benefits, explained below:

The parallel multiplexed requests and responses do not block each other.

A single TCP connection is used to ensure effective network resource utilization despite transmitting multiple data streams.

No need to apply unnecessary optimization hacks – such as image sprites, concatenation and domain sharding, among others – that compromise other areas of network performance.

Reduced latency, faster web performance, better search engine rankings.

Reduced OpEx and CapEx in running network and IT resources.

With this capability, data packets from multiple streams are essentially mixed and transmitted over a single TCP connection. These packets are then split apart at the receiving end and presented as individual data streams. Transmitting multiple parallel requests with HTTP/1.1 or earlier required multiple TCP connections, which inherently bottlenecked overall network performance despite transmitting more data streams at faster rates.
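The demultiplexing step can be sketched in a few lines. The stream numbers and payload chunks below are made up for illustration; the point is that interleaved frames are regrouped by stream identifier at the receiving end:

```python
# Frames as they arrive interleaved on one connection: (stream_id, chunk).
frames = [(1, b"<html>"), (3, b"\x89PNG"), (1, b"</html>"), (5, b"body{}")]

# Demultiplex: group chunks back into per-stream byte sequences.
streams = {}
for stream_id, chunk in frames:
    streams[stream_id] = streams.get(stream_id, b"") + chunk

# streams now maps each stream id to its reassembled payload.
```

Note how stream 1’s two frames are separated by frames from other streams on the wire, yet reassemble cleanly, which is exactly why one slow response no longer blocks the others.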


HTTP/2 Server Push

This capability allows the server to send additional cacheable information to the client that wasn’t requested but is anticipated in future requests. For example, if the client requests resource X and it is understood that resource Y is referenced in the requested file, the server can choose to push Y along with X instead of waiting for a corresponding client request.

The client places the pushed resource Y into its cache for future use. This mechanism saves a request-response round trip and reduces network latency. Server Push was originally introduced in Google’s SPDY protocol. The server initiates a push by sending a PUSH_PROMISE frame carrying request pseudo-headers such as :path, and pushed resources must be cacheable. The client must explicitly allow the server to push resources via its connection settings, and it can terminate a pushed stream by resetting it with its stream identifier.

Server Push proactively updates or invalidates the client’s cache, which is why it is also known as “Cache Push.” The long-term concern centers on the server’s ability to avoid pushing resources the client does not actually want.

HTTP/2 implementation delivers significant performance gains for pushed resources, with other benefits of HTTP/2 explained below:

The client saves pushed resources in the cache.

The client can reuse these cached resources across different pages.

The server can multiplex pushed resources along with originally requested information within the same TCP connection.

The server can prioritize pushed resources – a key performance differentiator in HTTP/2 vs HTTP1.

The client can decline pushed resources to maintain an effective repository of cached resources or disable Server Push entirely.

The client can also limit the number of pushed streams multiplexed concurrently.

Similar push-like effects are already achieved with suboptimal techniques such as inlining server responses, whereas Server Push presents a protocol-level solution that avoids the complexity of optimization hacks layered on top of the application protocol itself.

HTTP/2 multiplexes and prioritizes the pushed data stream to ensure better transmission performance, as with other request-response data streams. As a built-in security mechanism, the server must be authoritative for the resources it pushes.
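As a rough illustration of the server-side decision, here is a hypothetical push policy. The paths and the mapping are invented for this sketch; real servers typically derive such mappings from configuration or Link headers:

```python
# Hypothetical mapping from a requested page to the sub-resources the
# server knows that page references and may therefore push.
PUSH_MAP = {
    "/index.html": ["/styles.css", "/app.js"],
}

def resources_to_send(path, push_enabled=True):
    # The client can disable Server Push entirely, in which case only
    # the explicitly requested resource is returned.
    pushed = PUSH_MAP.get(path, []) if push_enabled else []
    return [path] + pushed
```

A request for /index.html would yield the page plus its stylesheet and script in one round trip, while a client that has opted out of push receives only the page itself.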


Binary Protocols

The latest HTTP version has evolved significantly in its capabilities, most notably by transforming from a text protocol into a binary protocol. HTTP/1.x processed text commands to complete request-response cycles; HTTP/2 uses binary commands (1s and 0s) to execute the same tasks. This attribute eases complications with framing and simplifies the implementation of commands that were previously confusingly intermixed with text and optional whitespace.


Although it takes more effort for a human to read binary than text commands, it is easier for the network to generate and parse binary frames. The actual semantics remain unchanged.

Browsers using HTTP/2 convert the same text commands into binary before transmitting them over the network. The binary framing layer is not backward compatible with HTTP/1.x clients and servers, and it is a key enabler of the significant performance benefits over SPDY and HTTP/1.x. Binary framing delivers key business advantages for internet companies and online businesses, with benefits of HTTP/2 explained below:

Low overhead in parsing data – a critical value proposition in HTTP/2 vs HTTP1.

Less prone to errors.

Lighter network footprint.

Effective network resource utilization.

Eliminates security concerns associated with the textual nature of HTTP/1.x, such as response splitting attacks.

Enables other HTTP/2 capabilities, including compression, multiplexing, prioritization, flow control and effective handling of TLS.

Compact representation of commands for easier processing and implementation.

Efficient and robust in terms of processing of data between client and server.

Reduced network latency and improved throughput.
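To make the binary framing concrete, here is a minimal sketch of packing and unpacking the 9-byte HTTP/2 frame header defined in RFC 7540: a 24-bit payload length, an 8-bit frame type, an 8-bit flags field, and a 31-bit stream identifier. The values used below are illustrative:

```python
import struct

def build_frame_header(length, frame_type, flags, stream_id):
    # 24-bit length split into a high byte and a 16-bit low part,
    # then type, flags, and the 31-bit stream id (top bit reserved).
    return struct.pack(">BHBBI",
                       (length >> 16) & 0xFF, length & 0xFFFF,
                       frame_type, flags, stream_id & 0x7FFFFFFF)

def parse_frame_header(header):
    hi, lo, frame_type, flags, stream_id = struct.unpack(">BHBBI", header)
    return ((hi << 16) | lo, frame_type, flags, stream_id & 0x7FFFFFFF)
```

Because every frame starts with this fixed-size header, a parser always knows exactly how many bytes to read next, with no scanning for line breaks or optional whitespace as in HTTP/1.x.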

Stream prioritization

HTTP/2 implementation allows the client to assign preferences to particular data streams. Although the server is not bound to follow these instructions from the client, the mechanism allows the server to optimize network resource allocation based on end-user requirements.

Stream prioritization works with dependencies and weights assigned to each stream. Each stream can be made dependent on another stream, and dependent streams are assigned a weight between 1 and 256. The details of stream prioritization mechanisms are still debated.
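A simple sketch of how weights could translate into resource shares among sibling streams. This is a simplified model of the idea: real servers schedule frames rather than dividing raw bandwidth, and the stream ids and weights below are invented:

```python
# Sibling streams share available capacity in proportion to their
# weights (1-256), per the HTTP/2 priority scheme.
def allocate_bandwidth(capacity, weights):
    total = sum(weights.values())
    return {sid: capacity * w / total for sid, w in weights.items()}

# Stream 1 (weight 32) gets twice the share of streams 3 and 5 (weight 16).
shares = allocate_bandwidth(100, {1: 32, 3: 16, 5: 16})
```

Under this model, doubling a stream’s weight doubles its share relative to its siblings, which is how a browser can hint that the HTML and CSS matter more than, say, a below-the-fold image.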

In the real world, however, the server rarely has control over resources such as CPU and database connections, and implementation complexity itself prevents servers from accommodating stream priority requests. Research and development in this area is particularly important for the long-term success of HTTP/2, since the protocol is capable of processing multiple data streams within a single TCP connection.

This capability can lead to the simultaneous arrival of server requests that actually differ in priority from an end-user perspective. Holding off data stream processing at random undermines the efficiencies and end-user experience promised by HTTP/2. At the same time, an intelligent and widely adopted stream prioritization mechanism presents the following benefits of HTTP/2:

Effective network resource utilization.

Reduced time to deliver primary content requests.

Improved page load speed and end-user experience.

Optimized data communication between client and server.

Reduced negative effect of network latency concerns.

Stateful Header Compression

Delivering a high-end web user experience requires websites rich in content and graphics. The HTTP application protocol is stateless, which means each client request must include as much information as the server needs to perform the desired operation. The data streams therefore carry many repetitive frames of information, so that the server itself does not have to store information from previous client requests.

In the case of websites serving media-rich content, clients push multiple near-identical header frames, leading to latency and unnecessary consumption of limited network resources. A prioritized mix of data streams cannot achieve the desired standards of parallelism without optimizing this mechanism.

HTTP/2 implementation addresses these concerns with the ability to compress the large number of redundant header frames. It uses the HPACK specification as a simple and secure approach to header compression. Both client and server maintain a list of headers used in previous client-server exchanges.

HPACK compresses each header value before it is transferred to the server, which then looks up the encoded information in the list of previously transferred header values to reconstruct the full header information. HPACK header compression presents immense performance advantages, including some benefits of HTTP/2 explained below:

Effective stream prioritization.

Effective utilization of multiplexing mechanisms.

Reduced resource overhead – one of the earliest areas of concerns in debates on HTTP/2 vs HTTP1 and HTTP/2 vs SPDY.

Encodes large and commonly used headers alike, eliminating the need to send the entire header frame itself. The individual transfer size of each data stream shrinks rapidly.

Not vulnerable to compression-based security attacks, such as CRIME, that exploit data streams with compressed headers.
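The core indexing idea behind HPACK can be sketched as follows. This toy codec is not real HPACK (it omits the static table, Huffman coding, and table eviction), but it shows how headers repeated across requests shrink to small indices once both peers have seen them:

```python
class HeaderCodec:
    # Both peers mirror a dynamic table of header pairs; a pair already
    # in the table is sent as its integer index instead of a literal.
    def __init__(self):
        self.table = []

    def encode(self, headers):
        out = []
        for pair in headers:
            if pair in self.table:
                out.append(self.table.index(pair))  # small index on the wire
            else:
                self.table.append(pair)
                out.append(pair)                    # full literal, sent once
        return out

    def decode(self, wire):
        headers = []
        for item in wire:
            if isinstance(item, int):
                headers.append(self.table[item])
            else:
                self.table.append(item)             # mirror the sender's table
                headers.append(item)
        return headers
```

The first request pays the full cost of its literals; a second request with the same headers travels as a handful of integers, which is why HPACK pays off most on media-rich pages issuing many near-identical requests.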

Similarities With HTTP1.x and SPDY

The underlying application semantics of HTTP, including status codes, URIs, methods, and header fields, remain the same in HTTP/2. HTTP/2 is based on SPDY, Google’s alternative to HTTP/1.x. The real differences lie in the mechanisms used to process client-server requests. The following comparison identifies a few areas of similarity and improvement among HTTP/1.x, SPDY, and HTTP/2:

HTTP/1.x: SSL not required but recommended. Slow encryption. One client-server request per TCP connection. No header compression. No stream prioritization.

SPDY: SSL required. Fast encryption. Multiple client-server requests per TCP connection, occurring on a single host at a time. Header compression introduced. Stream prioritization introduced.

HTTP/2: SSL not required but recommended. Even faster encryption. Multi-host multiplexing: multiple requests across multiple hosts at a single instant. Header compression using improved algorithms that improve performance as well as security. Improved stream prioritization mechanisms.

How Does HTTP/2 Work With HTTPS?

HTTPS is used to establish an ultra-secure network connecting computers, machines, and servers to process sensitive business and consumer information. Banks processing financial transactions and healthcare institutions maintaining patient records are prime targets for cybercriminals. HTTPS works as an effective layer against persistent cybercrime threats, although it is not the only security deployment used to ward off sophisticated cyber-attacks on high-value corporate networks.

HTTP/2 browser support includes HTTPS encryption and actually complements the overall security performance of HTTPS deployments. Features such as fewer TLS handshakes, low resource consumption on both the client and server sides, and improved capabilities for reusing existing web sessions, while eliminating vulnerabilities associated with HTTP/1.x, present HTTP/2 as a key enabler of secure digital communication in sensitive network environments.

HTTPS is not limited to high-profile organizations; cybersecurity is just as valuable to online business owners, casual bloggers, e-commerce merchants, and even social media users. In practice, browsers support HTTP/2 only over encrypted connections using a recent, secure TLS version, so all online communities, business owners, and webmasters should ensure their websites use HTTPS by default. The usual process to set up HTTPS includes choosing a web hosting plan, purchasing, activating, and installing a security certificate, and finally updating the website to use HTTPS.
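On the wire, a client advertises HTTP/2 during the TLS handshake using Application-Layer Protocol Negotiation (ALPN). A minimal sketch using Python’s standard ssl module follows; no connection is actually made here, and the fallback ordering shown is the conventional one:

```python
import ssl

# Offer HTTP/2 ("h2") first, falling back to HTTP/1.1 if the server
# does not support it. Negotiation happens inside the TLS handshake.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])

# After wrapping a socket with this context and completing the handshake
# against a real server, tls_sock.selected_alpn_protocol() would return
# "h2" when the server speaks HTTP/2, or "http/1.1" otherwise.
```

Because the protocol choice rides along with the TLS handshake, upgrading to HTTP/2 adds no extra round trips, one of the reasons HTTP/2 and HTTPS pair so naturally.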

The Main Benefits of HTTP/2