In 2012 a call went out for proposals for a new version of HTTP. This was highly significant given the impact HTTP has on our daily lives; for most of us, our current jobs and livelihoods would likely not exist without it. The call for proposals heralded an exciting new era for the Internet community, one that promised to change and improve that venerable protocol.
The reason for change? Back in 1999, when the HTTP/1.1 specification was published, websites comprised some text, a few images, perhaps a banner ad. It was simple, and we were connecting to the familiar tune of a 56k modem. Fast forward 16 years (an eternity in Internet time) and everything has changed, not least our expectations. If a page is not instant, it's too slow. On top of that, it should be responsive, render perfectly on whatever device we are using at that moment, and deliver a rich experience. And we want all of that to work over a wireless connection on tiny smartphones and tablets, anywhere in the world. That's a lot to ask of something designed almost two decades ago.
The goals for HTTP/2 were simple: to improve the performance of HTTP by targeting the way the protocol is used today. In other words, a means of loading websites even more quickly.
Three years on and with many hours of work invested, the IETF has approved the HTTP/2 standard and leading browsers are supporting it. Anyone using up-to-date Firefox or Chrome, or running Internet Explorer on an early release of Windows 10, will probably have been using HTTP/2 for the past couple of months without realising it.
What is it?
HTTP/2 has a host of features to help address today's Web usage patterns. The top features are:
- Multiplexing
- Header compression
- Dependencies and prioritisation
- Server push
Multiplexing for HTTP is a means of requesting and receiving more than one web element at a time. It is the cure for the head-of-line blocking that is inherent in HTTP/1.1.
Each request from the client must wait until the server's response to the previous request arrives, which adds up given that an average web page has around 100 objects. Any of these requests could stall for a variety of reasons, delaying the whole page download. An HTTP/1.1 browser therefore opens multiple connections to a server to achieve some semblance of parallelisation, but that approach has its own problems and still does not completely fix head-of-line blocking.
HTTP/2 is a binary framed protocol. This means that requests and responses are broken up into chunks called frames that have meta information that identifies what request/response they are associated with. This allows requests and responses for multiple objects to overlap on the same connection without causing confusion, and they can be received in whatever order the server can respond with. So, for example, a first request might take longer to complete but it won’t hold up the delivery of any subsequent objects. The result: faster page load and render times.
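The framing idea can be illustrated with a toy sketch. This is not the actual HTTP/2 wire format (which RFC 7540 defines precisely); it simply shows how tagging each chunk with a stream ID lets frames from different responses interleave on one connection and still be reassembled correctly:

```python
from collections import defaultdict

def reassemble(frames):
    """Group interleaved (stream_id, payload) frames back into streams.

    Each frame carries the ID of the request/response stream it belongs
    to, so arrival order no longer matters.
    """
    streams = defaultdict(list)
    for stream_id, payload in frames:
        streams[stream_id].append(payload)
    return {sid: b"".join(parts) for sid, parts in streams.items()}

# Frames for two responses arrive interleaved on the same connection:
frames = [
    (1, b"<html>"), (3, b"body{"), (1, b"</html>"), (3, b"}"),
]
print(reassemble(frames))  # {1: b'<html></html>', 3: b'body{}'}
```

A slow stream (say, stream 1 waiting on a database) simply yields the connection to other streams' frames in the meantime, which is exactly why one response can no longer block the rest.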
Headers, the meta information the browser sends along with a request to tell the server what it wants and can accept, were added as part of HTTP/1.1 and were originally not that large. A browser, for example, will use a header to indicate to a server that it can handle gzip compression or a WebP image. Headers are also where cookies are communicated, and with the recent increase in their use and complexity these can get big. One characteristic of headers is that they do not change much between requests, yet due to the stateless nature of HTTP/1.1, a browser must still advertise support for a given file format or language on every request. This creates many redundant bytes.
HTTP/2 helps to solve this problem. Using a combination of lookup tables and Huffman encoding, it can reduce the number of header bytes sent in a request to nearly zero for headers that repeat across requests. Over the length of an average web session, compression rates above 90% are not uncommon. On the response side this doesn't have a big impact for an average page, but on the request side the results are significant. Even a modest page with 75 objects and an average header size of just 500 bytes might take the browser four TCP round trips just to request them all. With the same parameters and 90% compression under HTTP/2, a browser can send all the requests in a single round trip.
Dependencies and prioritisation
Multiplexing and header compression will make a significant difference, but they also create a new challenge. Modern browsers are sophisticated: they use pre-loaders to ensure they request the most important resources first. For example, the CSS is critical to determining the page layout, but a logo at the foot of the page is not. If, in the new model, a browser simply requests everything at once and lets the server return objects as quickly as possible, page performance can ironically get worse: although everything may be faster overall, the objects critical for rendering are not necessarily reaching the browser first. Rather than push the problem onto the browser, the designers built the solution into the protocol. By telling the server which objects depend on which others, and listing their priorities, the browser lets the server make certain the critical data is delivered right away.
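The dependency idea can be sketched in a few lines. This is an illustration, not HTTP/2's actual priority scheme (which uses stream dependencies and weights): each object names the object it depends on, and the server serves objects closer to the root of the tree first. The file names are made up for the example:

```python
deps = {                      # object -> the object it depends on
    "style.css": None,        # critical: depends on nothing
    "page.html": None,
    "app.js": "style.css",
    "footer-logo.png": "app.js",
}

def depth(obj):
    """Distance from the root of the dependency tree (0 = most critical)."""
    d = 0
    while deps[obj] is not None:
        obj, d = deps[obj], d + 1
    return d

# Serve shallower (more critical) objects first.
order = sorted(deps, key=depth)
print(order)  # ['style.css', 'page.html', 'app.js', 'footer-logo.png']
```

The footer logo sits deepest in the tree, so it is delivered last, which is exactly what the browser's pre-loader would have wanted anyway.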
One way to address the round-trip latency of an HTTP request and response is for the server to send the browser an object before it is asked for, and this is where server push comes in. Superficially the advantage is obvious: instant page delivery even in the worst conditions. But in order to push the correct objects without wasting valuable bandwidth, the server needs to know what the user is probably going to need next, and what the state of the browser cache is. This is tricky, which is why general applications of push don't exist today even in supporting protocols such as SPDY. While HTTP/2 provides the tools for server push today, I am sure we will see some innovative uses over the coming years.
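A minimal sketch of the push decision makes the difficulty concrete. The `client_cache` parameter here is a stand-in: a real server cannot see the browser's cache directly and must infer its state (for instance from cookies or cache digests), which is precisely the hard part described above. All names in this snippet are illustrative:

```python
def assets_to_push(requested, push_map, client_cache):
    """Push only the assets mapped to this page that the client
    (we believe) does not already have cached."""
    candidates = push_map.get(requested, [])
    return [a for a in candidates if a not in client_cache]

push_map = {"/index.html": ["/style.css", "/app.js"]}

# The client already has app.js, so only the stylesheet is pushed:
print(assets_to_push("/index.html", push_map, client_cache={"/app.js"}))
# ['/style.css']
```

Guess the cache state wrong and the server wastes bandwidth pushing bytes the browser throws away, which is why push needs care rather than enthusiasm.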
And for the everyday user?
No one will have to change their website or applications to ensure they continue to work properly. Not only will application code and HTTP APIs continue to work uninterrupted, but applications are also likely to improve performance and consume fewer resources on both client and server.
As HTTP/2 becomes more prevalent, organisations looking to benefit from its performance and security features should start thinking about how to invest in order to fully capitalise on these new capabilities. Such considerations include:
Encrypting: Applications running over HTTP/2 are likely to experience improvements in performance over secure connections. This is an important consideration for companies contemplating the move to TLS.
Optimising the TCP layer: Applications should be designed with a TCP layer that accounts for the switch from multiple TCP connections to a single long-lived one, especially when adjusting the congestion window in response to packet loss.
Undoing HTTP/1.1 best practices: Many best practices for applications delivered over HTTP/1.1 (such as domain sharding, image spriting, resource inlining and concatenation) are not only unnecessary when delivering over HTTP/2; in some cases they may actually hurt performance.
Deciding what and when to push: Applications designed to take advantage of the new server push capabilities in HTTP/2 must be carefully designed to balance performance and utility.
It is worth investing time and effort into addressing these and additional challenges, including optimising differently for HTTP/1.1 vs. HTTP/2 connections as browsers and other clients gradually transition over the next few years. This is, after all, the future of the web.
Words: Michael Gooding
Michael Gooding is web performance optimisation evangelist, Akamai Technologies EMEA. Read more about HTTP/2 in issue 271 of net magazine – in the shops this Thursday!