Earth Notes: On CDNs, HTTP/2, HTTPS: Performance and Carbon Footprint (2017)

Updated 2024-12-18.
Carbon cutting and better UX with HTTP/2 and Content Delivery Networks?
I am trying to understand if moving some of my (lightly loaded and fairly static) Web sites into cloud CDNs can improve perceived performance. Should I make use of features such as HTTP/2, and am I likely to be making a difference to my total carbon impact in the process?

Ideally, I can make users happy with faster sites, and also reduce my carbon footprint through improved efficiency.

I'm nowhere near any bandwidth or capacity limits, nor have any of my sites been attacked (more than the usual background), so this is in the spirit of scientific enquiry!

Most of my views are single page (eg via search) and from the same country (UK), thus low latency/RTT (Round-Trip Time between the user and my servers).

First View, Empty Cache

Improving user experience is therefore quite a lot about reducing the "first page view" page load time of a new user with an empty cache.

Speedup may also help search engine optimisation (yes, the dreaded SEO) ie help my pages to be spidered and returned in search results!

(My general approach remains to cache like it was going out of fashion, but optimise for the first hit and assume near 100% bounce rate anyway!)

Google's Mobile Analysis in PageSpeed Insights suggests that above-the-fold (ATF) content should render in no more than 1s.

Though new and shiny and fashionable, the pros and cons of HTTP/2 seem quite subtle for smaller sites, with findings such as the following.

It seems that for lightweight sites near their end users (eg all within one country) neither CDNs nor HTTP/2 seem likely to help with either speed or bandwidth (nor likely cost, energy and thus carbon): see below.

Note that there are some things that can be done which are pretty much always a win for performance and carbon, such as lossless reduction of HTML/JavaScript/CSS (minifying) and images, though sometimes there is a conflict with maintainability of the site.

Target

First, I want to measure whether HTTP/2 helps perceived site performance at all, given the nature of the site and users, and in spite of any https/TLS costs, simply because of better use of TCP (eg a single connection). Time to first render, and time for essential above-the-fold content to be 'visually complete', are probably good candidate metrics.
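
WebPageTest measures render metrics properly, but for quick repeated sampling of the simpler numbers a small script suffices. Below is a minimal Python sketch for time to first byte over plain HTTP (standard library only; the hostname is this site's, and the timings are far cruder than WebPageTest's):

    import http.client
    import time

    def ttfb(host, path="/", port=80):
        """Rough (connect, first-byte) times in seconds for one plain-HTTP fetch."""
        t0 = time.monotonic()
        conn = http.client.HTTPConnection(host, port, timeout=10)
        conn.connect()                        # DNS lookup plus TCP connect
        t_connect = time.monotonic() - t0
        conn.request("GET", path)
        resp = conn.getresponse()             # status line and headers received
        resp.read(1)                          # wait for the first body byte
        t_first_byte = time.monotonic() - t0
        conn.close()
        return t_connect, t_first_byte

    print(ttfb("www.earth.org.uk"))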

Second, I want to estimate the nominal carbon cost or saving from offloading some load onto highly-tuned CDN engines in the cloud. Reduced travel distance for traffic, and reduced numbers of connections, packets and bytes sent, might all be a reasonable start.

Fossil Site

For this test I have ab/used a 'fossil' site that I use for one very narrow purpose (remembering which agents I have used for contracting in London and sharing that list with others).

It is basically a single relatively small HTML (~10kB) page with three small decorative images, served unencrypted over HTTP/1.1 from my Raspberry Pi 2 running Apache 2.2, across a nice fat (~10Mbps outbound) FTTC (Fibre To The Cabinet) Internet link.

Traffic is tiny, and most of it is bots of some sort!

Typical human users are from the UK (the domain is .co.uk).

When I use the magnificent WebPageTest to fetch a broadly similar page from the same server (this page in fact, while being written) to a UK desktop over 'DSL' (London, UK - EC2 - Chrome - DSL; 1.5Mbps/384kbps 50ms RTT), fetching twice, each time with a fresh browser cache, to simulate a typical first visit from a new user but with any intermediate hardware warmed up, I get for run 2:

First View (Run 2):

    Load Time:          0.417s
    First Byte:         0.247s
    Start Render:       0.386s
    Speed Index:        580
    Interactive (beta): > 2.943s
    Document Complete:  0.417s, 3 requests, 7 KB in
    Fully Loaded:       0.532s, 5 requests, 11 KB in
    Cost:               $----

That's lots of 'A' scores. The initial (HTML) connection details show a time to first byte of ~100ms and, together with the table above, a visually complete document in well under a second, which is all very good!

A couple of (HTTP/1.1) connections are opened, the second speculatively by the browser, the first transferring 3 objects and the second used for 2.

[connection chart by WebPageTest]

So this is a case where HTTP/1.1 is performing quite well.

The (fossil, and this peer) site content is entirely benign so there isn't a very strong case for encryption, and given the very low traffic any effort spent on managing https is arguably wasted. Opinions vary.

Testing HTTP/2 Easily: Cloudflare

Testing HTTP/2 with mainline browsers requires a secure connection in practice, because there are a lot of broken HTTP/1.x proxies out there. My current RPi2 installation is such that I'd have to reinstall the OS from scratch to get the Apache 2.4 and OpenSSL and other support needed: apt-get dist-upgrade is not going to get me there.
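
As an aside, whether a server will actually negotiate HTTP/2 can be checked from a script by offering ALPN protocol identifiers during the TLS handshake; here is a minimal Python sketch (standard library only; the hostname is purely illustrative):

    import socket
    import ssl

    def negotiated(host, port=443):
        """Offer h2 and http/1.1 via ALPN and return what the server selects."""
        ctx = ssl.create_default_context()
        ctx.set_alpn_protocols(["h2", "http/1.1"])
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.selected_alpn_protocol()  # eg 'h2', 'http/1.1' or None

    print(negotiated("www.cloudflare.com"))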

I do use some CDN tech with another of my Web sites, in a model where I push heavily-used static site furniture (thumbnails) in advance to the CDN for a non-static site, but that required significant coding effort and even breaking out the corporate credit card. Too heavyweight for a quick feasibility test...

I found out that Cloudflare offers a reverse-proxy model, that it can do this reverse proxying for existing non-secure sites, and that there is a free tier for piddling sites such as mine. And Cloudflare creates and manages all the relevant certificates too. Hurrah!

The main effort was switching the DNS delegation for the domain to Cloudflare and waiting for old records to time out, but while I was having fun I tidied up the HTML, shrank the (minor) site graphics with on- and off-line tools (TinyPNG and OptiPNG), and turned on some extra Cloudflare optimisations suitable for my very static site (years between updates).

I satisfied myself that everything worked as I wanted, and then stepped back to test things more systematically.

Blank Page HTTP/1.1 vs HTTP/2

A first really basic test is to create a zero-length HTML file and try to load it through the HTTP/1.1 and HTTPS + HTTP/2 URLs to compare speed, number of connections, etc.

Cloudflare's "Cache Everything" option was turned on, relevant to this test, so that the (empty) HTML page would be served directly from Cloudflare, without trying to go back to my server. I also have long cache lifetimes (~1 month) in Cloudflare and in all the Expires/Cache-Control headers, to ensure that the first visit is fast (served from the CDN) and subsequent visits faster still, since the browser should be able to retain everything in its cache.
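
Whether a particular fetch was actually served from the edge cache can be verified from the response headers, since Cloudflare reports its cache status in a CF-Cache-Status header (HIT, MISS, etc). A quick Python sketch, with the URL a hypothetical stand-in for the test page:

    import urllib.request

    # Inspect caching-related response headers for a single fetch.
    req = urllib.request.Request("https://example.co.uk/empty.html", method="HEAD")
    with urllib.request.urlopen(req) as resp:
        for name in ("CF-Cache-Status", "Cache-Control", "Expires", "Age"):
            print(name, "=", resp.headers.get(name))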

For HTTP/1.1, time to first byte is typically 185ms, with DNS very similar to the non-CDN case at 60ms. In this case a second, speculative, connection was opened by the browser, and in the end was used to attempt to fetch the site favicon.ico; but there isn't one, so this averaged 0.5 useful objects per TCP connection.

Here is more detail on one sample primary connection:

The DNS time being similar to that for my own site, with DNS managed on my own primaries and secondaries, suggests that even Cloudflare's world-class DNS doesn't out-do mine in this simple case where clients are close by.

For HTTP/2 (and https) the time to first byte is around 300ms, including similar DNS time to before. Here is a little more detail:

The extra https/TLS overhead (time and bytes) is noticeable in this rather extreme case. At least HTTP/2 avoided opening up another connection.
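
To see where that extra time goes it can be instructive to time the TCP connect and the TLS handshake separately; here is a rough single-sample Python sketch (hostname again purely illustrative):

    import socket
    import ssl
    import time

    def handshake_times(host, port=443):
        """Rough single-sample (TCP connect, TLS handshake) times in seconds."""
        t0 = time.monotonic()
        sock = socket.create_connection((host, port), timeout=10)
        t_tcp = time.monotonic() - t0
        ctx = ssl.create_default_context()
        t1 = time.monotonic()
        tls = ctx.wrap_socket(sock, server_hostname=host)  # full TLS handshake here
        t_tls = time.monotonic() - t1
        tls.close()
        return t_tcp, t_tls

    print(handshake_times("www.cloudflare.com"))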

StatusCake

Update 2017-07-28: thanks to the nice people at StatusCake (Dan and the dev team) I compared a zero-byte HTML page served directly from my own server over plain HTTP/1.1, and via Cloudflare over https and HTTP/2.

The StatusCake test servers used for this test are in the UK, as is my server, and Cloudflare has UK POPs at least in London and Manchester.

This is in line with the previous data: even a super-fast CDN and a transport newer than HTTP/1.1 don't compensate for the TLS overhead.

Ad-Free Page HTTP/1.1 vs HTTP/2

Next I constructed a slightly cut-down version of the real main page, still with the bulk of its HTML, all its images, but for example no ads.

Note that in Cloudflare there is an option to auto-minify HTML (and CSS and JavaScript) which is effectively lossless compression. I have enabled it, and it is relevant to this test.

There is a little extra whitespace to help quickly manually edit this file when I update it (old school vi; no CMS here!). Cloudflare is able to remove most of it automatically.

Note that Cloudflare will also serve the content compressed with gzip if the browser will accept that encoding, which along with the minifying results in minimising the number of bytes sent over the network to achieve the desired displayed Web page.
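
For a rough sense of what gzip alone buys on a small HTML page, a few lines of Python will do (the filename being a hypothetical local copy of the page):

    import gzip

    html = open("index.html", "rb").read()   # hypothetical local copy of the page
    packed = gzip.compress(html)
    print(f"{len(html)} -> {len(packed)} bytes "
          f"({100 * len(packed) / len(html):.0f}% of original)")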

Cloudflare does not seem to support Brotli compression (br) even on HTML even when covered by "Cache Everything".

The images had already been manually roughly halved in size earlier.

A typical (unencrypted) HTTP/1.1 view gets lots of A scores, start render in 284ms and visually complete in 400ms, with a total transfer size of 13kB. All 4 objects (and the attempt to fetch the non-existent favicon.ico) were fetched over two TCP connections.

The primary HTML page download looked like:

A typical https HTTP/2 view yielded a start render of 590ms with visual completion 10ms later and a total of 24kB of HTML, images and certificates. All 4 objects (and the attempt to fetch the non-existent favicon.ico) were fetched on one TCP connection.

The first/main HTML transfer looked like (including ~94ms of TLS setup):

So with this simple page HTTP/2 saved one TCP connection but nearly doubled data transferred and time to display compared to the HTTP/1.1 version from the same world-class CDN server, and indeed from my own RPi! All the render times were entirely acceptable, but the https overhead is clear.

Fully Loaded Page HTTP/1.1 vs HTTP/2

This page has 3 AdSense 'responsive' ads on it.

For HTTP/1.1 the page content is visible in ~500ms and the page including ads is visually complete in 3.1s, with a speed index of 1716 and a total page weight including certificates of ~340kB. 9 TCP connections were used.

For HTTP/2 the page content is visible in ~600ms and the page including ads is visually complete in 3.0s, with a speed index of 2115+ and a total page weight including certificates of ~385kB+. 6 TCP connections were used.

So HTTP/2 still performs slightly worse than HTTP/1.1 from a user perspective, and though it saves some TCP connections it uses more data. Putting AdSense on the page largely swamps everything else.

It is probably reasonable to assume that Cloudflare's HTTP servers are well tuned; thus at the moment all I see is a minor detriment to the user experience and possibly to their and my carbon footprint from extra traffic from switching to HTTP/2.

Effectively compulsory use of TLS/https with HTTP/2 prevents helpful intermediaries from cacheing, transcoding, etc, which has scalability concerns for places further from the Web servers and with poor bandwidth, such as mobile use and the developing world. And the extra RTTs and opportunity for breakage on mobile hurt TLS/https even more.
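
The round-trip arithmetic is easy to sketch: a TCP handshake costs one RTT, and a full TLS 1.2 handshake roughly two more, before the HTTP request can even be sent (a simplification that ignores optimisations such as session resumption and False Start):

    # Approximate pre-request latency: TCP handshake (1 RTT) plus a full
    # TLS 1.2 handshake (~2 RTTs), ignoring resumption, False Start, etc.
    for rtt_ms in (20, 50, 200):   # eg close UK server, DSL, poor mobile
        print(f"{rtt_ms}ms RTT -> ~{3 * rtt_ms}ms before the HTTP request is sent")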

A big advantage of a CDN, when using HTTP/2 and thus https, is that with the TLS endpoint much closer to the user the time for any extra round-trips during handshaking is reduced, thus cutting latency and improving user experience, especially for the first page fetched.

Playtime

2017-05-16: as a side-side-project I've tried pushing some of the static content from this (Earth Notes) site into the Cloudflare hosted site to see what benefits may accrue from a more subtle blend of hosting locally and quietly moving some stuff to the cloud.

In particular (and this probably isn't a good idea for SEO purposes), I've mirrored my static images directory within the Cloudflare site. I've set those items up to have a long expiry, with Cache-Control marked 'public' and 'immutable', and redirected key items needed during page loading to the Cloudflare site.

(A normal page reload is able to avoid reloading at least one 'immutable' item with Firefox.)

Typically the front page first load from a clean cache (via http:// in either desktop or mobile versions) takes three connections, one of which is an HTTP/2 (over TLS) connection to Cloudflare which scoops up several immutable items.

Note that because of the nature of the site, most page views are the first and only page view of the site for a given user. Thus it makes sense to concentrate on optimising that first view with an empty cache (ie minimising latency), even at some cost in cacheing for nominal subsequent page views. So, for example, inlining small amounts of core CSS is definitely a win for HTTP/1.1, and quite likely for HTTP/2 too.

Pages are typically rendered in just over a second even over simulated 3G.

The trial policy is to link to the mirrored versions of the static items only from img tags and the like, ie fetches that happen implicitly as part of (and that could delay) page loading, while leaving explicit clickthrough links pointing to 'normal' canonical earth.org.uk addresses. I can simply check the logs for the likely best wins, ie the most-hit files under /img/ fetched from pages on site.
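
For example, a few lines of Python can pick out the most-fetched /img/ files from an Apache-style access log (the log path and exact format being assumptions about a typical setup):

    import collections
    import re

    hits = collections.Counter()
    pat = re.compile(r'"(?:GET|HEAD) (/img/\S+) HTTP/')
    with open("/var/log/apache2/access.log") as log:
        for line in log:
            m = pat.search(line)
            if m:
                hits[m.group(1)] += 1

    for path, n in hits.most_common(10):   # likely best wins to mirror
        print(n, path)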

Cloudflare has enough distinct edge servers (>100 as of 2017-05) that it could take a long time to populate all of them individually for global traffic without using their tiered/hierarchical 'Argo' service, for example. So my site will continue to see hits, and end users will see slow cache misses. It may indeed be hard to keep the less-frequently-visited geographies' edge caches live at all, and as of ~2017-05-22 I can see this whenever a user from a 'new' country pops up. Thus the Cloudflare hit rate is typically somewhere in the range 50--70% overall.

2017-05-26: I have managed to tweak things such that for the desktop front page all objects are delivered in parallel down two connections for a new visitor: 4 down the initial HTTP/1.1 connection from index.html to favicon.ico, and 5 via Cloudflare over HTTP/2, with first render in 655ms (speed index 750) and complete in ~1.5s with a total weight (including certs) of 51kB. (For mobile an extra connection is used even though only 5 objects transferred.)

2017-05-31: with more tweaking, including better handling of JavaScript and the extra CSS used on the home page, mobile (m) and desktop (www) home pages are essentially visually complete in 0.5s and ~1s respectively, with 5 objects, 2 connections and 17kB for mobile cf 8 objects, 3 connections and 46kB for desktop.

2017-06-17: a few days ago I switched back to serving all static content directly from the RPi (from the 'main'/www URL), not via the CDN. I haven't turned off the aliases via the CDN, and some spider requests are still coming through, but that's not a problem.

Carbon

Given some estimates (see Sources) for sending 11g--19g per 1MB of data across the Internet (in one case across 3G, in another for email) we might reasonably start from an estimate of 10g/1MB, so maybe ~0.1g to visit the ad-free page over HTTP/1.1, ~0.2g over HTTP/2 with https, rising to ~3.4g with ads, and to ~3.9g with ads and over HTTP/2 with https. This does not account for distance (byte miles?) for traffic, nor many other factors.
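
As a back-of-envelope check of those numbers (assuming a round 10g per 1MB and decimal megabytes):

    G_PER_MB = 10.0   # assumed grams CO2(e) per 1MB transferred

    def grams(kbytes):
        return kbytes / 1000 * G_PER_MB

    for label, kb in (("ad-free, HTTP/1.1", 13),
                      ("ad-free, HTTP/2 + https", 24),
                      ("with ads, HTTP/1.1", 340),
                      ("with ads, HTTP/2 + https", 385)):
        print(f"{label}: ~{grams(kb):.1f}g")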

Also to be accounted for are any differences in serving from efficient CDN servers (assumed more efficient than my RPi, even if it is running off-grid) closer to the end user (reduced byte miles), and less obvious effects such as any extra costs of (eg) handling uncached traffic such as 404s from robots.txt and malicious probe request 404s now taking a longer route.

Note also that objects from very lightly-used sites may continually be evicted from CDN caches, thus requiring extra work to refetch them every time (or nearly so) that they are requested, possibly doubling traffic and CPU effort to deliver to the end user. Though it is also possible that some indirect uncached routes via a CDN are faster and more efficient than going direct.

Not yet resolved...

2019-01-14 note: supporting AMP now means that the first page 'hit' on EOU from (Google) search may in fact not touch my site at all and will instead be entirely satisfied from the AMP Cache. A first-hit CDN. And indeed very few of the nominal AMP hits make it through to the solar-powered back end.
