A few weeks back, we were chatting about the architecture of the Individual Electoral Registration web service. We started discussing the pros and cons of an approach that would provide a significantly different interaction for any people not running JavaScript.

“What proportion of people is that?” an inquisitive mind asked.

Silence.

We didn’t really have any idea how many people are experiencing UK government web services without the enhancement of JavaScript. That’s a bad thing for a team that is evangelical about data driven design, so I thought we should find out.

The answer is: 1 in every 93 visitors isn’t receiving the benefit of JavaScript enhancements.

So, 1 user in every 93 has JavaScript disabled?

No. Surprisingly, people who have explicitly disabled JavaScript, or who use a browser that doesn’t support it, make up only a small slice of those not running JavaScript.

So what? Shouldn’t we support people without JavaScript anyway?

Yes, we do support them.

This isn’t about whether we should offer a good service to people without JavaScript; progressive enhancement, done well, ensures we always will. But it’s interesting to know that 1 in 93 people will experience the design without JavaScript enhancement, especially when it comes to prioritising how much of our time to spend on that design.

How did we calculate these numbers?

Unlike other interesting numbers (such as IE6 or mobile device usage), it wasn’t a simple web analytics query, not least because standard analytics packages typically capture usage through the execution of JavaScript.

Web server logs tell us more, but they won’t tell us whether people are running JavaScript. Perhaps a combination of the two then?

Web server traffic - JavaScript analytics traffic = non-js traffic?

Well, we tried this, but in short, it wasn’t accurate enough. There was enough variance in the data, as a result of local and corporate caching, bots, analytics blockers, timing, latency in the disparate logging and so on, to make us doubt the accuracy of what came back, particularly when dealing with relatively small proportions.
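The subtraction approach can be sketched in a few lines. The visit counts below are placeholders purely for illustration, not real measurements; the point is that the estimate is just the difference of two independently noisy counts.

```python
def estimate_non_js(server_log_visits: int, analytics_visits: int) -> float:
    """Naive estimate: fraction of visits that apparently didn't run JavaScript,
    computed as (server-log visits - analytics-reported visits) / server-log visits."""
    return (server_log_visits - analytics_visits) / server_log_visits

# Caching, bots, analytics blockers and logging latency each skew the two
# counts independently, so a small difference like this is swamped by noise.
print(estimate_non_js(100_000, 98_500))  # 0.015
```

Because both counts carry errors of roughly the same magnitude as the quantity being estimated, the result wasn’t trustworthy for proportions this small.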

What was the solution?

So @tombaromba hacked some code into the GOV.UK homepage (an approach inspired by an experiment Yahoo! conducted in 2010). We chose this page because of its high volume of traffic and low likelihood of any bias towards a particular user group or demographic.

This code included three images, of which any given browser should request exactly two.

First, an image that virtually all browsers would request (the ‘base image’).

And then either:

an image that only browsers executing JavaScript would request (the ‘script image’), or

an image that only browsers not executing JavaScript would request (the ‘noscript image’)
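A minimal sketch of what such markup might look like (the image paths and structure here are illustrative, not the actual GOV.UK code):

```html
<!-- Base image: requested by virtually all browsers -->
<img src="/base-image.gif" alt="">

<!-- Script image: only requested if JavaScript actually executes -->
<script>
  var img = new Image();
  img.src = '/script-image.gif';
</script>

<!-- Noscript image: only requested by browsers with JavaScript
     disabled or unsupported -->
<noscript>
  <img src="/noscript-image.gif" alt="">
</noscript>
```

Each image request lands in the web server logs, so counting requests per image gives the split without relying on any analytics JavaScript.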

We deployed this code and then collected the log data from over half a million visits. I expected that the number of ‘base image’ requests would closely match the combined ‘script image’ and ‘noscript image’ requests.

I was wrong.

509,314 visits requested the ‘base image’.

503,872 visits requested the ‘script image’.

1,113 visits requested the ‘noscript image’.

Which meant that 4,329 visits requested neither the ‘script image’ nor the ‘noscript image’, significantly more than the 1,113 visits requesting the ‘noscript image’.
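The arithmetic behind these figures, using the counts above:

```python
# Visit counts from the GOV.UK homepage experiment described above.
base = 509_314      # visits requesting the 'base image'
script = 503_872    # visits requesting the 'script image'
noscript = 1_113    # visits requesting the 'noscript image'

# Visits that requested the base image but neither of the other two:
missing = base - script - noscript
print(missing)  # 4329

# Total visits without JavaScript enhancement, and the headline ratio:
no_js = missing + noscript
print(no_js)            # 5442
print(base // no_js)    # 93, i.e. roughly 1 visit in every 93
```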

Why is there such a big difference?

I *now* know that ‘noscript’ tags will only be followed by browsers that explicitly have JavaScript disabled or don’t support JavaScript at all. So a significant number of people had a JavaScript enabled browser but still didn’t run the scripts successfully.

It’s hard to know exactly why these browsers didn’t run the JavaScript, but a number of possible reasons are:

corporate or local blocking or stripping of JavaScript elements

existing JavaScript errors in the browser (e.g. from browser add-ons, toolbars etc)

page being left between requesting the base image and the script/noscript image

browsers that pre-load pages they incorrectly predict you will visit

network errors, especially on mobile devices

and undoubtedly many more I haven’t even thought about...

So while the reasons are interesting, ultimately why a particular visit doesn’t receive the enhancements is largely irrelevant. What’s important is understanding how many people this affects, and now we know.

Is there a trend?

This is the first time that we have carried out this analysis at GDS. We have earlier results from Yahoo!, which suggested that in the UK in 2010, 1.3% of people were disabling JavaScript.

Since 2010 there has been strong growth in the use of smartphones, most of which will receive and run JavaScript, so it’s not unexpected that the numbers have fallen slightly and I would expect that to continue.

We can’t be sure how comparable this data is with the Yahoo! data. The user base may be different and we can't be sure if Yahoo! was just measuring people explicitly disabling JavaScript or also including those not running it.

We will look to repeat this analysis on a more regular basis and will share anything interesting we find.

Pete Herlihy is a Product Manager, GDS

Follow Pete on Twitter: @yahoo_pete