Have you ever gone to a website you usually visit, only to find yourself unexpectedly challenged by a captcha?

None shall pass!

So you mash the keyboard a few times, and eventually you get to your website. Perhaps it was Google, reached from a large shared connection like a university's, but more likely you were trying to access a Cloudflare-protected website from a disreputable network like a Tor exit node.

But why is this a thing? Well, many websites don’t want to be DDoSed or intensively crawled by random web robots, so they invest in countermeasures to deny undesirable automated behaviour.

Such countermeasures are a fact of life on the internet, and will only become more so. I would argue the internet is in the process of splitting into three major classes of network:

Normal, regular internet

Protected, inspected networks behind services like Cloudflare. Let’s call them ‘inspected nets’ for now

Protected, anonymised darknets like Tor

Over time, these distinctions will only become more pronounced, and thus the interactions between the darknets and the inspected nets will only become more antagonistic.

Since we need to live in a world of both privacy and security, it is important we have a go at solving these conflicting requirements through means other than annoying captchas which penalise users for using darknets.

Now, if we were to assume ubiquitous JavaScript use, a solution would be to use a basic library to perform a series of browser-based functions which a human using a web browser could execute in under a second, but which a bot could not. This is a popular form of captcha, often deployed by both Google and Cloudflare, and it is gaining ground where keeping low-level unwanted bots out is the primary objective.
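As a concrete illustration, here is a minimal sketch (in TypeScript, runnable under Node) of one function of this sort: a hashcash-style proof-of-work, where the client searches for a counter whose hash against the server’s nonce meets a difficulty target. To be clear, the checks Google and Cloudflare actually run are proprietary and more varied; solveChallenge, the nonce:counter format and the difficulty figure are all assumptions of mine for illustration.

```typescript
import { createHash } from "crypto";

// Hashcash-style challenge (an illustrative assumption, not Google's or
// Cloudflare's actual check): find a counter such that
// SHA-256("nonce:counter") begins with `difficulty` zero hex digits.
// Cheap for one interactive request, expensive at bot scale.
function solveChallenge(nonce: string, difficulty: number): number {
  const target = "0".repeat(difficulty);
  for (let counter = 0; ; counter++) {
    const digest = createHash("sha256")
      .update(`${nonce}:${counter}`)
      .digest("hex");
    if (digest.startsWith(target)) {
      return counter;
    }
  }
}

// A difficulty of 4 hex digits (~65,000 hashes on average) finishes in
// well under a second on commodity hardware.
console.log(solveChallenge("123456abcdef", 4));
```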

Unfortunately this creates a hard JavaScript dependency for any site doing this, which is a step backwards for user security and in any case runs the risk of being bodged by the developer who implements it. But let’s take this JavaScript function: could it be performed by a hypothetical native browser function instead? It’d go like so:

User clicks on a link to example.com

Browser sends a ‘GET’ request to example.com

Example.com says, ‘perform this browser function to show you’re not a bot, with a cryptographic nonce of 123456abcdef (expires in 1 hour)’

Browser crunches the function and replays its GET request with the solution in the request header, thus responding with a proof-of-work (sketched in code after these steps)

Example.com accepts the amended request and returns the site contents

User accesses the site seamlessly
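Put together, the browser’s side of that exchange might look like the sketch below. The 429 status code and the X-PoW-Challenge / X-PoW-Solution header names are hypothetical placeholders (no such standard exists today), and solveChallenge is the function from the earlier sketch.

```typescript
// Hypothetical client-side flow: the first request is challenged, the
// browser solves the proof-of-work, then retries with the solution.
async function fetchWithProofOfWork(url: string): Promise<Response> {
  const first = await fetch(url);
  if (first.status !== 429) return first; // not challenged, nothing to do

  // Assumed header carrying the server's nonce, e.g. "123456abcdef"
  const nonce = first.headers.get("X-PoW-Challenge");
  if (nonce === null) return first;

  const counter = solveChallenge(nonce, 4); // from the earlier sketch
  return fetch(url, {
    headers: { "X-PoW-Solution": `${nonce}:${counter}` },
  });
}
```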

[ edit: the web server would hold an index of approved nonces, either in memory, in a local or remote database, or behind a remote API ]
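As a sketch of what that note implies, the simplest version of the index is an in-memory map of issued nonces and their expiry times, against which solutions are checked and then consumed. The names issueNonce and verifySolution are again made up for illustration; deleting a nonce on first use is one straightforward way to stop a solved challenge being replayed.

```typescript
import { createHash, randomBytes } from "crypto";

// Issued nonces mapped to their expiry time (ms since epoch). In-memory
// here; a real deployment could swap in a database or remote API, per the
// note above.
const issuedNonces = new Map<string, number>();

function issueNonce(ttlMs: number = 3_600_000): string {
  const nonce = randomBytes(8).toString("hex");
  issuedNonces.set(nonce, Date.now() + ttlMs);
  return nonce; // handed to the client in the challenge response
}

function verifySolution(header: string, difficulty: number = 4): boolean {
  const [nonce, counter] = header.split(":"); // "nonce:counter" format
  const expiry = issuedNonces.get(nonce);
  if (expiry === undefined || expiry < Date.now()) return false; // unknown or expired
  issuedNonces.delete(nonce); // single use: a solved challenge cannot be replayed
  const digest = createHash("sha256").update(`${nonce}:${counter}`).digest("hex");
  return digest.startsWith("0".repeat(difficulty));
}
```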

So, what if this were a browser standard? Websites would much more easily be able to deny DDoS and unwanted web-crawling activity, and this entire function could be performed by the edge / caching layer of a web application.

There would presumably need to be a way of allowing desirable web bots (such as search engine crawlers) through this protection layer in many cases, but crawlers are already carefully configured to crawl websites slowly, as a human user would, in order to avoid common rate limiting.

This system would allow friendly bots and humans in seamlessly, whilst relegating unfriendly bot activity to the edge network.