A Border Gateway Protocol (BGP) route leak caused problems for Cloudflare, Amazon Web Services and Google on Monday night, with many websites that rely on these and other large operators either slow or inaccessible.

The problems started at about 9.00pm Sydney time on Monday night and appeared to have been resolved just before 11.00pm.

In a status advisory, Cloudflare blamed a route leak for the problems.

“We have identified a possible route leak impacting some Cloudflare IP ranges and are working with the network involved to resolve this,” it said.

It later said that the network responsible for the route leak had fixed the issue and that traffic was returning to normal.

In a post on Hacker News sharing more detailed status advisories, Cloudflare said it worked "with networks around the world" to resolve the problems, which it said also impacted "network routes for Google and AWS" in addition to its own.

A BGP route leak occurs when a network advertises routes it is not meant to, sending internet traffic over unintended paths. Leaks are usually accidental.
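Why a leak pulls traffic away from the legitimate path comes down to how routers pick routes: forwarding uses longest-prefix match, so a leaked, more-specific announcement beats the legitimate aggregate. A minimal sketch, using hypothetical example prefixes and AS numbers rather than the actual leaked routes:

```python
from ipaddress import ip_address, ip_network

# Hypothetical routing table. BGP forwarding uses longest-prefix match,
# so a leaked, more-specific route wins over the legitimate aggregate.
routes = {
    ip_network("203.0.113.0/24"): "legitimate origin",
    ip_network("203.0.113.0/25"): "leaked more-specific route",
}

def best_route(dest):
    # Pick the matching prefix with the longest mask, as a router would.
    matches = [n for n in routes if ip_address(dest) in n]
    return routes[max(matches, key=lambda n: n.prefixlen)]

print(best_route("203.0.113.10"))   # the leaked /25 attracts this traffic
print(best_route("203.0.113.200"))  # outside the /25, still reaches the origin
```

Addresses covered by the leaked /25 are drawn toward the leaking network even though the legitimate /24 is still being announced.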

The impact was felt by a sizable number of internet services and site owners.

Many sites that sit behind Cloudflare - either for security protection or performance reasons - were inaccessible.

Australian cryptocurrency exchanges CoinSpot and BTC Markets were among local users of Cloudflare that were impacted.

AWS also said in a statement that "an issue with an external provider outside of our network ... is impacting internet connectivity between some customer networks and multiple AWS regions."

“Connectivity to instances and services from other providers and within the region is not impacted by the event," it said.

WordPress host WP Engine meanwhile reported issues in Asia/Pacific, Europe and North America as a result of the global routing problem.

Popular workplace collaboration tool maker Slack also suffered problems for those accessing its services via a browser, though its apps were working without issue.

BGPmon.net said Monday night that the "large BGP incident caused 20,000 prefixes for 2400 network [sic] to be rerouted through AS396531 (a steel plant), and then on to its transit provider: Verizon" - confirming an earlier tweet on the suspected root cause.

Cloudflare later said in a blog post Tuesday that "a small company in Northern Pennsylvania became a preferred path of many Internet routes through Verizon (AS701), a major Internet transit provider.

"This was the equivalent of Waze routing an entire freeway down a neighbourhood street - resulting in many websites on Cloudflare, and many other providers, being unavailable from large parts of the internet," it said.

"During the incident, we observed a loss, at the worst of the incident, of about 15 percent of our global traffic."

Cloudflare said the problem "should never have happened because Verizon should never have forwarded those routes to the rest of the internet."

It repeated calls it made last year for operators in the industry to consider changing "operational practices for BGP routing and filtering ... in order to finally stop route leaks and hijacks."
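The filtering Cloudflare is calling for amounts to transit providers rejecting customer announcements that fall outside the prefixes the customer is registered to originate. A minimal sketch of that idea, with an assumed registered prefix and a maximum prefix length (not any specific vendor's filter syntax):

```python
from ipaddress import ip_network

# Hypothetical filter a transit provider might build from routing-registry
# data: accept only prefixes the customer is registered to originate, and
# nothing more specific than a sane maximum length.
customer_allow = [ip_network("198.51.100.0/22")]  # assumed registered block
MAX_LEN = 24  # reject overly specific announcements

def accept_announcement(prefix):
    net = ip_network(prefix)
    if net.prefixlen > MAX_LEN:
        return False
    return any(net.subnet_of(allowed) for allowed in customer_allow)

print(accept_announcement("198.51.100.0/23"))  # True: within registered block
print(accept_announcement("203.0.113.0/24"))   # False: not the customer's prefix
```

A filter like this at the transit edge would have dropped the leaked routes before they reached the rest of the internet.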