This article is part of a series on PayPal’s Cross-Domain Javascript Suite.

CORS is pretty powerful, as far as web technologies go. It enables you to write web apis which can be consumed across whatever client origins you decide on, which opens up a whole world of possibilities for web applications to do things like share apis, build serverless architectures, expose data to third party sites, and more.

There are some things about CORS which make it a little painful though:

Any CORS policy change requires a server-side code change (or at the very least, a config change), which is likely to require some kind of deploy.

You can allow urls, headers, methods, etc., but there’s no built-in way to express more complex rulesets, like “allow origin A to send header X, and allow origin B to send header Y, but block origin C entirely”.

Anything other than a simple get request is likely to trigger a preflight request to determine whether the url is CORS-eligible, before the actual call is made. This is another potential point of failure on the network, and it also introduces extra latency.

Browser support for CORS is mostly good, but if you have to support some older browsers (even IE9), you’ll need a backup option like JSONP on top of (or instead of) CORS.
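To make the preflight point concrete: the browser only skips the OPTIONS preflight for requests that fit the spec’s “simple request” limits. Those limits can be sketched roughly like this (a simplified check, not an exhaustive implementation of the Fetch standard):

```javascript
// Rough sketch of the CORS "simple request" rules: any request that
// falls outside these limits triggers an OPTIONS preflight first.

const SIMPLE_METHODS = [ 'GET', 'HEAD', 'POST' ];
const SIMPLE_HEADERS = [ 'accept', 'accept-language', 'content-language', 'content-type' ];
const SIMPLE_CONTENT_TYPES = [ 'application/x-www-form-urlencoded', 'multipart/form-data', 'text/plain' ];

function needsPreflight(method, headers = {}) {
    if (!SIMPLE_METHODS.includes(method.toUpperCase())) {
        return true; // e.g. PUT, DELETE, PATCH always preflight
    }
    for (const [ name, value ] of Object.entries(headers)) {
        const lower = name.toLowerCase();
        if (!SIMPLE_HEADERS.includes(lower)) {
            return true; // custom headers like x-csrf trigger a preflight
        }
        if (lower === 'content-type' && !SIMPLE_CONTENT_TYPES.includes(value)) {
            return true; // e.g. application/json is not a "simple" content type
        }
    }
    return false;
}

needsPreflight('GET', {});                                      // → false
needsPreflight('POST', { 'content-type': 'application/json' }); // → true
```

So the moment you send JSON, or a custom header, you’re paying for an extra round trip.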

So I had a thought: why not take advantage of post-messaging with post-robot and send these requests through an iframe to the target domain? That would mean:

No preflight request

A single place where a cross-origin policy could be defined

Client-side only cross-domain requests

No browser support for CORS needed!

A little googling turned up the xdomain project, which has done something similar in the past. This is awesome, but it didn’t quite fit what I was looking for, for a few reasons:

It hooks into XMLHttpRequest, and I’m hoping to take advantage of fetch (ideally without a polyfill when it’s native to the browser).

It appears to take over any request to the given domain; this has advantages, since it works seamlessly with anything that uses XMLHttpRequest, like jQuery and Angular’s $http. But I’d prefer proxied calls to be explicit, so I know when my code is making a proxied http call.

There doesn’t seem to be any way to set up a policy for which paths, headers, methods, cookies, etc. are allowed or disallowed. This seems dangerous to me: if you’re hoping to expose your api layer to external callers, you probably want to be explicit about what you allow through.

It seems like the ‘master’ domain needs to know which ‘slave’ domain will be making requests, without any clear way to allow calls from multiple wildcard domains or domain patterns (without enumerating them all explicitly).

With all this in mind, I spent a few days hacking on something I’ve dubbed ‘fetch-robot’:

The idea is, in the ‘server’ frame (where the actual request will be made), you specify a policy for what you want to allow, and from which origins:
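The policy embed didn’t survive extraction here, so the following is a hedged sketch of the shape such a config might take. The serve entry point, rule fields, and urls are illustrative assumptions, not a verbatim copy of the fetch-robot api:

```javascript
// Hypothetical sketch of a fetch-robot style server-side policy.
// The serve() entry point and rule field names are illustrative.
fetchRobot.serve({
    allow: [
        {
            // Allow this origin to call these paths with these methods...
            origin:  [ 'https://site-a.com' ],
            path:    [ '/api/users', /^\/api\/widgets\/.+$/ ],
            method:  [ 'get', 'post' ],
            // ...and to send this non-standard header
            headers: [ 'x-csrf' ]
        },
        {
            // A second origin, with a more limited surface area
            origin:  [ 'https://site-b.com' ],
            path:    [ '/api/public' ],
            method:  [ 'get' ]
        }
    ]
});
```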

If any of the allow rules passes, the request is let through. This gives you extremely granular control over who can call what: which origins, which paths, which headers, and so on.
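As an illustration of that any-rule-passes behavior, here’s a small self-contained re-implementation of the matching idea. This is my own sketch of the concept, not fetch-robot’s actual source:

```javascript
// Illustrative re-implementation of "if any allow rule passes, the
// request is allowed through" -- a sketch, not fetch-robot's code.

function matches(patterns, value) {
    // A rule field is a list of exact strings or regexes;
    // an omitted field matches anything.
    if (!patterns) { return true; }
    return patterns.some(p => (p instanceof RegExp) ? p.test(value) : p === value);
}

function isAllowed(policy, request) {
    return policy.allow.some(rule =>
        matches(rule.origin, request.origin) &&
        matches(rule.path, request.path) &&
        (request.headers || []).every(h => (rule.headers || []).includes(h))
    );
}

// "Allow origin A to send header x-csrf, allow origin B with no custom
// headers, block origin C entirely":
const policy = {
    allow: [
        { origin: [ 'https://site-a.com' ], headers: [ 'x-csrf' ] },
        { origin: [ 'https://site-b.com' ] }
    ]
};

isAllowed(policy, { origin: 'https://site-a.com', headers: [ 'x-csrf' ] }); // → true
isAllowed(policy, { origin: 'https://site-b.com', headers: [ 'x-csrf' ] }); // → false
isAllowed(policy, { origin: 'https://site-c.com', headers: [] });           // → false
```

This is exactly the kind of ruleset that plain CORS headers can’t express without server-side branching logic.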

This config essentially acts as a CORS api spec, and doubles as documentation for any external callers.

Then in the ‘client’ window, you connect to the proxy url (fetch-robot will set up an invisible iframe in the background), and use fetch with the usual parameters:
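The client-side snippet was an embed in the original post; its rough shape is below. Treat the connect entry point, option names, and urls as illustrative assumptions about the fetch-robot api rather than a verbatim example:

```javascript
// Hypothetical sketch of the client side: connect() sets up the hidden
// iframe to the proxy url, then fetch is used with the usual options.
fetchRobot.connect({ url: 'https://target-domain.com/proxy-frame' }).then(proxy => {
    return proxy.fetch('https://target-domain.com/api/users', {
        method:  'post',
        headers: { 'x-csrf': '12345' },
        body:    JSON.stringify({ name: 'Zoltron' })
    });
}).then(response => response.json());
```

Note there’s no preflight here: the request travels over post-messaging to the iframe, which applies the policy and makes a plain same-origin fetch.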

fetch-robot doesn’t yet have 1:1 feature parity with fetch (it’s just the result of a weekend of hacking, so far), but most of the fundamentals are there. Would love to know if it’s useful for anyone else!