So the podcast app is a proof-of-concept to try out a bunch of APIs, and, as is normal for me, to battle-test ideas I have for the best way to approach modern development. Ultimately I’d love to push the source code and the app itself, much as I have done for Voice Memos and Guitar Tuner.

This kind of app, at many levels, feels like something that the web should be good for. It’s a case of ingesting others’ feeds & album art, putting their mp3 files in <audio> elements, and making a neat UI in which to present the whole experience. Looking at the tools at our disposal for all this, we have the Cache API, we have Service Workers, we have IndexedDB (which remains a horror show, but hey it does work if you talk to it nicely), and the graphics performance of many browsers can support the kinds of effects I love to make.
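
To make that concrete, the core ingestion step is tiny in principle. Here’s a minimal sketch (a regex stand-in for a real XML parser; `extractEnclosures` and the sample feed are made up for illustration):

```javascript
// Hypothetical sketch: pull episode audio URLs out of a podcast RSS feed.
// A real app would use a proper XML parser; a regex keeps the idea small.
function extractEnclosures(rssXml) {
  const urls = [];
  const re = /<enclosure\b[^>]*\burl="([^"]+)"/g;
  let match;
  while ((match = re.exec(rssXml)) !== null) {
    urls.push(match[1]);
  }
  return urls;
}

const sampleFeed = `
  <rss><channel>
    <item><enclosure url="http://example.com/ep1.mp3" type="audio/mpeg"/></item>
    <item><enclosure url="http://example.com/ep2.mp3" type="audio/mpeg"/></item>
  </channel></rss>`;

console.log(extractEnclosures(sampleFeed)); // both episode mp3 URLs, in feed order
```

That, plus an `<audio>` element and some caching, is the whole app in miniature, which is what makes the rest of this post so frustrating.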

[Image: an artist’s impression of two idiots who have a podcast about the web]

Here’s the thing, though: I can’t ship the app. I can’t because I’d also need to spin up a proxy for all the assets, and, if I mean business, the proxy would also need to handle the (very large) mp3 files as well. All of this is due to CORS, Mixed Content, and the lack of an escape hatch.
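
What that proxy requirement means in practice is blunt: every third-party URL gets rewritten to point at a server I control. A sketch, with a made-up proxy endpoint:

```javascript
// Hypothetical helper: rewrite every third-party asset URL so it is fetched
// through the app's own HTTPS proxy. The endpoint (my-proxy.example) is
// invented for illustration.
function proxied(assetUrl, proxyBase = 'https://my-proxy.example/fetch') {
  return proxyBase + '?url=' + encodeURIComponent(assetUrl);
}

console.log(proxied('http://example.com/art.jpg'));
// https://my-proxy.example/fetch?url=http%3A%2F%2Fexample.com%2Fart.jpg
```

One line of URL rewriting on the client; a whole fetching, validating, caching service on the server.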

TL;DR

If you want the cheat list (hey, who wouldn’t?), here’s what functionality is locked off unless I use a proxy. It’s worth noting that I want to be on HTTPS so that I can use a Service Worker:

Feature                                                                   Prevented by CORS   Prevented by Mixed Content
Reading insecure podcast feed data                                        No                  Yes
Analyzing insecure image assets (for UI colors)                           Yes                 Yes
Background-loading insecure mp3 files for offline (in a Service Worker)   No                  Yes

CORS and Mixed Content Warnings Are Necessary

CORS saves the bacon of many an intranet. The last thing you want is a random site being able to ping a load of addresses in the background and discover all manner of things about your SuperSecretProject™ without your knowledge. That would be bad, so it’s very good that browsers prevent that from happening. Browser vendors, then, can feel good about things: they have protected intranets and other sites from inadvertently leaking data. Internet high five!

Mixed Content warnings – or even just plain not loading a resource because it’s over an insecure connection – are also a good thing, because the user should know that the contract of HTTPS was broken by the developer loading something over HTTP. Another Internet high five!

While we’re here, however, it’s interesting to note that native doesn’t observe CORS restrictions or Mixed Content (today, at least, though there does seem to be a shift) and, as it happens, most people seem to consume podcasts through native apps. The knock-on effect is that nobody includes the CORS header (even, ironically, podcasts about the web), and they also serve their assets over HTTP. There’s no incentive not to.

I think the difference between web and native’s permissions model is a function of usage: we actively install apps, granting permissions as we do so (in some cases implicitly), but we skim through sites, and we expect browsers to keep us safe while we do so. It’s right in the web’s model for CORS to exist. CORS, however, has no user permissions component, so the restriction ends up being almost entirely in the hands of SysOps.

CORS and Mixed Content Have Consequences

CORS, and its good buddy Mixed Content warnings, excellent as they are for protecting users (and they are excellent), bring developers, users, and publishers some unpleasant side-effects:

- Developers need to make proxies / directories. I have started calling this “resource laundering”, because that’s what it really is. Depending on the app in question, this ranges from “meh” to “excuse-me-wat”. The user is now really being MITM’d by the app for any HTTP resource (or potentially HTTPS, too, depending on the remote side’s config). As the app developer, you’ll probably want to offer some app-wide store or directory, so that you can validate the feed once, then offer it to all of your users. That’s going to involve a fair amount of stream parsing, validation, and caching logic. All of which equates to time, cost, and expertise.

- Developers have to pay the bandwidth bill. There’s an interesting side-effect of Service Workers requiring HTTPS: mixed content warnings. Most podcast audio files are served over HTTP. If you request the file from a page you’re going to get a warning, which is ick but understandable. But if you want to request the file in the background with a fetch inside of a Service Worker, you’re out of luck: if there’s no client making the request, the fetch will fail. This is the correct behaviour because there’s no client to put up the mixed content warning, and that wipes out the possibility of doing background sync to get mp3 files. The solution is to take a copy of the mp3 files yourself, so you can guarantee they are served over HTTPS, but now you have to pay every time a user downloads an mp3 file.

- Publishers’ ads & analytics get broken. If I start proxying on behalf of all my users, the publishers of that content will get – say – 1 hit from me, and then the users of my podcast app won’t register their hits because they got the content from my server. That would affect any business they get based on their traffic, since it’s no longer being reported accurately. The more successful the app, the worse it would be for publishers.

- Developers have to take undue responsibility. From a personal point of view – and I doubt I’m the only person for whom this is true – I don’t want to take on the responsibility of proxying others’ data, because there’s nothing I can do to stop people requesting all kinds of awful things with it. I have to think super defensively about how best to protect, launder, and vet everything that passes through it, which shifts the nature of building for the web to somewhere I find genuinely uncomfortable. That’s over and above the cost and complexity, which were already dealbreakers for me.

- User privacy potentially takes a hit. A side effect of using an app with a proxy is that everyone’s usage habits are shared by default with the app creator, which, in this case, would be me. Now, sure, anyone could beacon that data back, and maybe I’d make the most of it on their behalf and offer recommendation services, but it seems to me that making that the default state isn’t necessarily in the user’s best interests.
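
The Service Worker constraint boils down to a one-line rule. A toy model of it (not browser code; real workers can only be served over HTTPS anyway):

```javascript
// Toy model of the rule above: a Service Worker has no page on which to
// surface a mixed content warning, so an http resource fetched from an
// https worker simply fails, while an https resource is fine.
function canBackgroundFetch(workerOrigin, resourceUrl) {
  const workerIsSecure = workerOrigin.startsWith('https:');
  const resourceIsSecure = resourceUrl.startsWith('https:');
  return !workerIsSecure || resourceIsSecure;
}

console.log(canBackgroundFetch('https://app.example', 'http://cdn.example/ep.mp3'));  // false
console.log(canBackgroundFetch('https://app.example', 'https://cdn.example/ep.mp3')); // true
```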

Of course, the upside is you get to make sure every podcast’s feed is valid, and you get to serve it with HTTPS. I guess every cloud has a silver lining. It still doesn’t feel to me like the end justifies the means.

The Purist Answer

Some of the people to whom I’ve spoken have given me the “right answer”, which is that every site should be HTTPS and serve its publicly accessible content with the CORS header: Access-Control-Allow-Origin: *. That is correct, and it would evaporate my issue overnight. In Realitysville (population: me, sadly), that’s not going to happen any time soon. So far not one podcast I’ve checked includes the CORS header and, even if it did, if it serves its assets over HTTP then background sync fetches in Service Workers would fail; or, if I loaded them directly, there’d be a necessary but scary-looking warning to users, which isn’t the experience I would want to ship.
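
For a publisher who did want to opt in, the change itself is small. A hypothetical nginx sketch (the path and setup are illustrative, not anyone’s real config):

```nginx
# Serve the feed over HTTPS and let any origin read it.
location /feed.xml {
    add_header Access-Control-Allow-Origin *;
}
```

The header is one line; getting every podcast host to ship it (and to move their mp3s to HTTPS) is the part that isn’t happening.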

The “C’mon, it’s Okay” Answer

Many people I’ve spoken to don’t see writing a proxy as tragic, let alone an issue. At the pragmatic level I can see that: if you’re in the business of shipping an app, you may as well just suck it up and get on with it. But there’s something really wrong to my mind if the best solution we can muster is tantamount to “just send your HTTP traffic through me; I’ll not do anything with it, pinky promise.” Sorry, I don’t want to be that developer, and I don’t see why others should be, either.

I also think there’s a reasonable economic argument against this objection, too: if your app is relatively successful, and its user base means you transfer in ~1TB of data, and ~10TB out every month (not an unreasonable estimate if the average podcast file weighs in at 50MB, there are four new episodes a month, your user base subscribes to 4 or 5 podcasts each, and you have in the region of 10k people using your app), then your bandwidth bill would be around $920 a month on Amazon’s S3 (hit their calculator and pop in 10TB for outgoing and 1TB for incoming). That’s a significant cost, and I didn’t even get to handling feeds, images, APIs, disk space, RAM, or CPU time.
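
Those numbers are easy to sanity-check. A quick back-of-envelope calculation using the estimates above (assumptions, not measurements):

```javascript
// Back-of-envelope check of the bandwidth estimate in the text.
const mbPerEpisode = 50;
const episodesPerMonth = 4;
const subscriptionsPerUser = 4.5; // "4 or 5 podcasts"
const users = 10000;

const monthlyOutTb =
  (mbPerEpisode * episodesPerMonth * subscriptionsPerUser * users) / 1e6;
console.log(monthlyOutTb + ' TB out per month'); // prints "9 TB out per month"
```

That ~9TB roughly matches the ~10TB/month figure, before feeds, images, retries, or any growth.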

Also, as I said before, the content publisher misses out on content hits if I cache their file.

The “How About a Big Proxy?” Answer

Mozilla’s Anne van Kesteren has suggested that browsers could perhaps ship with a proxy. Other people I spoke to have also suggested this as a possible approach. This could be helpful inasmuch as it would circumvent the need for developers to ship their own proxies. That’s a win for ergonomics.

But I think there are some issues here, too, depending on what your view of the proxy is. The proxy could act as a fallback mechanism where, should the request fail because of a lack of CORS header, it could confirm that it can get at the resource and, therefore, the lack of CORS header becomes a non-issue. What does that mean if the asset is requested over HTTPS, though? It would also be beaconed to the proxy, which seems to break the HTTPS concept that your traffic is solely between the client and the server. Now it’s the client, the server, and the proxy.
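
One way to picture that fallback behaviour, as a sketch rather than anyone’s actual proposal (`proxyBase` is made up, and `fetchFn` is injectable so the logic can be exercised without a network):

```javascript
// Sketch of the fallback-proxy idea: try the resource directly, and only
// route through a trusted proxy when the direct request fails.
// proxyBase is hypothetical; fetchFn defaults to the real fetch but can be
// swapped out for testing.
async function fetchWithFallback(url, proxyBase, fetchFn = fetch) {
  try {
    return await fetchFn(url);
  } catch (err) {
    // A CORS or mixed content rejection surfaces as a TypeError here.
    return fetchFn(proxyBase + '?url=' + encodeURIComponent(url));
  }
}
```

Note that every request which falls back is still disclosed, in full, to whoever runs the proxy, so the trust question doesn’t go away.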

Any proxy, however big or small, requires the user’s trust, because it is included in the traffic between the client and server. Which I guess is another way of me saying that to me it doesn’t matter if the app proxies the traffic, or another entity does; in both cases it carries the same issues.

The Web as an App Platform

People tell me they want the web to be a first-class app platform; I want them to be right, but as it stands I don’t see how it can be without some progress in this area. Yes, intranets and other sites are protected, and that’s absolutely the right thing to do (please don’t read this post as me having a direct objection to CORS or Mixed Content; I don’t), but we need something more here to not simply move the issue elsewhere, and create new problems in the process.

I have no idea what the right solution is, but I also didn’t think up many of the awesome improvements the web has seen in the past few years, so while I might not have an answer, I really hope someone else might. Maybe the right solution is that browsers get better at detecting what is on an intranet and what is publicly available outside of the current network. I don’t know, but I believe we need something if we genuinely want the web to be a first-class app platform. I don’t want to vomit another proxy onto the web, and I don’t want anyone else to, either.

If you have any great ideas, let me know.

Thanks to Mike West, Joel Weinberger, Adrienne Porter Felt, and Alex Russell for reviewing this post and helping me figure out where I wasn’t being clear. Thanks to Mike Mahemoff for helping me understand what running a service like this is like for real. Thanks to Paul Kinlan for not insisting I include a Corrs lyric before pushing this post live… that was a very real danger.