Fifteen years on, this paragraph from a Bill Gates memo is a bit cringe-inducing:

The events of last year — from September’s terrorist attacks to a number of malicious and highly publicized computer viruses — reminded every one of us how important it is to ensure the integrity and security of our critical infrastructure, whether it’s the airlines or computer systems.

Equating computer viruses with the worst terrorist attack in U.S. history may be a bit over-the-top, but for Microsoft, anyway, 2001 was a period of real crisis: the company’s software was hit by seven different worms, all following on the heels of the previous year’s massively damaging ILOVEYOU worm. More and more consumers were scared to even use their computers.

That was the context for perhaps the second-most famous Gates memo — Trustworthy Computing — from which the above excerpt was taken. This was the core takeaway:

There are many changes Microsoft needs to make as a company to ensure and keep our customers’ trust at every level – from the way we develop software, to our support efforts, to our operational and business practices. As software has become ever more complex, interdependent and interconnected, our reputation as a company has in turn become more vulnerable. Flaws in a single Microsoft product, service or policy not only affect the quality of our platform and services overall, but also our customers’ view of us as a company… In the past, we’ve made our software and services more compelling for users by adding new features and functionality, and by making our platform richly extensible. We’ve done a terrific job at that, but all those great features won’t matter unless customers trust our software. So now, when we face a choice between adding features and resolving security issues, we need to choose security. Our products should emphasize security right out of the box, and we must constantly refine and improve that security as threats evolve.

‘Trustworthy Computing’ was in many respects the inevitable counterpart to Gates’ most famous memo, 1995’s The Internet Tidal Wave:

In this memo I want to make clear that our focus on the Internet is crucial to every part of our business. The Internet is the most important single development to come along since the IBM PC was introduced in 1981. It is even more important than the arrival of the graphical user interface (GUI).

Obviously Gates was right, but the memo went further: it is packed with ideas about how Microsoft could “superset the Web” in order to “make it clear that Windows machines are the best choice for the Internet”; to that end Gates wrote, “I want every product plan to try and go overboard on Internet features.” And, when Microsoft did exactly that, the result was a set of products with massive security holes, resulting in a crisis. The faster you move towards the future, the more unintended consequences — security debt, if you will — there inevitably will be.

The analogy to Facebook is straightforward: operating with the motto of “Move Fast and Break Things” the company has spent the last decade going overboard, as it were, on connecting everyone and everything. And then, to handle the deluge of information that resulted, the company helpfully presents an algorithmically curated News Feed that shows exactly what it thinks its users will enjoy seeing the most (engagement being a necessary proxy for enjoyment). It is truly a marvel: individual customization at global scale.

There have, though, been side effects.

Russian Ads

I wrote about Russian political ads on Facebook two weeks ago, explaining how the ads were bought through Facebook’s self-serve ad model; this allows the company’s five million advertisers — given that number, by definition the vast majority are small and medium-sized businesses — to run ads without having to interact with another human.

This, I argued, was a good thing, and I absolutely stand by it. From that article:

The biggest beneficiaries of zero transaction costs on the super-aggregators are not traditional advertisers, whether that be companies like CPG conglomerates or presidential campaigns. Both have the resources to advertise anywhere and everywhere, and indeed, often find that the fine-tooth targeting on super-aggregators isn’t worth the effort required. The folks that do benefit, though, are those that wouldn’t have a voice otherwise: startups and niche offerings, both in terms of business and politics. Google and Facebook have opened the field to far more entrants, and while that means there are more folks with bad intentions, there are also a whole lot more folks with ideas that were shut out by the significant transaction costs inherent in pre-Internet platforms.

That line, “folks with bad intentions”, should sound familiar: that is exactly what led to Microsoft’s crisis in 2001. Instead of building for local networks that were protected by the fact that access was non-scalable (i.e. physical access was required), Microsoft products were now on the Internet where they could be attacked from anywhere by anyone. And, when you have to defend against anyone, the likelihood of facing “folks with bad intentions” becomes a certainty. So it is with Facebook self-serve ads.

What is just as important to note, though, is that a scalable solution is also required. In the case of Microsoft, it obviously wasn’t viable to simply rip out Internet connectivity from its products; it is similarly foolhardy to suggest that Facebook abandon all of the benefits of the self-serve model by, for example, reviewing every ad.

To reiterate the point, this is impossible. To use the Russian ad numbers as a proxy, consider the math:

The $100,000 spent by the 470 inauthentic accounts identified by Facebook was good for 3,000 ads, which means each ad cost an average of roughly $33.

As a quick but essential aside, this exercise is going to be a very rough approximation, because the price paid for an ad varies hugely depending on how finely targeted it is, and how competitive said targeting opportunities are. In the case of these ads, Facebook revealed yesterday that for 50% of the ads less than $3 was spent, and for 99% of the ads less than $1,000 was spent (and 25% weren’t even shown because they failed to win the auction for the audience they targeted). However, given that Facebook only reveals the percentage change in its average price per ad, not the actual amount, $33 is the best we can do.

Last quarter Facebook had $9.2 billion in ad revenue, an increase of 47% over the year prior. Using that $33/ad number, that means last quarter there were approximately 276 million unique ads on Facebook (each of which could be shown multiple times, of course).
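The back-of-the-envelope math is simple enough to sketch in a few lines; bear in mind that, as noted above, both the per-ad average and the resulting unique-ad count are rough approximations, not Facebook-disclosed figures:

```python
# Rough back-of-the-envelope estimate from the figures above.
russian_spend = 100_000   # dollars spent by the inauthentic accounts
russian_ads = 3_000       # number of ads in that buy
avg_cost = russian_spend / russian_ads  # roughly $33 per ad

quarterly_ad_revenue = 9_200_000_000    # Facebook ad revenue, last quarter
# Each unique ad could of course be shown many times; this only estimates
# how many distinct ads a human review process would have to handle.
est_unique_ads = quarterly_ad_revenue / avg_cost  # roughly 276 million

print(f"Average cost per ad: ${avg_cost:.2f}")
print(f"Estimated unique ads last quarter: {est_unique_ads / 1e6:.0f} million")
```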

Again, the actual number could be different than this by a huge margin — it is very likely that this Russian ad buy is not at all representative — but that margin could go in either direction. The important takeaway is that looking at every ad means effectively killing self-serve, which not only kills Facebook’s revenue model, but, far more importantly, removes a truly accessible and disruptive advertising channel for small and medium businesses, particularly those uniquely enabled by the Internet.

Fixing Facebook Ads

What makes far more sense is for Facebook to find a point of leverage; for Microsoft, this was relatively easy — harden the operating system, which the company did with XP Service Pack 2. Facebook’s challenge is harder, but the point of leverage seems clear: advertisers themselves, not advertisements; after all, 5 million all-time is a much more manageable number than 276 million a quarter. To that end, the company also announced yesterday a change in how it handled U.S. political advertisers:

Increasing requirements for authenticity. We’re updating our policies to require more thorough documentation from advertisers who want to run US federal election-related ads. Potential advertisers will have to confirm the business or organization they represent before they can buy ads. As Mark said, we won’t catch everyone immediately, but we can make it harder to try to interfere.

This is the right point of leverage, but this policy change is inadequate. The only advertisers affected here are those that explicitly declare they are running ads for US federal elections; what about state elections, or other countries, or, pertinent to this case, bad actors?

Facebook should increase requirements for authenticity from all advertisers, at least those that spend significant amounts of money or place a large number of ads. I do believe it is important to make it easy for small companies to come online as advertisers, so perhaps documentation could be required for a $1,000+ ad buy, or a cumulative $5,000, or after 10 ads (these are just guesses; Facebook should have a much clearer idea what levels will increase the hassle for bad actors yet make the platform accessible to small businesses). This will make it more difficult for bad actors in elections of all kinds, or those pushing scummy advertising generally.
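A policy like this is straightforward to express as a rule. To be clear, the thresholds below are the guesses from the paragraph above, not anything Facebook has announced, and the function is purely illustrative:

```python
# Hypothetical thresholds (guesses, not Facebook policy): identity
# documentation is required once an advertiser crosses any one of them.
SINGLE_BUY_THRESHOLD = 1_000   # dollars in a single ad buy
CUMULATIVE_THRESHOLD = 5_000   # cumulative dollars spent
AD_COUNT_THRESHOLD = 10        # total number of ads placed

def requires_documentation(single_buy: float,
                           cumulative_spend: float,
                           ad_count: int) -> bool:
    """Return True if this advertiser must verify who they are."""
    return (single_buy >= SINGLE_BUY_THRESHOLD
            or cumulative_spend >= CUMULATIVE_THRESHOLD
            or ad_count >= AD_COUNT_THRESHOLD)

# A small advertiser stays friction-free...
print(requires_documentation(200, 800, 3))    # False
# ...while a heavy spender triggers verification.
print(requires_documentation(200, 6_500, 3))  # True
```

The design point is that friction is applied only past a threshold, preserving the self-serve model for the vast majority of small advertisers while raising costs for anyone operating at scale.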

Secondly, the most scalable counterweight to bad ads is massively increased transparency. Facebook took steps in this regard as well; from the same post:

Making advertising more transparent. We believe that when you see an ad, you should know who ran it and what other ads they’re running — which is why we show you the Page name for any ads that run in your feed. To provide even greater transparency for people and accountability for advertisers, we’re now building new tools that will allow you to see the other ads a Page is running as well — including ads that aren’t targeted to you directly. We hope that this will establish a new standard for our industry in ad transparency. We try to catch content that shouldn’t be on Facebook before it’s even posted — but because this is not always possible, we also take action when people report ads that violate our policies. We’re grateful to our community for this support, and hope that more transparency will mean more people can report inappropriate ads.

This will eliminate so-called Dark Ads which could only be seen by those targeted; again, though, Facebook didn’t go far enough. These ads can still only be seen by going to the actual pages, which are impossible to know about unless you are shown an ad; the company should have a central, searchable, repository of all those hundreds of millions of ads. Again, it is worth pointing out that this will hurt some small businesses (larger competitors can easily pick up on their marketing strategies), but the tradeoff when it comes to oversight of not just political ads but ads of all types is worth it.

What Facebook has to realize is that while both of these proposals are likely to hurt the bottom line — the first will increase friction in advertisers coming on board (or ramping up spend), while the second will have a commodification effect on ads — this scandal is, to use Gates’ words, “not only affect[ing] the quality of our platform and services overall, but also [their] customers’ view of [them] as a company.” This matters because Facebook’s biggest risk is government regulation, and that is ultimately a political question, where the opinion of the body politic matters greatly.

Filter Bubbles

All that said, it’s worth stepping back for a moment and putting this scandal in context. I gently mocked Gates for equating computer viruses with terrorist attacks, but the suggestion that $100,000 in Facebook ads — of which only 46% ran before the election — swung the presidential results is just as questionable. Frankly, if spending $100,000 on Facebook had that level of return, the company would be worth many multiples of the $492 billion it is today! It is concerning and frustrating to me as a citizen to see so many spend far more time prosecuting these ads at the expense of a broader reflection on the state of the country.

That includes Facebook, by the way. I actually tend to agree with Zuckerberg’s post-election comment — which he since apologized for — that it was “crazy” to think that ‘Fake News’ influenced the election; my view is that Fake News is a symptom of a far more serious problem: filter bubbles.

To that end, the Zuckerberg statement that truly concerned me was on the company’s Q2 2016 earnings call; this was a few months after the brouhaha over alleged bias in the Trending Topics module, and Zuckerberg was asked about the filter bubble problem:

So we have studied the effect that you’re talking about, and published the results of our research that show that Facebook is actually, and social media in general, are the most diverse forms of media that are out there. And basically what — the way to think about this is that, even if a lot of your friends come from the same kind of background or have the same political or religious beliefs, if you know a couple of hundred people, there’s a good chance that even maybe a small percent, maybe 5% or 10% or 15% of them will have different viewpoints, which means that their perspectives are now going to be shown in your News Feed. And if you compare that to traditional media where people will typically pick a newspaper or a TV station that they want to watch and just get 100% of the view from that, people are actually getting exposed to much more different kinds of content through social media than they would have otherwise or have been in the past. So it’s a good sounding theory, and I can get why people repeat it, but it’s not true. So I think that that’s something that if folks read the research that we put out there, then they’ll see that.

Actually, this is…questionable news (I can’t quite bring myself to use the obvious term). The Facebook-commissioned study Zuckerberg referenced had massive problems, including a non-representative sample and a proprietary data set that made the study impossible for outsiders to review or replicate; beyond that, the study’s results actually did support the idea of filter bubbles.

It’s rather a meta problem: I suspect Zuckerberg’s own bubble makes him inclined to dismiss the possibility of filter bubbles, while the bubble Facebook’s most strident critics live in means they too are focusing on the wrong thing. Certainly this is a conversation where everyone has something to lose; those scapegoating Facebook probably don’t want to think about their own responsibility, such as it may be, for an election result they disagree with, and the stakes are even higher for Facebook: giving people what they want to see is far more important to the company’s business model than $100,000 in illegal ads, unintended consequences or not.
