Facebook founder and CEO Mark Zuckerberg is testifying in front of Congress this week. To accompany the testimony, Select All is publishing transcripts of interviews with four ex-Facebook employees and one former investor, conducted as part of a wider project on the crisis within the tech industry that will be published later this week. These interviews include:

• Former Facebook manager Sandy Parakilas on privacy, addiction, and why Facebook must “dramatically” change its business model.

• Early Facebook investor Roger McNamee on Facebook propaganda, early warning signs, and why outrage is so addictive.

• Former Facebook designer Soleio Cuervo on Facebook’s commitment to users, what the media gets wrong, and why regulation is unnecessary.

• Former Zuckerberg speechwriter Kate Losse on how the Facebook founder thinks and what is hardest for him to wrap his mind around.

This interview is with Antonio Garcia Martinez, a product manager on the Facebook Ads team between 2011 and 2012. He is the author of Chaos Monkeys: Obscene Fortune and Random Failure in Silicon Valley.

There’s an assumption that what Facebook does for advertisers is hugely influential. But on the other hand, as Zuckerberg said right after the election, it was a pretty crazy idea that fake news could influence an election in a meaningful way. Where do you think that dissonance kind of came from? As somebody who worked on the ad side.

So you’re referring to his somewhat jaw-dropping initial post after the election on November 11th or 12th or whatever it was, right? The one where he basically dismissed the claim that Facebook could have impacted the election. I mean, that was kind of a crazy claim. I, too, was clearly astonished when I read it. Until literally a few days before, the entire ad sales team at Facebook was telling every politician with any budget that Facebook could actually hand them the election. It is incredibly disingenuous and strange for an exec to get up and say that there’s no way Facebook could have potentially impacted the election.

Two or three days later someone sat him down, and he very quickly backpedaled. That, I think, is the combination of a few factors. At the exec level, it might just be a function of the fact that he doesn’t know too much about ads, and when I say that it sounds horrible, but I don’t mean it in a necessarily critical way. Zuck has famously never been very interested in either the money or the revenue side of the company. Obviously he understands that it’s a necessary evil, but it’s the sort of thing that he just outsources to Sheryl and whatever lieutenants are taking care of the ad system.

He’s a micromanager, and I never once saw him in the ads area micromanaging anything. He just doesn’t care. I’m sure he knows, but not in a top-of-mind sort of way, that there’s a political ad sales force numbering in the hundreds, a huge D.C. office, and an entire effort to make politics more sensational. Look, it’s probably not something that he thought of when he sat down in a postelection moment of panic or emotion or whatever to write about, right? So I think some of it’s just that. Some of it’s … people always approach this with this kind of overweening techno-optimism, right?

They only ever see the positive side of the technologies they create. I mean, part of that is because they really, at heart, are just such optimists. They can’t imagine negative scenarios; they don’t have any sort of tragic history.

At what moment would you say that blindness actually became a really big problem? It feels like the election was just sort of the bubbling over of a lot of long-simmering kinds of issues in this realm.

I do think the election was, certainly in the case of Facebook, a key inflection point, not just in how the public perceives them but how they perceive themselves, right? The biggest sign that things are seriously amiss, or that there’s real turmoil inside Facebook, is the number of leakers that journalists have managed to find. Historically, Facebook was like the most impenetrable company ever. Nobody would ever leak or talk bad about it, and now they are, and a lot of this dirty laundry and palace intrigue, all that shit, is leaking out now in a way that it wouldn’t have in the past. I think the election definitely fractured that internal mission focus and cohesion that Facebook has traditionally enjoyed. I mean, to your broader question of when was there a transition point? I don’t know, I think it certainly fits the narrative to say, “Aha, the key moment in the movie, when everything changed …”

I think a lot of the save-the-world stuff comes from the origins of the Valley. If you go back to the Seventies and the counterculture, the hippy flower children dropped out, and Silicon Valley was this alternative to mainstream, industrial life. Steve Jobs would never have gotten a job at IBM or a more conventional firm. He had to create this other thing, which was imbued with a lot of that hippy dippy whatever. After a while, the whole thing became more sharp elbowed. It wasn’t hippies showing up any more.

There was a lot more of the libertarian, screw-the-government ethos. That whole idea of move fast, break things, and damn the consequences, which by the way is a very powerful philosophy. I’m not completely dinging it; you kind of need that attitude to get things done. But, yeah, you do end up in situations like this one.

I think Silicon Valley has changed. It still flies under this marketing shell of “making the world a better place.” But under the covers it’s this almost sociopathic scene. Even me, when I had my shitty little start-up that got acquired, I was also in total asocial-personality-disorder mode, and I think that characterizes a lot of people in this world.

The other thing, though, and here I’ll put my Facebook cap back on, right: the response to all this is, Look, no matter what Facebook does, there’s going to be this loud, angry chorus of complaints, and that’s always been true. Literally anywhere there’s been an A/B split in the road of what Facebook could do, it always gets criticized, and the internal joke about it used to be “Oh, bring back the old Facebook.”

Of course, now, if you pulled the plug on Facebook, there would literally be riots in the streets. So in the back of Facebook’s mind, they know that they’re stepping on people’s toes. But in the end, people are happy to have the product, so why not step on toes?

What changes could have been made in the ad business and in the business models of internet advertising that could have averted some of this at least? Take Cambridge Analytica just as a recent example of this. I’m kind of curious about the ways in which the business model could have been reoriented to avoid some of this.

Look, I mean, advertising sucks, sure. But as the ad tech guys say, “We’re the people who pay for the internet.” It’s hard to imagine a different business model other than advertising for any consumer internet app that depends on network effects. I just can’t think of many examples of viable businesses of that nature that weren’t based on advertising. What else are you going to do? How else do you pay for this?

One proposal would be something like subscriptions. Is there an alternative that is meant for the public benefit, that you think makes sense? Is that something that resonates at all?

Normally I just discard the subscription proposal. Facebook’s actual average revenue per user in developed markets like the U.S. is pretty damn high, so a subscription would be on the order of a Netflix subscription, or potentially more. Even though people might derive that much value from it, it’s unlikely that they’re just going to fork out a couple hundred bucks a year for Facebook.

Maybe in mature markets, where everyone who is going to become a Facebook user already is one, maybe there you could do it, if you combined it with some premium features. Like, I post a lot on Facebook and I have a blue checkmark and I guess I’m a power user, whatever. Posting on Facebook is at least as much about pumping my own personal brand as it is keeping in touch with friends. Would I pay 23 bucks a month for really nice advanced features and a better UI, and posts that went out at a certain time, and better analytics? Yeah, maybe I would. It’s not crazy.

The other question is what do you charge, right? How does the pricing work? Part of the point of advertising is that it’s a price-discovery mechanism. You just don’t know what a user’s time is worth until you actually subject it to an ad-auction model. So, I mean, how’s it going to work? The reality is, it could end up being the developed world subsidizing Facebook for the developing world. Which is already the case, I guess, in the sense that the amount of money that Facebook makes in ads sort of pays for Facebook in India. India doesn’t pay for itself, frankly. The U.S. and Europe pay for Facebook, and then the marginal cost is relatively small, so they give it away in Brazil or India, or wherever, right?
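The price-discovery point he’s making can be illustrated with a toy second-price auction, the basic mechanic that ad systems use to find out what an impression is worth. This is an illustrative sketch only; the bidder names and dollar amounts are invented, and real ad auctions are far more complex.

```python
def second_price_auction(bids):
    """Toy second-price (Vickrey) auction: the highest bidder wins
    but pays the runner-up's bid. Nobody has to know the "true" value
    of the impression up front; the auction discovers it.

    bids: dict mapping advertiser name -> bid in dollars.
    Returns (winner, price_paid).
    """
    if len(bids) < 2:
        raise ValueError("price discovery needs at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price_paid = ranked[1][1]  # runner-up's bid sets the price
    return winner, price_paid

# Hypothetical bids for a single impression:
winner, price = second_price_auction(
    {"acme": 2.50, "globex": 1.80, "initech": 0.40}
)
# "acme" wins but pays 1.80, the runner-up's willingness to pay.
```

The same impression shown to a different set of bidders would clear at a different price, which is exactly why, as he says, you can’t know what a user’s time is worth without running the auction.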

You get into these pricing problems because there’s going to be an old lady in Arkansas who’s only willing to pay 50 bucks for Facebook, and then there’s me; I’d probably pay $1,000 a year for it, for personal-branding reasons. But, I mean, how do you distinguish those two?

Do you think that there’s a way in which the process and the tools and the structure of the industry can be reformed, and you can say, “Actually, there’s a way for advertising to exist in a modestly healthy way on the internet?”

I don’t know if advertising really corrupts. I’m old enough to remember the early days of the internet, the rise of mass-scale consumer internet. I remember using Pine — terminal-based email — in grad school and college. The original internet was built by super geeky engineers who didn’t fully understand the commercial implications of it. What would now smack of a government or socialist attitude towards things — “Oh, we’re going to have an RFP, a request for proposals, for some protocol that lets you log into a machine, called telnet, with zero security.” Or, “This thing called FTP that lets you download a file from a remote server,” or, “We’ve got this thing called email, and we’ve come up with a protocol so that different apps can actually manage to talk to each other. We’ve got a thing called SMTP,” right?

But then, somewhere in there, and it would take a better tech historian than me to sketch out exactly how this happened, we went to a world where one particular company controls everything. There’s no notion of an open protocol, or an open standard that companies build upon, right?

Imagine if you somehow went back in time and tried to create or architect a social network in the ’90s or the ’80s. What would it look like? You’d have an RFP that would stipulate your own user-controlled social-media file, which would basically be the data that you could pull off of Facebook’s platform, for example, right? That’s something that I would own, and it would almost be like email. Like, I’ve got my data and I can shop it around to whatever social network I want. If I want to join Cat Net, which is a bunch of people who are into cats, and everything’s cat-themed, I can do that. With relatively little friction, I can then join a more mainstream social network, like, say, some early Facebook. It would just be a fundamentally different thing.

Another example. If email were being invented now and Facebook … well, I mean, that’s effectively what Messenger is, right? If email were being invented now and somehow Mark Zuckerberg had concocted it, it would be a completely vertically integrated, proprietary thing that nobody could build on. Nobody who’s a Facebook user could send a message to some other network, and you’d just be stuck. Then the incumbents would duke it out for mind share and it would be this ruthless, winner-take-all battle. Email now is a standard. A guy who’s on Gmail, on Google’s thing, can actually email somebody who’s on Microsoft’s thing.
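The interoperability he’s describing rests on open standards: any vendor’s client can serialize a message and any other vendor’s client can parse it, because both follow the same shared message format. A minimal sketch of that round trip using only Python’s standard library (the addresses are made up):

```python
from email.message import EmailMessage
from email.parser import Parser

# Compose a message the way any standards-compliant client would.
msg = EmailMessage()
msg["From"] = "alice@gmail.example"    # hypothetical Gmail user
msg["To"] = "bob@outlook.example"      # hypothetical Microsoft user
msg["Subject"] = "Open standards at work"
msg.set_content("My client's output is your client's input.")

# Serialize to the wire format, then parse it back with a *different*
# stdlib parser, standing in for a different vendor's software.
wire = msg.as_string()
parsed = Parser().parsestr(wire)
# parsed["Subject"] == "Open standards at work"
```

The round trip works because both sides agree on the format, not because they share any code or any company. That shared-format layer is exactly what he’s saying a Zuckerberg-era messaging product would never expose.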

We went from open standards, designed by super-geeky engineers — and then maybe you can commercialize it somehow — to “No, no, no, it’s commercial, and owned and operated proprietary first. We’re going to ship a suite of tools and products and either you’re with us or you’re against us, and we’re just going to battle it out, and that’s it.”

I think, if anything, that attitude switch was probably a much bigger deal than the corrupting influence of advertising.

I want to move to some of your recent writing and analysis on Facebook’s platform and targeting. It seems like they built these tools and were totally caught unawares of how they could be abused. How is that possible?

In the Cambridge story, the abuse was really on the platform side, it wasn’t on the ad side. I mean, they used ill-gotten data, but the ad system worked as designed.

I had a hand in designing some of the targeting stuff, and in the early versions of Custom Audiences. At the time, I wouldn’t have imagined the political implications, not because I was necessarily naïve; I just didn’t know much about politics in general. Frankly, political advertising in 2012 was not zero, but it wasn’t a huge amount of spend, so it wasn’t something a product manager would have spent a lot of time thinking about.

I guess it’s funny. Everyone focuses on ads. I’ve always wondered why ads are so tainted. I think it’s because money’s involved, and money always gives it this sort of evil taint. In my mind, the scary parts of Facebook are the fake news, the filter bubble, the online tribes that don’t speak to each other, the political polarization. The organic side, to me, is scarier than the ad side.

Why’s the organic side so scary?

Well, because what do you do about it, right? I mean, at the end of the day, Facebook is tyrannical about its ads. If you fuck it up once, it’ll just kick you off. There’s no freedom of speech, there’s no anything. It’s like, “You’re here. You give us money, otherwise fuck off.” That’s Facebook’s attitude toward its advertisers.

It’s very different when you go back into feeds. Everyone uses feeds. There’s free-speech concerns, there’s political concerns, there’s “What is hate speech?” concerns. There’s just so many contending factors and stakeholders there. Humans need narratives to survive and live, and the new source of narratives is Facebook.

This is where they just wade into the whole cesspit of human psychology. I mean, the reality is that Facebook is cognitive dissonance at scale. Cognitive dissonance is the discomfort you feel when your worldview gets contradicted in some concrete way. You’ve got a worldview, you’re presented with evidence that it’s wrong, but you don’t lose your belief; you actually dig in your heels and believe it even more when you see contrary evidence. It’s this weird, knee-jerk reaction the human mind has.

So Facebook, I wouldn’t say it exploits it, like no one’s sitting there on Facebook going, “Ha, ha, ha, ha. Cognitive dissonance.” But the algorithm, by default, is designed to placate you by shielding you from the things you don’t want to hear about. That, to me, is the scary part, because there’s no changing human psychology on a timescale that’s relevant to us. The real problem is not Facebook; it’s humans.

Well, in a sense, though, there’s also the problem of Facebook attempting to answer a lot of these questions through the use of algorithms by weighting certain things above or below others. Do you think that there’s a possibility that there’s a future in which we either rely on the algorithm to do that kind of work less, or that we have algorithms that are much more strictly regulated? What’s the next step for the algorithm in this, if the algorithm is what’s doing so much of that internal regulation?

There’s this notion of the algorithmic path. Way before Facebook, going back to Google, they’ve always claimed, “Look, we’re just intermediaries. The algorithm optimizes for a metric, whether it’s engagement, clicks, or whatever. We’re not responsible for what you see. At the end of the day, it’s you, the users, through your actions, that ultimately define what you see. It’s not really us.”

I think we’re reaching a point where people are unwilling to write them that blank algorithmic check. Hopefully, I’m not using too many metaphors there. But you see what I mean, right?

The real issue is that people don’t assign moral agency to logic and algorithms. When shit goes sideways, you want someone to fucking shake a finger at and scream at. But Facebook just says, “Don’t look at us. Look at this pile of code.” Somehow, the human sense of justice isn’t placated.

To reframe it in sort of more traditional terms, it’s as if GM makes a bunch of cars and there’s a faulty brake pad and somebody dies, and then the CEO says, “Don’t look at me. Look at this switch on the assembly line that didn’t work in the right way.” Silicon Valley had a traditionally effective bulwark against this — blame the algorithm — and increasingly you see the curtains being lifted up and the wizard is revealed to be Mark Zuckerberg and a group of executives who don’t necessarily have a great handle on how that works.

Again, channeling Zuck — he wouldn’t say this in public, but what he’s thinking, or someone like him would think, is, “Well, again, the algorithm just reflects our own prejudices.”

The other way of looking at it is that historically, we had editorship for a few reasons. One, it was an appeal to authority. “These people just understand these political issues, whatever, better.” The other is — maybe not so much in the U.S., but certainly in Europe — that the editor should edify us, right? Sure, more people would sit there and gape at a photo of Kim Kardashian’s big ass on Instagram, or whatever, but you should be feeding us a story about, I don’t know, the situation in Yemen. Basically, editors are there to tell you to eat your vegetables. Facebook is kind of like, “Well, the algorithm says to feed you just endless sugar and fat, right, and that’s what we’re going to do.” These people have just abdicated any sort of responsibility toward informing or educating the public. Somehow, we have subconsciously accepted that.

Does Facebook just have to hire tons and tons of people who are going to eat into its bottom line on payroll because they need to have more people who are effectively making editorial decisions on behalf of the public and working to make, at the very least, a stable, civil, online ecosystem?

Sure, but imagine Facebook goes ahead and hires every J-school grad from every journalism school in the United States and creates some sort of editorial team. Think about that for a second. Do you really want Zuck, and by Zuck I don’t mean a bunch of code that he wrote, I mean Zuck himself and the humans he appointed, to decide what you read every day? Can you imagine that editorial meeting, with Zuck presiding over it?

Does it have to be internal? Does it have to be Zuck? In a different era the FCC created broadcast standards requiring a certain amount of a certain kind of programming for the purpose of the edification of the public. There was a time in which the government took on some of that role. Maybe there’s a degree to which other people can do this work. Does that make sense to you?

It’s funny, I was reading up on the BBC, and under their charter — because, of course, they’re a government corporation — they actually have hard bounds on how much cultural versus entertainment programming they have, and so they have exactly the regulations that you describe.

I mean, on a practical level, I think it’s too far outside the Overton window in the U.S., but, yeah, I guess you could. But then, the BBC gets subsidized by government tax money ’cause they can’t make enough money, or probably couldn’t make enough money, given how it’s configured.

So you’d have an edifying, BBC-style Facebook with a lot of cultural content and historical stuff, and none of the clickbait, but it’s like the postal service: it needs to be subsidized to get by. It wouldn’t exist as we know it today, which maybe you would find much better, but the world would look nothing like it does now.

Is there anything that we haven’t talked about that you think is a critical concern here?

One thing that I think is annoying is the lack of moral courage in Silicon Valley. No one takes a stand on anything ’cause the opportunity costs can be so great, given the winner-take-all nature of it, right? Missing out on being employee five at the next Uber is like a life-defining event and no one wants to fuck that up, right?

Some of the early Facebookers do hate me, just because their whole identity in life is so wrapped up with having been part of the crazy Facebook adventure in the early days; any hint of criticism is just outrageous to them. I think that’s also part of it. People in Silicon Valley are so attached to their companies, right? The way that people elsewhere feel about religion, or region, or community, or family, people here feel about the companies they work for, which is so strange, and is the ultimate triumph of capitalism.

The ultimate test of that, to me, is going to be what happens when these companies start building their mini-villages, as Facebook is proposing, and Google, and others, because of the housing crisis in the Bay Area — essentially creating these little fiefdoms. I’m super curious to see: is there another supply of 23-year-old CS graduates who are willing to jump right into that? Do you think that, with everything that’s going on now, the people who actually want to go work in Silicon Valley won’t be the same equal-parts-idealistic-and-talented cream of the crop that these companies have previously been able to tap into?

We have seen industries go from being cool to being uncool; consider Wall Street and finance. After the credit crunch, the whole view there changed. Tech, in many ways, reaped the harvest of that. The first thought for the American cognitive elite coming out of Harvard or Princeton is no longer McKinsey or Goldman; it’s Google or Facebook. That could change.

I remember reading that some Facebook employees felt uncomfortable when they went back home over Thanksgiving, after the election. They suddenly had all these hard questions from their mothers, who were saying, “What the hell? What is Facebook doing?” Suddenly they were facing flak for what they thought was a super-cool job.

Look, as long as the stock market is up, valuations are up, VCs are still there, the party’s not going to stop. The punch bowl is still out. I think the party really stops when asset values go down, not when people have moral crises of confidence, to be honest.

This interview has been edited and condensed for length and clarity.