Let’s stipulate something: The debate over the Google diversity memo is not about the memo itself.

The memo — if the document is even official enough to be called a memo — is an attack on Google’s diversity programs posted on an internal message board by a then-midlevel engineer. It does not mount a persuasive argument for its thesis. It was not written by someone with significant power at Google. It is not part of a debate the company is trying to hold, nor is it being considered as part of an effort to revamp Google’s hiring policies. Its author was swiftly fired. If this same document had been written at McDonald’s, or Staples, or Safeway, with the same outcome for the author, few would care. So why has this story caught such fire?

Behind the furor over the memo is our unease with the unaccountable, opaque power Google in particular, and Silicon Valley in general, wields over our lives. If Google — and the tech world more generally — is sexist, or in the grips of a totalitarian cult of political correctness, or a secret hotbed of alt-right reactionaries, the consequences would be profound.

Other industries offer more choices and less-necessary products. If you don’t like McDonald’s, go to Wendy’s. But Google wields a monopoly over search, one of the central technologies of our age, and, alongside Facebook, dominates the internet advertising market, making it a powerful driver of both consumer opinion and the media landscape. It owns the world’s most popular smartphone operating system and is muscling its way into everything from restaurant reviews to driverless cars to artificial intelligence. It shapes the world in which we live in ways both obvious and opaque.

Compounding the problem is that the tech industry’s point of view is embedded deep in the product, not announced on the packaging. Its biases are quietly built into algorithms, reflected in platform rules, expressed in code few of us can understand and fewer of us will ever read. And yet those hidden commands and unexamined choices can lead to discrimination in housing and jobs, to a public sphere that fosters continual harassment of women and people of color, to a world where conservative news is suppressed, to a digital commons that everyone must use but that only a certain kind of person gets to build.

This is why trust matters so much in tech. It’s why Google, to attain its current status in society, had to promise, again and again, that it wouldn’t be evil. But what if it actually is evil? Or what if it’s not evil but just immature, unreflective, and uncompassionate? And what if that’s the culture that designs the digital services the rest of us have to use?

What the Google memo said

I was surprised when I read through the core document just how thin it actually is.

The author, James Damore, wrote it on a long plane flight and posted it to an internal company message board (which is part of why I question whether “memo” is really the right descriptor here). It’s titled “Google’s Ideological Echo Chamber: How bias clouds our thinking about diversity and inclusion,” and it tries to cover a lot of ground very quickly: biases in Google’s political culture, differences between men’s and women’s aptitude for and interest in engineering and executive-class jobs, the promise and perils of diversity programs, the differing moral foundations of left- and right-wing political views, and so on.

Damore’s basic argument is that Google is too politically correct to admit that the heavy male lean of its engineering staff reflects fundamental differences between men and women — in particular, that women are more people-oriented while men are more thing-oriented, and that women are more anxious, less status-obsessed, and more desirous of work-life balance. “We need to stop assuming that gender gaps imply sexism,” he says.

This part of Damore’s argument has attracted most of the attention, but there’s very little there: The entire section is under 400 words, and it doesn’t offer analysis of any of this research, examine counterarguments, consider industry or job structure, or really engage with the debate it’s part of in any serious way at all.

If you’re interested, for example, in how Damore came to make his assertions, or why he thinks they prove what he says they prove, you won’t find a satisfying answer (or any answer). Reading the memo, you might wonder why a tendency toward higher anxiety or a preference for a more humane work-life balance would keep women out of coding but not out of law or medicine, but you won’t even find a discussion of the question.

Nor will you find a discussion of any of the other research on sexism in Silicon Valley, or in science and math education. But perhaps that’s because Damore’s point isn’t so much to settle the debate over gender and tech as it is to push back on Google’s diversity programs, which he alleges are potentially illegal and definitely harmful:

To achieve a more equal gender and race representation, Google has created several discriminatory practices:

• Programs, mentoring, and classes only for people with a certain gender or race

• A high priority queue and special treatment for “diversity” candidates

• Hiring practices which can effectively lower the bar for “diversity” candidates by decreasing the false negative rate

• Reconsidering any set of people if it’s not “diverse” enough, but not showing that same scrutiny in the reverse direction (clear confirmation bias)

• Setting org level OKRs for increased representation which can incentivize illegal discrimination

The final argument Damore makes is that this isn’t just wrong, but actively bad for the company. “We’re told by senior leadership that what we’re doing is both the morally and economically correct thing to do, but without evidence this is just veiled left ideology that can irreparably harm Google,” he writes. He says that Google’s diversity push “need[s] principled reasons for why it helps Google; that is, we should be optimizing for Google,” not for leftist ideology.

Given the notoriety Damore’s writing has achieved, it’s striking how poorly the whole thing holds together.

As Josh Barro writes, even if you grant Damore’s various premises — Google leans left, leftists and rightists have different moral foundations, there are population-level differences between men and women, etc. — you would likely end up in the opposite place he does:

If it is true that aggregate population differences mean that a majority of the suitable candidates in a field are men, that can make it more important for firms in that field to undertake aggressive efforts to recruit and retain women. Otherwise, firms may end up with an employee base of which only a small minority is women, even when women make up a larger minority of the suitable candidates.

So here’s where we are: An obscure Googler made a not particularly persuasive argument, which he himself admits is completely contrary to his company’s hiring and diversity policies, on an internal company message board. The company disavowed the memo and ultimately fired its author. So why do we care?

We were right all along

Damore’s memo is a wonderful document in that it has united observers of all political persuasions in the comforting knowledge that they were right all along.

If you believed that Silicon Valley’s gender disparities are caused, in part, by a hostile culture filled with alt-right nerds who think women are biologically less capable of being computer programmers than men, Damore’s memo showed you were right all along.

If you believed that technology companies like Google are intolerant of alt-right views, and are so committed to a pro-diversity agenda that they won’t even permit discussion of whether modern workplaces are ignoring essential differences between the genders, Damore’s termination proved you were right all along.

If you believed the left’s commitment to workplace speech protections is a ruse that would be abandoned as soon as that speech tilted right and the employee being protected was a conservative, the reaction to Google’s firing of Damore proved you were right all along.

If you believed the right’s commitment to at-will employment, belief in the sanctity of private contracts, and opposition to safe spaces was a ruse that would be abandoned as soon as it was politically convenient, their reaction to the reaction to Google’s firing of Damore proved you were right all along.

All of these groups really were at least partially right all along, and the things they are right about are consequential.

Silicon Valley really is filled with engineers who believe women are, as a group, innately less capable of, and less interested in, computer programming, and who use that belief to justify the outcomes of a culture that’s often hostile to women (see: Uber).

Major tech companies really have committed themselves to pro-diversity rhetoric and efforts that leave little space for those who think diversity isn’t an achievable or even desirable goal.

The left really does think certain kinds of speech should be discouraged in service of creating a more inclusive country.

The right really does demand the protections and safe spaces it so often mocks the left for favoring.

What has made this a national story, though, is that the organization everyone is now sure they were right about all along is Google, and perhaps the entire technology industry.

All this matters because tech matters, and its motivations are hidden

In 2015, the brilliant technology researcher danah boyd gave a speech titled “What world are we building?” In it, she traces her early love of the internet, the haven it provided for “a geeky, queer kid” from small-town Pennsylvania, the utopian hopes early adopters had for the way it could crush social and cultural hierarchies. And then she looks at how those dreams came crashing down.

What is easy to forget when looking at a computer program or an app or an algorithm, boyd says, is that “technology is made by people. In a society. And it has a tendency to mirror and magnify the issues that affect everyday life.”

In his remarkable report on how artificial intelligence learns to be racist, Brian Resnick describes a recent paper published in the journal Science that shows how as “a computer teaches itself English, it becomes prejudiced against black Americans and women”:

They used a common machine learning program to crawl through the internet, look at 840 billion words, and teach itself the definitions of those words. The program accomplishes this by looking for how often certain words appear in the same sentence. Take the word “bottle.” The computer begins to understand what the word means by noticing it occurs more frequently alongside the word “container,” and also near words that connote liquids like “water” or “milk.”

This idea to teach robots English actually comes from cognitive science and its understanding of how children learn language. How frequently two words appear together is the first clue we get to deciphering their meaning.

Once the computer amassed its vocabulary, Caliskan ran it through a version of the implicit association test. ... She found that African-American names in the program were less associated with the word “pleasant” than white names. And female names were more associated with words relating to family than male names. ...

Like a child, a computer builds its vocabulary through how often terms appear together. On the internet, African-American names are more likely to be surrounded by words that connote unpleasantness. That’s not because African Americans are unpleasant. It’s because people on the internet say awful things. And it leaves an impression on our young AI.
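The co-occurrence idea in the passage above can be sketched in a few lines of Python. Everything here is invented for illustration: the six toy sentences stand in for the 840-billion-word crawl, and raw pair counts stand in for the study’s actual word-embedding pipeline.

```python
from collections import Counter
from itertools import combinations

# Toy corpus standing in for the web-scale crawl (purely illustrative).
sentences = [
    "the bottle is a container",
    "the bottle holds water",
    "the bottle holds milk",
    "emily had a pleasant day",
    "emily is pleasant company",
    "lakisha had a hard day",
]

# Count how often each pair of words appears in the same sentence,
# the "first clue to meaning" the quoted passage describes.
pair_counts = Counter()
for s in sentences:
    words = sorted(set(s.split()))
    for a, b in combinations(words, 2):
        pair_counts[(a, b)] += 1

def cooccurrence(w1, w2):
    """How often w1 and w2 share a sentence in the corpus."""
    return pair_counts[tuple(sorted((w1, w2)))]

# "bottle" ends up associated with "container" and liquid words...
print(cooccurrence("bottle", "container"))  # 1
print(cooccurrence("bottle", "water"))      # 1

# ...and a name that happens to share sentences with "pleasant"
# inherits the association, while a name that never does gets none.
print(cooccurrence("emily", "pleasant"))    # 2
print(cooccurrence("lakisha", "pleasant"))  # 0
```

Even at this toy scale the mechanism is visible: the model has no opinion of its own, only the corpus’s. Scale the corpus up to the whole internet and you get the skew the study measured.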

This tendency can, of course, be caught. The AI’s algorithm can be reworked, modified, examined. But none of that is going to happen if the people building the AI aren’t themselves conscious of the racism laced throughout American society. It’s certainly not going to happen if the people building the AI don’t believe there is racism in American society, and if they tell themselves that the outcomes we see around us are simply biology at work. (You can also reverse this argument to see the alt-right perspective: If the AI is being forced to ignore important information that offends the PC police, then the world it builds will be dangerously skewed.)

Either way, if the AI’s conclusions aren’t caught at the development stage, it’s unlikely they’ll be caught later on, when the machinery is being used to spit out credit scores and make predictive hypotheses about consumer behavior or which job applicant a company should interview and hire. At that point, none of the people using the machine know how the algorithm works, much less how to modify it; they just know that the unbiased, objective digital hive mind is giving them this answer and they had better listen to it.

Nor do you need to reach all the way into our AI future to find unconscious, pervasive bias affecting seemingly neutral, technological processes. In his dive into how the internet enables discrimination, Alvin Chang describes an experiment that shows how the targeted ad markets that form the core of Google’s business entrench gender stereotypes and occupational inequality:

[The researchers] decided to test the Google Ad ecosystem, which uses information it gathers about you to show personalized ads. The researchers wanted to test a simple question: How do Google's assumptions about us affect the ads we see? The only way to find this out was to simulate the experiences of 500 internet users using a computer program. For each of these experiences, the researchers found ways to tell Google about themselves. For example, there was a toggle in the settings menu to set each user’s gender — so that let them test how men and women were treated differently when looking for a job. From there, they visited two news websites and observed what kinds of ads Google showed them. What they found was a clear discrimination based on gender: The male users were much more likely than women to see ads for executive-level career coaching.
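The experimental design in the excerpt, simulated user profiles that differ only in the gender setting, can be mimicked with a toy Monte Carlo sketch. The ad-serving probabilities below are invented for illustration and are not the study’s data; the real researchers instrumented live browser sessions rather than sampling from known rates.

```python
import random

random.seed(0)  # deterministic toy run

# Invented serving rates for the executive-coaching ad; the real study
# measured these from live ad auctions, it did not assume them.
AD_RATE = {"male": 0.30, "female": 0.05}

def simulate_visit(gender):
    """One simulated profile visits a news site; True if the ad is shown."""
    return random.random() < AD_RATE[gender]

def run_experiment(n_users=250):
    """Tally ad impressions for n_users simulated profiles of each gender."""
    return {g: sum(simulate_visit(g) for _ in range(n_users))
            for g in ("male", "female")}

counts = run_experiment()
print(counts)

# Since the simulated behavior is identical, the only variable is the
# gender setting, so the gap in the tallies is the quantity of interest.
print(counts["male"] - counts["female"])
```

That controlled design is what licenses the causal claim in the study: because the simulated users behaved identically, the gender toggle is the only thing that can explain the difference in the ads they were shown.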

It’s perhaps worth noting here that only 25 percent of Google’s leaders are women, and that a key argument of Damore’s memo is that a likelier explanation than bias or structural discrimination is that women are biologically less interested in high-status jobs and less capable of handling stress.

What you don’t know can hurt you

Many groups are afraid they are being left out when the technology that dominates our world is designed, and they are right to be — they are being left out. Google’s technical staff, for instance, is only 20 percent female and only 1 percent African-American.

But that leads to a secondary fear, which is that the people who are in the room don’t like you, don’t think much of you, and are going to consciously or unconsciously hurt you.

That’s what Damore’s memo said to women, and it’s been notable how many powerful women inside Silicon Valley have spoken up to say that they feel, and fear, that views like Damore’s are widespread. “It’s insidious and it’s all around the culture,” Megan Smith, a former vice president at Google, told Bloomberg. It’s “pervasive,” wrote Susan Wojcicki, the CEO of YouTube (which is owned by Google).

The reaction to the reaction to Damore spoke to the fear conservatives have that they are increasingly unwelcome in the upper echelons of Silicon Valley, and that their views will see them drummed out of power. The forced resignation of Mozilla CEO Brendan Eich over his support for an anti-same-sex marriage ballot initiative remains an open wound on the right.

The technology industry’s power is vast, and the way that power is expressed is opaque, so the only real assurance you can have that your interests and needs are being considered is to be in the room when the decisions are made and the code is written. But tech as an industry is unrepresentative of the people it serves and unaccountable in the way it serves them, and so there’s very little confidence among any group that the people in the room are the right ones.

So long as that’s true, any indication that the builders of tomorrow are quietly against you, which is what Damore’s memo was, will be explosive.