“He wrote this post, and then people were hurt. There appears to be no rational evaluation of how or why this happened. No consideration of other factors in play, and no meaningful allowance for any alternative explanations.”

Normally I’d feel like writing this on the blog would be gauche. Believe it or not, I am trying to avoid fueling the culture war when possible, but something very specific caught my eye, and I’ve been feeling the need to address it for a long time.

This Medium post was written about the Google Manifesto. Please read it. And then please read the following paragraph five times:

I am not staking out a position on the Google Manifesto. I am not staking out a position on this blog post. I am not making any comment whatsoever on anything object-level or meta-level concerning the political event or the blog post. What I am doing is taking an excerpt from this blog post that concerns me, and that I believe to be broadly representative of a social trend, and using this blog post as a convenient springboard to provide a context in which I can introduce my discussion. This is not a response to the Medium author. If at any point you wish to respond to anything I’ve said with “the Medium post didn’t say that”, then please don’t. I am not responding to the Medium post. I am responding to a broad trend that I perceive across certain elements of society.

Have you read the Medium article in full? Have you read the preceding paragraph five times? Then let’s begin.

For a while I have been considering a vague, half-formed thought that keeps floating around the edges of my ideas. To be blunt: when interacting with people who fall in the general direction of “Social Justice Politics,” their comments and actions are sufficiently baffling to me that I have a very hard time assuming good faith. Of course, assuming bad faith is a rather grim disposition, and if you find yourself assuming bad faith, you should consider this a warning light that you are missing some key detail. I’ve spent a lot, a lot of time trying to identify a key detail that fits into my mystery. I’ve considered some candidates, but they all inevitably strike me as “how can someone possibly believe that?”

A passage in this Medium post has put words to one of my more promising candidate key details. I still find myself somewhat at a loss for how anyone could believe this, but at least it feels right. Consequently, it should be noted explicitly that this blog post is epistemic status: exploratory. Every single statement that I make in this post should be interpreted as something like “is this true?” or “suppose this is true, what happens?”

The passage I’m interested in is the bolded part. The rest is quoted for a little bit of context within the article.

So it seems that someone has seen fit to publish an internal manifesto about gender and our “ideological echo chamber.” I think it’s important that we make a couple of points clear. (1) Despite speaking very authoritatively, the author does not appear to understand gender. (2) Perhaps more interestingly, the author does not appear to understand engineering. (3) And most seriously, the author does not appear to understand the consequences of what he wrote, either for others or himself.

The author does not think it important to address upfront whether or not the manifesto’s statements are true. However, the author does think it important to address upfront that the manifesto’s statements are dangerous. Further, he treats the danger as his biggest and most important concern.

He appears to believe that the side effects of saying words are strictly more important than the actual information content of these words.

My position is that, normatively, this is a bad idea. I have two main reasons. First, it plays into something I’ve been calling “active denial of causality.” In a nutshell, it supposes that if an action (in this case, a statement) correlates with any negative effects whatsoever, then the action is automatically bad and must be stopped by a point intervention that forcibly shifts society away from the bad thing. In this case: Manifesto Author wrote the manifesto (action), and then people were hurt (correlation), so we must do whatever it takes to punish him and stop him or others from writing things like this again (forcible shift).

The critical detail missing from that line of reasoning is the mechanism by which any of those things are related to each other. It is a post-hoc-ergo-propter-hoc fallacy: he wrote this post, and then people were hurt. There appears to be no rational evaluation of how or why this happened. No consideration of other factors in play, and no meaningful allowance for any alternative explanations. So, for example, it is assumed as an inviolable axiom that the manifesto author is the only person with agency. It is assumed that his action always necessarily causes the harms posited, and that the only way to stop them is to prevent his action. The idea that someone could possibly disagree with him and address him head-on without exploding into threats and insults is written off without so much as a thought.

This dogmatic, unthinking insistence on a direct cause-effect relationship, with no attempt made at understanding contexts or motivations, creates a bigger problem. If the causes are arbitrary and direct, then changes can be as well. There appears to be a sincere, genuine belief that all it takes to stop this from ever happening again is a well-timed banhammer. This completely denies any discussion of the underlying causes of why someone might write this. It completely denies any other underlying causes for why people reading it may be hurt by it. It completely denies that there will be any side-effect reaction to the implementation of this banhammer. It is an authoritarian high modernist fallacy, assuming that anything not legible doesn’t exist.

This is bad and dangerous, in general, because it will cause people to make mistakes. This kind of thinking leads to an attitude that underlying structures do not matter, and can be blown away without a second thought if they are inconveniently in the way. It is the philosophical equivalent of an architect who smashes down load-bearing walls without hesitation, purely because the room was the wrong size. This makes buildings collapse. Or, to use an analogy the left might like: “If we blow up terrorists, there will be no terrorists to blow us up” is an insane and ridiculous statement because, among other things, it assumes that blowing up terrorists will not radicalize any new terrorists.

The second reason is somewhat more mundane and practical, but even more important.

If, when somebody makes a statement of fact, our primary concern is whether or not those facts are true, we have a tractable problem. We can establish, for a certain subset of claims anyway, an objective and unchanging standard of truth. This allows all of us to coordinate around that standard, which facilitates communication and understanding.

In practice, one of our main standards of truth is a cluster of ideas broadly referred to as “the scientific method.” Essentially: an idea is proposed, data is collected, statistical methods are applied to the data to test the idea, and then a conclusion is written, tying the observed data to the proposed idea.

Observed data is not up for argument. It is not up for debate. You cannot deny it, unless your argument is “the person providing this data is lying.” Statistical conclusions are not quite as ironclad, but they still cannot be denied unless one has concerns about the underlying data or about statistical confounders.

In contrast, if one’s primary concern is not whether a statement is true but whether it is dangerous, things become much more subjective. If I measure something at 100kg, it is 100kg. It is 100kg regardless of who you are, where you are, what you are. However, if your assertion is “telling me that this thing is 100kg is dangerous,” that is more subjective. You can’t make the claim “this is universally dangerous,” only “this is dangerous to me.” Because different people will have different thresholds for danger, this makes it much harder to coordinate, communicate, and understand each other. It creates impossible-to-understand situations. It creates fundamental disagreements that ultimately cannot be resolved without resorting to violence, whether symbolic or physical.

This is important when one is creating systems that are supposed to stand as absolute, objective standards that we coordinate around. For example, if your rule is “anyone under 100kg can attend”, this is a rule everyone can easily understand. If your rule is “anyone who is not dangerous can attend”, what then? How do I know if I am dangerous? Do we use my standard of danger? Your standard? What if our standards are different? How am I supposed to know ahead of time whether or not I am allowed to attend?

In extreme cases, major ambiguities in social norms like this can be exploited by sociopathic bad actors. I don’t think I need to get into details on this; if you read our blog regularly, you can think of some examples.

If we adopt a social norm that the side effects of speech are more important than its truth value, another problem arises: we can no longer trust the truth value of anything by default, ever. If we know that, when someone’s priorities conflict with the truth, society sides with their priorities, then a necessary precondition for trusting someone is “knowing that their priorities are aligned with the truth.” However, as priorities are subjective things, we can never know this for sure, especially in a world in which people’s communication about their priorities is itself subject to the same truth/pragmatism tradeoff.

Once we can no longer trust the truth values of anything by default, we can no longer be confident in the correctness of our own reasoning or actions. At the extreme, this creates a miserable society, where nobody can trust one another, where everyone is disconnected from the fundamental constraints of reality, and consequently where everyone is constantly accidentally hurting themselves and each other with foolish mistakes and petty manipulations.

From speaking to many left-leaning people, I understand that many of them would at this point like to make an argument of the form “yes, yes, I understand that, but you’re seriously defending $REALLY_BAD_THING. Can’t you tell that $REALLY_BAD_THING is really bad, and that in this case the trade-off is not only worth it, but imperative?” The problem with this is that your perception of its badness is not objective. You think that this is the really bad thing that needs an exception. And that guy over there thinks a different thing is the really bad thing that needs an exception. He thinks that your thing is actually a really dangerous exception, and you think the same about his. And, because you have both thrown out the window any hope of an objective, absolute standard to measure your claims by, ultimately the only way to decide which of you two is correct is “might makes right,” aka “whoever actually pulls off the deceit wins.”

So, consider the following. Perhaps you sincerely and in good faith believe that, despite everything the Google Manifesto says being true (because if it weren’t, you could just say “this document is false” and be done with it), it will cause harm and danger to many people and so it should nonetheless be suppressed. Well, here’s a problem.

In 2003, the Bush administration knew damn well that there were no WMDs in Iraq. However, they really truly sincerely believed, according to whatever weird messed-up metric they were using, that not bringing war to Iraq would cause massive harm and danger to many people, and that they should therefore lie about there being WMDs in Iraq. In a world where truth is prioritized, they have to come to us and say, “Ok, so, we don’t have a slam-dunk argument for why we need to do this, but we still need to do this because of XYZ.” And then we can say “yeah, but if we do this, ABC,” and then we can have a proper discussion of the tradeoffs of XYZ vs ABC and, hopefully, come to a good path forward. But in a world where “but I FEEL that that would be bad” is considered the trump card, this discussion of whether or not we should glass the desert doesn’t happen, and men with guns go murder a fuckload of people, unless you are smart enough to realize that they are lying to you before they deploy a carrier group. I generally think that not murdering people is preferable to murdering people, especially when they didn’t threaten us with weapons of mass destruction.

I believe that SJWs, activists, etc., in general, act-as-if they don’t think truth matters. I believe that this concisely explains a lot of things that otherwise appear to be irrational or nonsensical. I also believe that this is a very dangerous attitude to hold, and I am genuinely afraid of our society becoming one in which this attitude is the normalized, default attitude.

Ultimately, a focus on truth is equivalent to saying “I trust you, fully informed with all the facts, to do the right thing.” And a focus on harm (or any subjective metric) is equivalent to saying “so-and-so has the right to impose their decision on you.” Of course, this probably sounds great when you’re the one doing the imposing. It’s just somewhat laughable that the people doing the imposing are also the ones claiming to be oppressed.