I help organize a weekly rationality meetup in New York; a couple of weeks ago we had the pleasure of hosting evolutionary psychologist and author Geoffrey Miller for a very wide-ranging discussion.

I’ve read two of Dr. Miller’s books so far and highly recommend both. The Mating Mind argues that many of humans’ most impressive achievements (art, humor, altruism, blogs) evolved to attract mates. Mate combines that scientific knowledge with Tucker Max’s real-world nightlife experience into a dating guide for men, one that covers most of the dating advice you’ve seen on Putanumonit and a thousand things besides. Aside from the two books, in preparation for the interview I did a deep dive into Miller’s archive of papers and snarky tweets – both are worth your while.

Instead of subjecting you to a podcast, my friend Brian and I turned the conversation into a 7,500-word transcript, which I’ll publish in three parts.

Our questions are in bold, Dr. Miller’s answers are in normal font and lightly edited for readability, my post hoc comments are [in brackets].

You are now teaching your first class on effective altruism, so I want to start with that topic. There are a few proposed explanations for the origin of altruism in humans: kin selection, expectations of reciprocity, and sexual selection. Donating to AI safety, malaria prevention, or animal welfare has nothing to do with reciprocity or helping relatives, so it must depend on your area of expertise – impressing mates. And yet, few things are less sexy than nerds talking about utilitarianism.

How do we make EA sexy and high status? Or would it require sacrificing our core values too much, and we should keep it to utilitarian nerds?

It’s a really good question. One extraordinary thing about modern humans is that we do anything beyond kin selection (taking care of our family) and reciprocity (trading favors). The idea of a social primate that cares about other members of the species on the other side of the world and does anything at all to help them is extraordinary.

In 2007 I wrote a paper called Sexual Selection for Moral Virtues that was trying to explore why we care about anything beyond members of our clan. The only evolutionary pressures I could think of were sexual selection to attract mates and social selection to attract friends and allies. Those are all signaling theories basically, you’re showing off moral virtues which are a confluence of agreeableness, kindness, empathy, and rationality. You show those by caring about other human beings or other species and sentient beings.

On the other hand, Effective Altruism only appeals to a tiny fraction of one percent of the population. Basically, high IQ aspie people who can think systematically about suffering. Very few people seem good at that. It seems very weird to people, in fact, it’s a turn-off. There’s psychological research coming out that says that if you adopt utilitarian views on moral dilemmas like the trolley problem, most people feel moral disgust towards you.

But, people mate so assortatively that you don’t have to impress everybody. You just have to impress people within your mating market. And Effective Altruism is, among other things, a fascinating mating market. And hopefully, it will grow.

But the strategic question is: how fast should it grow, and how big should the church be? The talk that I gave at EA Global a couple of years ago was about how high an IQ is needed to be an EA member, how broad you go with personality traits, and how broad you go in terms of political orientation. EAs tend to skew progressive and liberal, but they’re not social justice activists. Could you imagine Christian conservative Texan EAs? I’ve no solution to that.

I think for the moment it makes sense for EA to stay pretty small and elite, until it develops its principles, cause prioritization, infrastructure, and coherence as a movement.

In your book Mate, you and Tucker Max wrote that men should avoid all sugar and grains, but you’re also vegan. So what do you actually eat?

I’m a breatharian!

We wrote that book five years ago when both Tucker and I were on a Paleo kick. We hung out at the Paleo FX conference in Austin where between talks everyone was lifting kettlebells. If you set aside the ethical issues, Paleo would be a great diet for most people most of the time. For purely health reasons it’s excellent.

But then I met my current girlfriend, Diana Fleischman, who is a long-time vegan and veganism advocate, and she introduced me to that and to effective altruism. She made me read animal welfare books like Do Fish Feel Pain? by Victoria Braithwaite. And I thought “Oh shit, I can’t be Paleo anymore”.

And then we had a negotiation phase because I don’t feel well if I go full vegan. So, in her style, she said: “Let’s get quantitative about it. If you just give up chicken, then in terms of life years of subjective suffering by the animals per pound of meat you’re 85% vegan ethically. And also some farmed fish – you can eat tuna because they’re big, but not anchovies.” So I gave up chicken and small farmed fish, and then we had a discussion about beef. I think that pastured beef cows have a net-positive utility life. Of course, the last day is kind of bad.
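[Jacob: Diana’s “85% vegan” figure works by weighting each animal product by how much subjective animal suffering it costs per pound; small animals yield so little meat per life that chicken and small fish dominate the total. Here’s a toy sketch of that arithmetic, with numbers I made up for illustration – they are not her actual estimates:]

```python
# Toy suffering-per-pound model (illustrative numbers only, not Diana's data).
# The point: small animals provide few pounds of meat per life, so chicken
# and small fish dominate the suffering budget, and cutting just those two
# eliminates most of the total.
diet_lbs_per_year = {"chicken": 50, "small_fish": 10, "beef": 60, "pork": 30}
suffering_days_per_lb = {"chicken": 8, "small_fish": 15, "beef": 0.5, "pork": 2}

def total_suffering(diet):
    """Sum of suffering-days implied by a year's consumption."""
    return sum(lbs * suffering_days_per_lb[item] for item, lbs in diet.items())

baseline = total_suffering(diet_lbs_per_year)
after_cuts = total_suffering({k: v for k, v in diet_lbs_per_year.items()
                              if k not in ("chicken", "small_fish")})
pct_vegan = 1 - after_cuts / baseline
print(f"{pct_vegan:.0%} of the suffering budget eliminated")
```

With these made-up weights, dropping chicken and small fish removes roughly 86% of the total – the same shape of result as Diana’s 85% claim, though her actual inputs differ.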

The last day is kinda bad for humans as well. Is it ironic that Diana’s last name literally means “meat man” in German?

It’s hilarious, I know.

I also offset: I give $1,000 a year to Vegan Outreach. The logic there is that there are now a dozen vegans being ethical on my behalf, and I paid to convert them. That’s our compromise.

[Jacob: I’m not vegetarian, but I buy the argument about suffering-per-calorie and try to eat large animals that are humanely raised. Right now my desire to avoid causing chicken suffering is really conflicting with my desire to order Chick-fil-A after seeing this nonsense.]

There’s a stereotype of rationalists that we’re not in touch with our System 1, which I think is a strawman. A lot of the Center for Applied Rationality techniques are about listening to your System 1 and aligning all the parts of your mind together.

With that said, rationality researcher Keith Stanovich wrote: “[System 2] is more attuned to the person’s needs as a coherent organism than is System 1, which is more directly tuned to the ancient reproductive goals of the subpersonal replicators.” He basically says that System 1 does what’s good for our genes, and System 2 knows what’s good for us. So he says that when they clash we should let System 2 override our intuitions.

You think a lot about the goals of our subpersonal replicators. Do you agree with Stanovich on this?

I think it depends a lot on the domain. Most of my 30-year career has been in evolutionary psychology, studying the adaptive challenges and problems we faced in prehistory. And often, how those don’t match what we face in our modern life.

If you’re making financial investment decisions, prehistoric hunter-gatherers didn’t really do that at all. They did certain forms of it, but they had no concept of compound interest. So in that domain, you probably shouldn’t trust System 1 much at all. But if you’re doing mate choice you probably should trust System 1 pretty strongly, except insofar as you’ve been ideologically indoctrinated to have sexual values that are contrary to your biological interests.

It’s very domain-specific, and we should have a mental map of which domains System 1 works in, the domains in which we faced ancestral challenges that are analogous to modern challenges. That’s really useful.

We’ve been hearing a lot about signaling recently. Rationalists talk a lot about the pursuit of truth, but also about how everyone is signaling all the time. Do you see those as being fundamentally in tension?

I think it’s all signaling all the way down. The question is what traits are you signaling and how, and what are the social norms around that. I think that everyone here does a lot of intelligence-and-rationality signaling, showing how well your System 2 works. You’re also signaling open-mindedness. This is very different from standard SJW virtue signaling of caring about the oppressed.

Public culture decides which kinds of signaling are validated and which are discouraged. Thinking that rationality is opposed to signaling is a false dichotomy. It’s all signaling, you just choose what kind you value.

I think human cognition works well enough that your intelligence, openness, curiosity, and epistemic hygiene can allow you to discover true and novel ideas that will also get the approval of other people who value truth. That’s valid, and that’s great.

[Jacob: Putanumonit is basically just me signaling my intelligence, rationality, and openness, and hoping that some bits of truth fall out in the process.]

Robin Hanson talks a lot about our deep desire to signal group loyalty, and he’s saying that even people who pride themselves on being “non-conformists” are just playing to a different, narrower audience. According to the logic of sexual selection, should we expect women to be attracted to someone who boldly breaks convention or someone who signals loyalty to the woman’s tribe?

It depends on what convention you break. If you’re the real badass in your tribe and you have so much formidability that you can break social norms with impunity, that can be a way of signaling formidability. That might be a reason why adolescent girls are famously attracted to “bad boys” who don’t care what the teachers think because they’re the only ones who can get away with it.

At the intellectual level, signaling that you’re an apostate, or that you have heterodox views, or that you’re a contrarian is more of an IQ signal than a formidability signal. It can also signal the personality trait of openness, which is attractive to other highly open people. If you’re an eccentric who’s interested in other eccentrics, you would want to signal eccentricity rather than conformity.

I have a follow-up to that, about the Intellectual Dark Web. There are two narratives around the IDW. Their own is that they are speaking honest truths that other people are afraid to say because of political correctness, and that those people attack the IDW for the purpose of virtue signaling. The counter-narrative is that they’re not really thinking hard about the truth, just trying to piss off progressives.

Robin Hanson would say that once you see someone as your outgroup there’s a strong temptation to succumb to your identity biases and show your tribal loyalty by pissing off the “enemy”. Do you see the IDW having leftist media as an outgroup hurting their ability to pursue the truth?

I think some people who are non-PC clearly get a thrill out of pissing off Social Justice Warriors. And sometimes on Twitter, I do that, just because I fucking hate SJWs and they have given me a lot of grief over the years. Everyone in my field of evolutionary psychology has had their career handicapped in one way or another by the Leftist domination of academia, just in terms of getting grants, jobs, and publications in journals. So a lot of us love to fuck with those people to the extent that we can.

But for the better people in the Intellectual Dark Web, to the extent that it’s a thing at all, what they’re doing looks very tribal, but I don’t think it is. If you’re really after the truth about human nature or society, you tend to adopt certain views that are diametrically opposed to certain kinds of progressive ideology. There’s a coherence to progressive ideology that centers around the blank slate, egalitarianism over everything, and multiculturalism – it’s a package. As soon as you break away from that package you start questioning many elements of it, because it’s so interwoven.

There’s something I’m getting really worried about recently, I hope this will make sense.

In the early days of AI alignment research, people talked about Coherent Extrapolated Volition and figuring out the utility function that humans are (or would want to be) maximizing. But from an evolutionary perspective, this “utility function” just doesn’t exist. What we’re programmed to maximize is inclusive genetic fitness, but we’re not pursuing that explicitly.

All we have are desires and motivations that fall out of us executing our evolutionary adaptations given our current abilities, environment, etc. Nothing we care about is explicitly written down anywhere, including in our brains, and it’s all contingent on the particular physical and social context we’re in right now.

How worried does this make you about the prospect of friendly AI? Is it even coherent to imagine an AI with totally different abilities and environment having stable “human” values?

As a psychologist, I don’t think it’s very useful to talk about human motives, preferences, and decisions as being driven by some master utility function. You can describe certain kinds of behavior using utility functions if you’re an economist, but that’s not how human motivations actually work. It’s not how evolution would have designed things.

I just visited MIRI six months ago – they still work as if we can solve the problem of AI risk by giving the artificial general intelligence the right utility function, and it will forever afterward behave in a way that’s aligned with us. I worked a lot in machine learning 20-25 years ago, so I’m pretty familiar with that. I can’t imagine any way you can actually implement a master utility function in an AI that’s truly general. And I can’t imagine that you can lock in an all-purpose, safe, helpful utility function in a way that protects against mutation, drift, learning, and damage. I think it’s fun to think about, but I don’t think AI safety will make a lot of progress in that framework.

If MIRI-style research can’t result in a benevolent singleton or friendly AGI, what kind of research can be helpful? Or are we just screwed?

I wrote a paper a year ago about the risks of not just the search for extraterrestrial intelligence, SETI, but also METI, the effort to message aliens. There’s a guy in Russia who’s been using the Yevpatoria radio telescope to send megawatt messages to nearby candidates for extraterrestrial civilizations. In the first draft of the paper, I said that the UN should have the equivalent of SEAL Team Six to take out anybody who’s doing that. The potential risks of doing that are so high it’s insane. Nobody should be allowed to do that without total planetary consensus.

Part of me thinks that there should be a SEAL Team Six with respect to AGI safety, that if anybody gets close to AGI it should be shut down with extreme prejudice. We should not rely on some informal social norm like the Asilomar principles. That won’t work, because China won’t do that. And then the question is what happens when a Chinese group in 15 years gets close to AGI and their incentives are such that they don’t care about the Asilomar principles.

The problem is serious enough that we should have the same institutions we have to address nuclear proliferation, not just symposia at AI conferences encouraging everyone to be careful. I don’t think that’s enough.