“Human values (‘Elua’) mean hedonism and free love and namby-pamby happiness, and I’m not on board with that.” (example)

Are you a human? If so, congratulations. Your values are human values. As I wrote loooong ago in the Consequentialist FAQ:

Preference utilitarianism is completely on board with the idea that people want things other than raw animal pleasure. If what satisfies a certain monk is to deny himself worldly pleasures and pray to God, then the best state of the world is one in which that monk can keep on denying himself worldly pleasures and praying to God in the way most satisfying to himself. A person or society following preference utilitarianism will try to satisfy the wants and values of as many people as possible as completely as possible; thus the phrase “the greatest good for the greatest number”.

I grok the value of martial glory. My heart stirs as much as anyone else’s when Achilles goes forth in his god-forged armor, shouting boasts and daring the bravest champion of the Trojans to take him on.

But if some modern Achilles tried that today, he would be shot dead with a machine gun in about three seconds. Or bombed by a drone operated remotely from ten thousand miles away. Moloch has been far less kind to the older and grittier values than it has even to hedonism. The proponents of mysticism, art, martial glory, et cetera are on even weaker grounds than the hedonists. And the ground is only getting weaker.

Whatever your values are, the world being eaten by gray goo, paperclip maximizers, or Hansonian ems is unlikely to satisfy them. I think there’s room for a broad alliance among people of all value systems against this possibility.

And it is not just an alliance of convenience. I predict that human values, lifted to heaven by a human-friendly superintelligence, would end up looking something like the Archipelago – many places for people to pursue their own visions of the Good, watched over by a benevolent god who acts only to ensure universal freedom of movement. Indeed, given a superintelligence to magic away the problems – no inter-community invasion, no competition for (presumably unlimited) resources – it seems to me that a plurality of humankind would endorse this scenario over whatever other plans someone could dream up.

It is a minor sin to speculate on what could happen after the Singularity. I’m not saying it will be a world like this. This is something I thought up in ten minutes. It is a lower bound. Something thought up by a real superintelligence would be much, much better.

“Gnon represents the laws of physics and causality. You can’t conquer the laws of physics and causality.” (example)

Horace says: “He is either mad, or writing poetry”. If you encounter that dichotomy with me, please assume at least a 66% chance that I am writing poetry.

On a base level you can’t beat the laws of physics. On a metaphorical level, you can.

The laws of physics include gravity. For someone in 1500, the idea that you might be able to travel really far straight up seems like defying – even conquering – the laws of physics. But with sufficient knowledge, you can build rockets. We poetically speak about rockets “defying gravity”.

Rockets don’t literally defy gravity, but “defying gravity” is a pretty good shorthand for what they do. And of course they operate entirely within physics, but once rockets are good enough, it does seem as though, in some sense, a patch of physical law has been “conquered”.

We can never conquer Gnon in a literal sense. But we might be able to do something that looks very very much like conquering Gnon, in the same sense that making a very large metal object fall straight up until it reaches the moon looks very very much like conquering gravity.

Anyhow, the wrong thing to do would be to worship gravity as a god and venerate staying earthbound as a moral principle.

“If you really believed what you’re saying, you would realize [current progressive value] is just a result of Cthulhu, the blind marketplace of memes.” (example)

This gets into the old philosophical question of “why should we expect our beliefs to correspond to reality at all?”. It tends to be asked a lot by religious people, who mean it in a way like “I think the human mind was created by God to perceive reality, but if you think it was just the result of blind evolution, how do you know it has any truth-discerning value?”

To which the answer is that evolution selected for brains that were at least marginally competent. Brains that could distinguish “lion” from “non-lion” survived; those that couldn’t, didn’t.

There’s no such thing as a “fit animal”, only an animal that is fit for its environment. Likewise, there’s no such thing as a “virulent meme”, only a meme that is virulent to specific hosts.

We say “the human brain is designed to distinguish true and false ideas”, but another way to approach the same idea is “the human brain is designed to be an environment such that true memes survive and false memes die out.”

The overwhelming majority of our beliefs are true, and this should be obvious with a second’s thought. The sky is blue. I am sitting in Michigan right now. 2 + 2 is 4. I have ten fingers. And so on.

Morality is really complicated, but if we are to believe moral discussion can be productive even in principle, we have to believe that our brains are less than maximally perverse – that they have some ability to distinguish the moral from the immoral.

If our brains are built to accept true ideas about facts and morality, the default should be that many people believing something is positive evidence for its truth, or at least not negative evidence.

“This meme is virulent”, in the context of “this idea is widely believed” is not proof that the idea is false or destructive. Some memes can be both virulent and false/destructive – and indeed I think this is true of many of them, religion being only the most obvious case – but the burden of proof is on the person making that claim.

“All your human values are just the results of blind evolution and memetic drift – a Molochian process if ever there was one. Enshrining human values against the blind will of the universe would just be the triumph of one part of the universe’s blind idiocy over another.” (Spandrell here)

Yes, this is The Gift We Give To Tomorrow