A sequence on how to see through the disguises of answers, beliefs, and statements that don't actually answer, say, or mean anything.

Mysterious Answers to Mysterious Questions is probably the most important core sequence on Less Wrong. Its posts were published from 28 July 2007 to 11 September 2007.

Main sequence

Not every belief that we have is directly about sensory experience, but beliefs should pay rent in anticipations of experience. For example, if I believe that "gravitational acceleration is 9.8 m/s^2," then I should be able to predict where I'll see the second hand on my watch at the time I hear the crash of a bowling ball dropped off a building. On the other hand, if your postmodern English professor says that the famous writer Wulky is a "post-utopian", this may not actually mean anything. The moral is to ask "What experiences do I anticipate?" not "What statements do I believe?"
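The bowling-ball example can be made concrete with constant-acceleration kinematics. A minimal sketch, assuming a hypothetical building height (the height and the level of precision are illustrative, not from the post):

```python
import math

g = 9.8          # gravitational acceleration, in m/s^2
height = 45.0    # assumed building height in metres (hypothetical)

# Constant-acceleration free fall: h = (1/2) * g * t^2, so t = sqrt(2h / g).
fall_time = math.sqrt(2 * height / g)

print(f"Expect to hear the crash roughly {fall_time:.1f} s after the drop")
```

The point is that the belief cashes out as a definite anticipated experience: a number of seconds after which you expect to hear a crash, which could turn out to be wrong.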

Suppose someone claims to have a dragon in their garage, but as soon as you go to look, they say, "It's an invisible dragon!" The remarkable thing is that they know in advance exactly which experimental results they shall have to excuse, indicating that some part of their mind knows what's really going on. And yet they may honestly believe they believe there's a dragon in the garage. They may perhaps believe it is virtuous to believe there is a dragon in the garage, and believe themselves virtuous, even though they anticipate as if there is no dragon.

You can have some fun with people whose anticipations get out of sync with what they believe they believe. This post recounts a conversation in which a theist had to backpedal when he realized that, by drawing an empirical inference from his religion, he had opened up his religion to empirical disproof.

A woman on a panel enthusiastically declared her belief in a pagan creation myth, flaunting its most outrageously improbable elements. This seemed weirder than "belief in belief" (she didn't act like she needed validation) or "religious profession" (she didn't try to act like she took her religion seriously). So, what was she doing? She was cheering for paganism — cheering loudly by making ridiculous claims.

When you've stopped anticipating-as-if something, but still believe it is virtuous to believe it, this does not create the true fire of the child who really does believe. On the other hand, it is very easy for people to be passionate about group identification - sports teams, political sports teams - and this may account for the passion of beliefs worn as team-identification attire.

This post justifies the use of subjective probability estimates. Let's say you get paid to explain movements of the financial markets after the fact. You'd like to prepare your explanations for each way things could go in advance, and you can do your job better if you spend more time on preparing explanations for outcomes that are actually more likely. Being able to estimate probabilities could be useful even if you get paid to explain anything.

It was perfectly all right for Isaac Newton to explain just gravity, just the way things fall down - and how planets orbit the Sun, and how the Moon generates the tides - but not the role of money in human society or how the heart pumps blood. Sneering at narrowness is rather reminiscent of ancient Greeks who thought that going out and actually looking at things was manual labor, and manual labor was for slaves.

A hypothesis that forbids nothing permits everything, and thereby fails to constrain anticipation. Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.

Absence of proof is not proof of absence. But absence of evidence is always evidence of absence. According to the probability calculus, if P(H|E) > P(H) (observing E would be evidence for hypothesis H), then P(H|~E) < P(H) (absence of E is evidence against H). The absence of evidence may be a strong indicator or a weak indicator that the hypothesis is false, but it's always an indicator.
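The probability-calculus claim above can be checked numerically. A minimal sketch with made-up likelihoods (the prior of 0.5 and the likelihoods 0.8 / 0.3 are illustrative assumptions, not from the post):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) via Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.5
p_e_given_h, p_e_given_not_h = 0.8, 0.3   # assumed likelihoods

# Observing E raises the probability of H ...
p_h_given_e = posterior(prior, p_e_given_h, p_e_given_not_h)

# ... so *not* observing E must lower it: condition on ~E instead.
p_h_given_not_e = posterior(prior, 1 - p_e_given_h, 1 - p_e_given_not_h)

print(p_h_given_e, p_h_given_not_e)  # ~0.73 vs ~0.22, straddling the 0.5 prior
```

Whenever P(E|H) > P(E|~H), the posterior moves up on E and down on ~E; only the magnitudes of the two shifts differ.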

If you are about to make an observation, then the expected value of your posterior probability must equal your current prior probability. On average, you must expect to be exactly as confident as when you started out. If you are a true Bayesian, you cannot seek evidence to confirm your theory, because you do not expect any evidence to do that. You can only seek evidence to test your theory.
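This conservation law follows directly from the law of total probability: P(H) = P(E) P(H|E) + P(~E) P(H|~E). A minimal sketch with assumed numbers (the prior and likelihoods are illustrative):

```python
prior = 0.5
p_e_given_h, p_e_given_not_h = 0.8, 0.3   # assumed likelihoods

# Probability of observing the evidence at all.
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Posterior in each branch of the observation.
p_h_given_e = p_e_given_h * prior / p_e
p_h_given_not_e = (1 - p_e_given_h) * prior / (1 - p_e)

# Expected posterior, weighted by how likely each observation is.
expected_posterior = p_e * p_h_given_e + (1 - p_e) * p_h_given_not_e

print(expected_posterior)  # equals the prior, 0.5
```

Whatever confirmation you expect from seeing E is exactly balanced, in expectation, by the disconfirmation from not seeing it.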

Hindsight bias makes us overestimate how well our model could have predicted a known outcome. We underestimate the cost of avoiding a known bad outcome, because we forget that many other equally severe outcomes seemed as probable at the time. Hindsight bias distorts the testing of our models by observation, making us think that our models are better than they really are.

Hindsight bias leads us to systematically undervalue scientific findings, because we find it too easy to retrofit them into our models of the world. This unfairly devalues the contributions of researchers. Worse, it prevents us from noticing when we are seeing evidence that doesn't fit what we really would have expected. We need to make a conscious effort to be shocked enough.

People think that fake explanations use words like "magic", while real explanations use scientific words like "heat conduction". But being a real explanation isn't a matter of literary genre. Scientific-sounding words aren't enough. Real explanations constrain anticipation. Ideally, you could explain only the observations that actually happened. Fake explanations could just as well "explain" the opposite of what you observed.

In schools, "education" often consists of having students memorize answers to specific questions (i.e., the "teacher's password"), rather than learning a predictive model that says what is and isn't likely to happen. Thus, students incorrectly learn to guess at passwords in the face of strange observations rather than admit their confusion. Don't do that: any explanation you give should have a predictive model behind it. If your explanation lacks such a model, start from a recognition of your own confusion and surprise at seeing the result.

When seemingly unanswerable questions are answered with, say, "God did it," that answer doesn't resolve the question, but it does tell you to stop asking further questions; it functions as a semantic stopsign. But religion is by no means the only source of semantic stopsigns. If you're tempted to solve any problem or explain any event with a word like "government" or "big business" or "terrorism," and you fail to ask the obvious next question "How exactly does [government|business|terrorism] explain this thing or solve this problem?", then that word is a semantic stopsign for you.

Sometimes it seems that people want their semantic stopsigns not to actually explain anything, because they like feeling that the universe is too grand and mysterious to understand. But nothing is mysterious in itself; if we can't understand the universe, it's not because the universe is grand but because we are ignorant, and there's nothing wonderful about that ignorance.

The 2-4-6 experiment suggests that humans tend to look for positive evidence ("My theory predicts this, and it happens!") rather than negative evidence ("My theory predicts this won't happen, and it doesn't"). This is similar to, but separate from, confirmation bias. To spot an explanation that isn't helpful, it's not enough to think of what it does explain very well — you also have to search for results it couldn't explain.
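The structure of the 2-4-6 task can be sketched in a few lines. The experimenter's actual rule is "three numbers in ascending order"; a typical subject hypothesizes something narrower, like "each number is double the previous one", and then tests only triples their hypothesis accepts (the specific test triples below are illustrative):

```python
def true_rule(triple):
    """The experimenter's actual rule: strictly ascending numbers."""
    a, b, c = triple
    return a < b < c

def my_hypothesis(triple):
    """A typical subject's narrower guess: each number doubles the last."""
    a, b, c = triple
    return b == 2 * a and c == 2 * b

# Positive tests: triples my hypothesis says should pass. They all do --
# but only because every doubling sequence is also ascending.
positive_tests = [(2, 4, 8), (5, 10, 20)]
assert all(true_rule(t) and my_hypothesis(t) for t in positive_tests)

# A negative test: a triple my hypothesis forbids. The true rule accepts
# it anyway, which is the observation that falsifies the hypothesis.
negative_test = (1, 2, 3)
print(my_hypothesis(negative_test), true_rule(negative_test))  # False True
```

Only the negative test, the one the subject's hypothesis predicts should fail, exposes the difference between the two rules.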

Although science does have explanations for phenomena, it is not enough to simply say that "Science!" is responsible for how something works -- nor is it enough to appeal to something more specific like "electricity" or "conduction". Yet for many people, simply noting that "Science has an answer" is enough to make them no longer curious about how it works. In that respect, "Science" is no different from more blatant curiosity-stoppers like "God did it!" But you shouldn't let your interest die simply because someone else knows the answer (which is a rather strange heuristic anyway): You should only be satisfied with a predictive model, and how a given phenomenon fits into that model.

Policy proposals need to come with specifics, not just virtuous-sounding words like "democracy" or "balance". These words can stand for specific proposals (democracy: resolving conflicts through voting) but they are often used in an uninformative way to convey mysterious goodness. To test whether a proposal actually carries new information, try reversing it: if no one could possibly argue for the reversed statement, the original was an applause light, not a proposal.

How much of your knowledge could you regenerate, if it were deleted from your mind? If you don't have enough experience to regenerate beliefs when they are deleted, then do you have enough experience to connect that belief to anything at all?

See also