These are some mental models I find useful. They’re rooted in decades of experience from thousands of experts – a modern equivalent of folk wisdom. Mental models let you quickly and correctly reason about seemingly intractable problems. They require quite a bit of intuition to properly internalize, but once you’ve internalized them they’re relatively easy to apply. They’re also easy to forget in the moment – use this post as a checklist when thinking about complex problems.

This is a living document. Instead of creating an exhaustive list on day one, I will add models as they arise (and as I discover new ones).

Planning fallacy – the observation that humans are overly optimistic when predicting the success of their undertakings. Empirically, the average case turns out to be worse than the worst-case human estimate. Corollary: Be really pessimistic when estimating. Assume the average case will be slightly worse than the hypothetical worst case. Corollary: When estimating time, upgrade the units and double the estimate (e.g. convert “one week” to “two months”).
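The doubling corollary is mechanical enough to sketch in code. A toy Python version (the unit ladder below is my own assumption, made up for illustration):

```python
# A playful sketch of the "upgrade the units and double" heuristic.
# The unit ladder is an assumption chosen for illustration.
UNITS = ["hours", "days", "weeks", "months", "years"]

def pessimistic_estimate(amount, unit):
    """Turn an optimistic time estimate into a planning-fallacy-adjusted one."""
    upgraded = UNITS[min(UNITS.index(unit) + 1, len(UNITS) - 1)]
    return 2 * amount, upgraded

print(pessimistic_estimate(1, "weeks"))  # (2, 'months'): "one week" becomes "two months"
```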

The teaching method – Richard Feynman’s observation that teaching the basics is an excellent method for generating profound new ideas, and for putting consciousness in a productive state. Corollary: If you’re stuck, put yourself in a position where you have to teach someone the basics.

The LRU prioritization method – since you can only work on one problem at a time, it’s usually sufficient to pick the most important problem, work on that, and ignore everything else. This method also works for organizing most things (from email to physical possessions).
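Stripped to its core, the method is just taking the maximum over whatever importance scores you assign. A minimal sketch (the problems and scores are invented):

```python
# A minimal sketch: pick the single most important item, ignore the rest.
# The problems and their importance scores are made up for illustration.
problems = {"fix flaky deploys": 9, "polish the logo": 2, "reorganize wiki": 4}

top_problem = max(problems, key=problems.get)
print(top_problem)  # work on this; deliberately ignore everything else
```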

The top-five-problems method – Richard Hamming’s algorithm for doing important work. Corollary: Periodically ask yourself: “What are the top five most important problems in my field (and life), and why am I not working on them?”

The just-get-started method – Joel Spolsky’s observation that just starting to work on a small, concrete, finishable problem puts your consciousness in a productive state. Corollary: Just do something concrete. Anything. Do your laundry, or dust the counters, or add a single unit test. Just do something.

Emic vs etic (aka inside vs outside view) – two perspectives you can choose when evaluating persuasive arguments. The inside view is time consuming and requires you to engage with the arguments on their merits. The outside view only requires you to ask “what kind of person does sincerely believing this stuff turn you into?” Corollary: You can usually predict the correctness of arguments by evaluating superficial attributes of the people making them. Example: If someone is wearing funny clothes, purports to know the one true way, and keeps talking about the glorious leader, you can usually dismiss their arguments without deeper examination. Warning: This method usually works because most kooky people aren’t innovators, but will misfire in important situations because many innovators initially seem kooky.

Base rates – you can approximate the likelihood of a specific event occurring by examining the wider probability distribution of similar events. Example: You’re evaluating the probability of success of a given startup. Ask yourself, if you saw ten similar startups a year, how many of them are likely to succeed? Example: You caught an employee stealing, but they claim they need money to buy medication and it’s the first time they’ve ever stolen anything. Ask yourself, if you saw ten employee thefts a year, how many of them are likely to be first offences? Note: This method is especially useful to combat optimism and overconfidence biases, or when evaluating outcomes of events you’re emotionally close to. See also: Techniques for probability estimates, reference class forecasting, prior probability.
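The arithmetic behind the startup example is a one-liner. A sketch in Python (the counts are invented for illustration):

```python
# A minimal sketch of base-rate reasoning: approximate the probability of a
# specific event by the frequency of that outcome in a reference class of
# similar events. The counts below are made up for illustration.

def base_rate(outcome_count, reference_class_size):
    """Fraction of similar past events that had the outcome in question."""
    return outcome_count / reference_class_size

# "If you saw ten similar startups a year, how many would succeed?" Say one.
p_startup_succeeds = base_rate(1, 10)
print(p_startup_succeeds)  # 0.1 -- use this as your starting estimate
```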

Statistical mechanics – probabilistic systems that follow certain laws in the long run can have perturbations that diverge from these laws in the short run. Corollary: Occasionally the status quo can be easily improved without significant resources (but it is unlikely that you found such an occasion). Idiom: In the short run the market is a voting machine, but in the long run it is a weighing machine. Idiom: If an economist saw a $100 bill on a sidewalk they wouldn’t pick it up (because if it were real, it would have been picked up already).
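The short-run/long-run distinction has a simple statistical illustration: coin flips. A Python sketch (the flip counts are arbitrary):

```python
import random

# The fraction of heads obeys a long-run law (it converges to 0.5),
# but short runs can deviate from that law substantially.
random.seed(0)  # fixed seed so the run is reproducible

def heads_fraction(flips):
    """Fraction of heads in a run of fair coin flips."""
    return sum(random.random() < 0.5 for _ in range(flips)) / flips

short_run = heads_fraction(10)       # may land far from 0.5
long_run = heads_fraction(100_000)   # almost certainly very close to 0.5
print(short_run, long_run)
```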

Efficient market hypothesis – the state of any given issue in the world is roughly as close to optimal as is currently possible. Corollary: It’s unlikely that the status quo can be easily improved without significant resources. Example: Cucumber juice probably doesn’t cure cancer. Example: The iPhone app you wrote in a weekend probably doesn’t double the phone’s battery life.

Inversion – the observation that many hard problems are best solved when they’re addressed backward. In other words, figure out what you don’t want, avoid it, and you’ll get what you do want.

Corollary: Find out how people commonly fail doing what you do, and avoid failing like them.

Example: If you want to help India, ask “what is doing the worst damage in India and how can we avoid it?”

See also: Failure mode.

Bias for action – in daily life many important decisions are easily reversible. It’s not enough to have information – it’s better to move quickly and recover if you were wrong than to deliberate indefinitely.

Idiom: One test is worth a thousand expert opinions.

Idiom: The best thing you can do is the right thing, the next best thing is the wrong thing, and the worst thing you can do is nothing.

Note: The best people do this naturally, without brooding, and with a light touch.

Expected value – a simple model for evaluating uncertain events (multiply the probability of the event by its value).

Corollary: Sometimes you’ll have to estimate probabilities when it feels really hard to do.

Example: The chance of winning the NY lotto is 1 in 292,201,338 per game. Let’s say the grand prize is $150M and the ticket price is $1. Then the expected value is roughly $0.51. Since $0.51 < $1, the model tells us the game isn’t worth playing.
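The lotto arithmetic checks out in a few lines of Python:

```python
# Checking the lotto arithmetic from the example above.
p_win = 1 / 292_201_338        # chance of winning per game
prize = 150_000_000            # grand prize, in dollars
ticket_price = 1               # in dollars

expected_value = p_win * prize
print(round(expected_value, 2))        # 0.51
print(expected_value < ticket_price)   # True: not worth playing
```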

Warning: Looking at expected value often isn’t enough. You need to consider utility to make good decisions.

See also: Techniques for probability estimates, shut up and multiply, scope insensitivity.

Marginal utility – the change in utility from a change in consumption of a good. Marginal utility usually diminishes as consumption increases.

Example: The first car in your garage improves your life significantly more than the second one.

Example: Because the utility loss from losing a dollar is negligible relative to the utility gain from winning the NY Lotto at ridiculously low odds, it might be worth buying a ticket even at negative expected value (but seriously, don’t).

Corollary: Think through your utility function carefully.
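Diminishing marginal utility is easy to see with log utility – a common textbook assumption, not a claim about anyone’s actual utility function. A sketch:

```python
import math

# A sketch of diminishing marginal utility using log utility (a standard
# textbook choice, assumed here purely for illustration).

def utility(wealth):
    return math.log(wealth)

def marginal_utility(wealth, extra_dollar=1.0):
    """Utility gained from one more dollar at a given wealth level."""
    return utility(wealth + extra_dollar) - utility(wealth)

# Each additional dollar matters less the more you already have.
print(marginal_utility(10))     # ~0.095
print(marginal_utility(1000))   # ~0.001
```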