Well, that’s a far-fetched title, isn’t it? Let me explain. If you ask a person on the street, they’ll tend to say that compromise is the best way for people with opposing views to work together towards a better common future. You hear it in politics, you hear it in families, you hear it in workplaces. If you agree with this premise, let’s dive into why I think it may be one of the most wasteful, destructive ideas out there.

Let’s say two co-workers disagree about the setting of the temperature in the office. If one wants 22 °C (72 °F) and another wants 25 °C (77 °F), then compromise might indeed be the way forward. Set it somewhere in the middle, wear a bit more or a bit less clothing, and move on. But this can only go so far. If the difference is larger, or if one of the co-workers wants an outlandish temperature, then everyone may end up unhappy. What’s worse, people are incentivised to stake out extreme positions, in order to pull the eventual compromise towards their actual desired optimum.

These may be issues with compromise, but such straightforward disagreements are both uninteresting and less common than you may think. When we look at compromise in other situations, it gets much worse. Instead of the co-workers in the office, let’s think about two people, in a car, going at high speed on the highway. If one person wants to keep straight, and the other wants to turn right and take the upcoming exit, compromising means crashing into the barrier, causing destruction and possibly death for the people in the car and for others on the highway. Of course, neither of the people in the car wants this, and both would agree that any of the previously proposed options is better than crashing.

The right way forward is to answer the question “which option is likeliest to get us to our destination”, rank the options in order of preference, and follow the first, or the second, or one in the top ten. “Crashing into the barrier” doesn’t make the top quadrillion options. The wrong way is to answer the question “which angle should the steering wheel be at”, with each person proposing a number and the two settling on the middle. In other words, to average out the steering wheel is to answer the wrong question. It is to operate at the wrong level of abstraction.

Once the question becomes “which road do we take”, things like “but the next exit leads us to a toll road and I don’t want to spend the money” and “but Google Maps says there’s a traffic jam ahead of this exit and we’ll spend an extra hour waiting to get through it” become relevant information that can be combined, and maybe the best option is to take the exit after the next, which gets them to their destination in a reasonable amount of time and with no toll road. Notice that effectively both of the initial options were bad in some way, and the actual solution was something else entirely. If you’re interested in diving deeper into decisions like this, Robert Aumann won a Nobel Prize for his game-theoretic work on conflict and cooperation; his Agreement Theorem, which this paragraph oversimplifies, is a good entry point.
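To make the difference between the two questions concrete, here is a toy Python sketch (the angles, road names, and rankings are all made up for illustration): averaging each person’s steering-wheel angle versus scoring the shared options by everyone’s preference order and picking the best combined one.

```python
def average_controls(angles):
    """The wrong abstraction: split the difference on the steering wheel."""
    return sum(angles) / len(angles)

def best_shared_option(rankings):
    """The right abstraction: score each option by its position in every
    person's preference order (lower is better) and pick the best total."""
    scores = {}
    for ranking in rankings:
        for position, option in enumerate(ranking):
            scores[option] = scores.get(option, 0) + position
    return min(scores, key=scores.get)

# Driver A wants to go straight (0 degrees), driver B wants the exit (30 degrees).
print(average_controls([0, 30]))  # 15.0: aimed squarely at the barrier

# After sharing the toll and traffic information, each driver re-ranks the
# roads, and an option neither started with comes out on top.
driver_a = ["exit after next", "straight", "next exit"]   # avoids the toll
driver_b = ["exit after next", "next exit", "straight"]   # avoids the jam
print(best_shared_option([driver_a, driver_b]))  # exit after next
```

The scoring rule here happens to be a Borda count, but that’s incidental; the point is that the aggregation happens over roads, not wheel angles.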

But none of that will be solved by averaging out the angle of the steering wheel. Only when the “why” question is asked, and honestly answered, can the information be combined, follow-up questions asked, the options ranked, and a credible plan chosen, ideally, but not necessarily, supported by all. As the barrier taught us, unanimity is less important than consistency, clarity, and reaching a decision in time.

You may think this kind of problem doesn’t come up in the real world. But ask any executive in a corporation “why is x being done that way” about some obviously failed or failing initiative they are putting out. If you’re lucky enough to get an honest answer, you’ll inevitably hear things like “well, department X needed to push their core technology, department Y wanted to satisfy their big customer, department Z insisted we adopt such and such standard, and this is what could be done within these constraints”. Of course, neither the customer, nor the core technology, nor the standard benefits from the result of the failed initiative. This is the equivalent of compromising on the angle of the steering wheel. The result is dumping billions of dollars into a big pile and setting it on fire, using the goodwill of your employees, your partners, and your customers as fuel. And this isn’t limited to new product launches. Whenever two departments in a company are moving in two opposite, mutually contradictory directions, the “plan” being implicitly executed is a contradictory plan.

When the question being asked is “is everybody ok with this plan” rather than “does the thing we’re doing make sense” or “how do we maximise our chances of success”, the only way forward is to make these Frankenstein decisions. I am not saying such products never succeed. What I’m saying is that any success that might come is in spite of, not because of, this kind of decision-making process. There may be an uncompromised core (get it? uncompromised?) that the deadweight didn’t manage to drag down, so the combined result succeeds anyway, and hopefully future iterations reduce the deadweight, to the extent possible. Which it might not be, if the wider ecosystem is now dependent on such misfeatures.

As an aside, I actually believe this is why startups on occasion win against large corporations. A complex problem presents itself, and the startups are both more desperate to solve it, and less likely to make Frankenstein decisions, not because they are inherently immune to politics, but because they have fewer commitments and fewer people, and therefore fewer temptations to compromise their vision. As soon as the startups have something to lose, compromises start being made, and the cycle starts all over again. Either the engine designed in the age of purity keeps working, or the leadership is credible enough to push forward a clear vision for as long as possible, or the company falls back to maintaining its income stream by obvious, incremental, defensive moves, for as long as such can be sustained.

So what’s the answer? How should a team or company (never mind country) be run? My personal answer is something I have not quite seen discussed or described very much, though I suspect it drives some of the largest companies, while getting missed as a pattern. It is “transparent decision making with strong leadership”. Again, something that startups do naturally, and stop doing as they grow. The first part is all about packing as much information as possible into the decision. The second part is about making sure that the solution chosen has strong internal consistency. Keeping the decision-making process a black box means you risk starving it of inputs, and therefore missing options you would have chosen had you been aware of them. Making the decisions a matter of consensus in a “flat” organisation means crashing into the barrier. Unfortunately this middle road doesn’t seem to be very popular, as the authoritarians will opt for the black-box “respect mah authoritah” approach that validates their superiority, whereas the egalitarians will opt for making sure everyone has their say, and as few people as possible are unhappy about any decision.

See it as a compromise, if you like irony. Holding the middle is not easy, but if we focus on the actual results rather than people’s feelings, we will sacrifice both the decision-makers’ ego, and the contributors’ ego, in exchange for maximising the area considered, while maximising the clarity of the output. And a team that understands the rationale of decisions made, and can observe the results as well, is more likely to learn collectively, and more likely to make better decisions together in the future. What’s more, people’s feelings end up better off in the long run, as there is no better cure for all ills than success. And a team that succeeds because of decisions they all understand and contributed to, is a team that grows and stays together.

I did not realise this when we were designing the internal process at resin.io, or when I started writing this blog post, but our decision making and overall operating model might as well have been designed with this essay in mind. Since it fits so well as a follow-up, I will consider, but can’t promise, writing a next post, or a few, describing how we collect information and make decisions at resin.io, since I do enjoy putting forward concrete plans in the place of vague ideas.

I hope you have enjoyed this essay, and I look forward to hearing your thoughts.