I am sometimes asked whether doing a PhD was worth it, given that I left academia and research to become a full-time software developer. My answer is an unequivocal “yes”, despite the fact that my thesis is about as relevant to what I do now as a book on the sex lives of giraffes.

By far the most important skill I learnt during that time was not any particular technical knowledge, but rather a general approach to critical thinking—how to evaluate evidence and make rational choices. In a profession such as software engineering, where we are constantly bombarded with new technologies, products and architectural styles, it is absolutely essential to be able to step back, evaluate the pros and cons, and make sensible technology choices. In this post I’ll try to summarise the approach I take to making these decisions.

Critical thinking has a long history, with modern Western critique having its roots in the Enlightenment. It is hard to summarise this long tradition of thought, but the basic theme is one of moving away from accepting arguments on authority or dogma, and instead placing emphasis on reasoning and evidence.

Charles Sanders Peirce classified reasoning into three basic forms:

Deduction applies general rules to known facts to derive new facts that follow. In other words, given a rule IF a THEN b and a known fact a, we can derive b. This is the primary form of reasoning in logic and mathematics.

Induction attempts to derive general rules from observations and known facts. That is, given observations that b seems to always follow a, then infer the rule IF a THEN b. This is a form of reasoning most closely associated with science.

Abduction tries to explain observations by reference to general rules and known facts. That is, given an observation of b and knowledge of a general rule IF a THEN b then we can posit that a may also be true. This kind of reasoning is associated with diagnostics and explanation.

Of these three forms, only deduction is sound. That is, given true initial facts and sound rules of inference, the derived facts are guaranteed to also be true. The same is not true of induction or abduction: there may be other rules that provide a better explanation of the observations, and there may be many possible causes that could explain an observation.
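To make the three forms concrete, here is a minimal toy sketch of my own (the rule representation and function names are illustrative, not from Peirce) over rules of the form IF a THEN b:

```python
# Toy illustration of Peirce's three reasoning forms over simple
# rules of the form IF antecedent THEN consequent. This is a sketch,
# not a serious inference engine.

def deduce(rules, facts):
    """Deduction: repeatedly apply rules to known facts to derive new facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

def induce(observations):
    """Induction: propose the rule IF a THEN b when, in every observed
    case where a occurred, b occurred too."""
    b_given_a = [b for a, b in observations if a]
    if b_given_a and all(b_given_a):
        return ("a", "b")
    return None

def abduce(rules, observation):
    """Abduction: given an observed fact, posit antecedents that could
    explain it via some rule."""
    return {antecedent for antecedent, consequent in rules
            if consequent == observation}

rules = [("a", "b")]
print(sorted(deduce(rules, {"a"})))              # ['a', 'b']
print(induce([(True, True), (True, True)]))      # ('a', 'b')
print(sorted(abduce(rules, "b")))                # ['a']
```

Note that abduce returns a set: when several rules share the same consequent, there are several candidate explanations, which is exactly why abduction (and likewise induction) is not sound.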

Unfortunately, much of the reasoning we must do as software engineers is not deductive. Given a number of similar problems and the success or failure of their solutions, we might induce new general design patterns or architectures. Given a number of apparently successful projects that all used a particular product, a vendor may like us to reach the (abductive) conclusion that the product was at least partially responsible for their success. So how do we evaluate these kinds of reasoning if we cannot hope to directly prove them? The answer is to try to gather as much evidence as possible both for and against and to weigh up the pros and cons:

Adopt a skeptical approach and try to find flaws or mistakes in the reasoning. If you’ve ever presented a paper at an academic conference, you will be well aware of this technique! While it may initially appear mean-spirited to try to pick holes in other peoples’ work, it serves an absolutely critical purpose. While we may not be able to prove positively that an idea is correct, we can disprove it by finding counterexamples or other flaws. Once we have tried our (collective) hardest to disprove an idea and failed, then we begin to have confidence in its validity. This idea is known as falsification, and is most closely associated with Karl Popper. A theory that is impossible to disprove is of no value at all.

Try to find as much evidence for or against an idea as possible, from as wide a number of sources as possible. If we cannot directly prove or disprove an idea, then it will come down to a balance of evidence, and the more we have the better.

Evaluate the source of evidence and any bias that might be present. For instance, a vendor clearly has an incentive to promote successful uses of their product while downplaying unsuccessful ones. Likewise, a consultancy company has an incentive to promote methodologies and architectures that might drive more use of their services, such as those that are complicated or new (and therefore need the most advice).

What assumptions are being made? Do those assumptions hold for the cases you are considering? For example, an architecture proposed by Google may need to handle hundreds of millions of users and very high load rates. To deal with these high loads they may be willing to accept much higher up-front development costs than may be necessary for a much smaller workload. Do you really need to deploy that big data cluster when all your data would fit into RAM on a single machine?
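That last question often yields to a back-of-envelope calculation. A sketch, using made-up record counts and sizes purely for illustration:

```python
# Hypothetical back-of-envelope check: would the whole dataset fit in
# the RAM of a single machine? All numbers here are assumptions for
# the sake of illustration.
records = 50_000_000        # assumed number of records
bytes_per_record = 1_000    # assumed average record size (~1 kB)

total_gb = records * bytes_per_record / 1e9
print(f"{total_gb:.0f} GB")  # 50 GB: comfortably within one modern server
```

If the answer comes out at tens of gigabytes rather than tens of terabytes, the distributed cluster may be solving a problem you do not have.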

Is the proposed solution at an appropriate level of abstraction or generality? While a general solution may seem appealing, if you only need to solve a one-off special case then maybe there are simpler alternatives.

Discuss the idea with colleagues and friends from as wide a pool as possible. It is very hard for a single individual to shake off their own prejudices and come to a completely dispassionate evaluation of an idea. Only through a process of informed debate can an idea be fully explored.

In the spirit of following my own advice, I would love to hear your thoughts on this article. What have I missed or overlooked? Am I right to emphasise critical thinking for software engineering, or do you think technical skills are more important? I hope this article has got you thinking about how you evaluate the ideas and techniques you encounter every day in your software engineering careers.