Aristotle said that man is a rational animal. Bertrand Russell adds, “Throughout a long life I have searched diligently for evidence in favour of this statement. So far, I have not had the good fortune to come across it.” Traditional economic theory is built on the idea of homo economicus: rational agents attempting to maximize their utility. But as Russell points out, man is not that rational. The notion of bounded rationality better describes how we think and act. What is the philosophy of rationality, and of deviations from it? How does cognitive science describe bounded rationality? What are its pros and cons in comparison to rational choice theory? How do we use it? And how can we construct descriptive models of such systems?

K. I. Manktelow (2004) says that rationality is concerned with two things: what is true and what to do. For our beliefs to be rational, they must be in agreement with the evidence (what cognitive scientists call epistemic rationality); for our actions to be rational, they must be conducive to obtaining our goals (instrumental rationality, i.e. adopting appropriate goals and behaving in a manner that optimizes one’s ability to achieve them (Stanovich, 2009)). If this defines rationality, then not being completely rational can be described as behaviour resulting from not completely knowing what is true, i.e. beliefs not in agreement with the evidence, or from not completely knowing what to do, i.e. not adopting appropriate goals or not optimizing one’s ability to achieve them.

Bounded rationality picks up on these ideas of incomplete rationality. Our decisions, and their rationality, are bounded by the constraints of limited available information, limited time to decide, and the limited cognitive abilities of the mind. H. Simon (1972), in his ‘Theories of Bounded Rationality’, describes reconstructing the classical theory of the firm’s utility-maximizing decision making to include risk and uncertainty, incomplete information about alternatives, and the complexity of calculating the best course of action. Functioning in such a domain, decision makers are viewed as ‘satisficers’ – agents who seek a satisfactory, and not necessarily optimum, solution owing to a lack of resources and ability.

Limited rationality resulting in satisficing is considered sub-optimal decision making. Alternatively, satisficing could be defined as an optimization in which ‘all’ costs are covered, i.e. optimization over the goal, the costs of obtaining all necessary information, and the costs of calculation. So while considering only the main objective of goal optimization, satisficing can appear sub-optimal. The satisficing problem can also be thought of as constraint satisfaction and formulated as optimization subject to the satisficing requirements of an objective function. J. Odhnoff (1965) says the difference between optimizing and satisficing is often referred to as a difference in the quality of a certain choice.
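The contrast between optimizing and satisficing can be made concrete with a minimal sketch. The utility function, the option list, and the aspiration level below are all hypothetical; the point is only that a satisficer stops at the first ‘good enough’ option and so pays a lower search cost than an exhaustive optimizer:

```python
def optimize(options, utility):
    """Exhaustive search: evaluate every option and return the best one."""
    return max(options, key=utility), len(options)  # (optimum, evaluations spent)

def satisfice(options, utility, aspiration):
    """Simon-style satisficing: stop at the first option whose utility
    meets the aspiration level."""
    for evaluations, option in enumerate(options, start=1):
        if utility(option) >= aspiration:
            return option, evaluations
    return option, evaluations  # nothing satisficed; settle for the last seen

utility = lambda x: x            # toy utility: an option's value is its utility
options = [3, 7, 5, 9, 2, 8]

best, full_cost = optimize(options, utility)        # best = 9, cost = 6
good, small_cost = satisfice(options, utility, 6)   # good = 7, cost = 2
```

The satisficer accepts 7 after two evaluations, while the optimizer must evaluate all six options to find 9 – a worse outcome on the goal alone, but cheaper once search costs are counted.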

In the utility framework, bounded rationality can be explained by the notion of epsilon-optimization: agents choose actions that get them close to the goal, i.e. actions whose pay-off U(s) is within epsilon of the optimum U*: U(s) ≥ U* – ε. In the case of strict rationality, ε = 0. Bounded rationality and sub-optimal decision-making processes result in deviations (ε) from rational behavior. For example, the trade-off between computing speed and the accuracy of a result in numerical analysis can be analogized to heuristics and cognitive biases in decision making.
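The epsilon-optimization condition is a one-line test. A small sketch, with hypothetical pay-off values chosen only to exercise the definition:

```python
def is_epsilon_rational(u_s, u_star, epsilon):
    """True if the pay-off U(s) is within epsilon of the optimum U*,
    i.e. U(s) >= U* - epsilon."""
    return u_s >= u_star - epsilon

# Strict rationality is the special case epsilon = 0:
strict_optimum  = is_epsilon_rational(10.0, 10.0, epsilon=0.0)  # True
strict_shortfall = is_epsilon_rational(9.5, 10.0, epsilon=0.0)  # False
# A boundedly rational agent tolerates a gap of up to epsilon:
bounded_ok = is_epsilon_rational(9.5, 10.0, epsilon=1.0)        # True
```

As ε grows, more and more sub-optimal actions count as acceptable, which is one way of quantifying how ‘bounded’ an agent’s rationality is.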

The evolutionary psychology perspective is that limitations in rational choice can themselves be explained as rational in the context of survival. For example, if we think we see a predator, given limited time the more profitable decision is to fight or flee rather than wait to gather all the information needed to establish for sure that a predator is on our trail; making a faster, though less accurate, decision can determine life or death. Being more risk averse also makes sense at a subsistence level. G. Miller (1956), in his very popular paper ‘The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information’, discusses limited cognitive capacity, relating it to how the average human can hold 7 ± 2 objects in working memory. D. Kahneman (2003) describes a process of attribute substitution: when someone makes a judgment whose target is computationally complex, a more easily calculated heuristic attribute is substituted, often without conscious awareness.

Heuristics such as rules of thumb, intuitive guesses, and stereotyping fall under this approach to problem solving: a practical methodology is employed that yields sufficient rather than perfect results, sometimes without our even knowing it. Bounded rationality thus substantially affects how we think and act, and this has its pros and cons. Marketing, for instance, can be designed to make us buy more than we need – e.g. via the anchoring effect, where a first perception skews later perceptions and decisions. The same biases can also be used to influence people into making better decisions for social good. Thaler and Sunstein (2008), in their book ‘Nudge’, describe many instances of choice architecture, for example arranging food so that people are nudged toward the healthier options. There is much scope in behavioral economics for marketing and administrative planning to influence choices.

Moving on to descriptive models of bounded rationality: how does one model decision-making processes? Cognitive science has long been concerned with providing formal, computational descriptions of various aspects of cognition, including decision making. Could we then formulate a framework to begin to explain bounded rationality and its implications for decisions? Decision theory has traditionally been modeled with probability theory; cognitive scientists have relied on classical Bayesian probability and formal logic. In some cases, though, this classical approach does not hold. Alternative frameworks include quantum cognition, fuzzy logic, possibility theory, and info-gap decision theory.

For example, a fundamental law of Bayesian probability applied to classical decision theory is the ‘sure thing principle’, which says that if you prefer A over B in state X and also in state X′, then you should prefer A over B in an unknown state as well. Tversky and Shafir (1992) tested this in a two-stage gambling experiment with an even chance to win 2 units or lose 1 unit. When players win the first round, a majority choose to play again; when players lose the first round, a majority also choose to play again; but when players are not informed of the result of the first round, a majority choose not to play the second round – violating the sure thing principle, according to which they should have played. Due to this violation of the law of total probability, classical probability theory cannot be employed, but a quantum interference effect – context- and order-dependent, similar to the double-slit experiment – can be used to explain it (Busemeyer & Bruza, 2012). In this manner, Aerts, Sozzo, & Tapia (2012) argue that formalisms using quantum concepts like superposition, interference, contextuality and incompatibility have been found useful for explaining decision-making processes, paradoxical situations in behavioral economics, and deviations from rationality.
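The arithmetic behind the violation is easy to check. In the sketch below, the three proportions approximate the pattern Tversky and Shafir report (treat the exact figures as illustrative); classically, the play rate under an unknown outcome must fall between the two conditional play rates, and it does not:

```python
# Illustrative proportions choosing to play the second gamble,
# in the spirit of Tversky & Shafir (1992):
p_play_given_win  = 0.69   # players told they won the first round
p_play_given_lose = 0.59   # players told they lost the first round
p_play_unknown    = 0.36   # first-round outcome not revealed

# Law of total probability for the even-chance first gamble:
# P(play) = 0.5 * P(play | win) + 0.5 * P(play | lose)
p_play_classical = 0.5 * p_play_given_win + 0.5 * p_play_given_lose  # 0.64

# Classically, the unknown-outcome rate must lie between the two
# conditional rates; here it falls below both.
lower = min(p_play_given_win, p_play_given_lose)
upper = max(p_play_given_win, p_play_given_lose)
violates_total_probability = not (lower <= p_play_unknown <= upper)  # True
```

No classical assignment of probabilities can produce an unconditional rate outside the interval spanned by the two conditional rates, which is why a non-classical (e.g. interference-based) account is invoked.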

In conclusion, we can now establish that man isn’t as rational as Aristotle declared him to be. Limited resources and cognitive ability result in a ‘bounded rationality’, which manifests itself as deviations from rational behavior and affects how we think and choose to act; much ‘irrational behavior’ can be explained by it. Bounded rationality is an emerging field, with behavioral scientists trying to influence the choices we make, economists updating their theories to take seemingly irrational behavior into account, and computational cognitive scientists formulating frameworks to better understand decision-making processes. Where are we headed? Answering that conclusively would probably not be completely rational, for want of sufficient time, information and cognitive ability!

Sources :

– Manktelow, K. I. (2004). “Reasoning and rationality: The pure and the practical.” In Psychology of Reasoning: Theoretical and Historical Perspectives (Hove, England: Psychology Press), pp. 157-177.

– Stanovich, K. E. (2009). What Intelligence Tests Miss: The Psychology of Rational Thought. (New Haven: Yale University Press).

– Simon, H. A. (1972). “Theories of Bounded Rationality.” In Decision and Organization (Amsterdam: North-Holland Publishing Company), Chapter 8.

– Odhnoff, J. (1965). “On the Techniques of Optimizing and Satisficing.” Swedish Journal of Economics, 67(1), 24-39.

– Miller, G. (1956). “The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information.” Psychological Review, 63, 81-97.

– Kahneman, D. (2003). “Maps of Bounded Rationality: Psychology for Behavioral Economics.” The American Economic Review, 93(5), pp.1449-1475

– Thaler, R. H. and Sunstein, C. R. (2008). Nudge: Improving Decisions about Health, Wealth and Happiness. Yale University Press.

– Tversky, A. and Shafir, E. (1992). “The Disjunction Effect in Choice under Uncertainty.” Psychological Science.

– Busemeyer, J. and Bruza, P. (2012). Quantum Models of Cognition and Decision. Cambridge: Cambridge University Press.

– Aerts, D., Sozzo, S. and Tapia, J. (2012). “A Quantum Model for the Ellsberg and Machina Paradoxes.” Quantum Interaction 2012.

Note: The above was the term paper I submitted for the Philosophy and Cognitive Science course at YIF.