Continuum of pure uncertainty and certainty: The domain of decision analysis models falls between two extreme cases. This depends upon the degree of knowledge we have about the outcome of our actions, as shown below:

In fact, all complex man-made systems, such as large communications networks, nuclear-powered electric-generating stations, and spacecraft, are full of hidden paths to failure, so numerous that we cannot think of all of them, nor can we afford the time and money required to test for and eliminate them. Individually, each of these paths is a black swan, but there are so many of them that the probability of at least one being activated is quite significant.

To make serious business decisions, one must face a future in which ignorance and uncertainty increasingly overpower knowledge as one's planning horizon recedes into the distance. The deficiencies in our knowledge of the future may be divided into three domains, each with rather murky boundaries:

There are a few satisfactory descriptions of uncertainty, one of which is the concept and the algebra of probability.

Further Readings: Borden T., and W. Banta, (Eds.), Using Performance Indicators to Guide Strategic Decision Making, Jossey-Bass Pub., 1994. Eilon S., The Art of Reckoning: Analysis of Performance Criteria, Academic Press, 1984. Von Furstenberg G., Acting Under Uncertainty: Multidisciplinary Conceptions, Kluwer Academic Publishers, 1990.

The States of Nature are the states of the economy during one year. The problem is to decide which of three possible courses of action to take, given the rates of return shown in the body of the table.

Sources of Errors in Decision Making: The main sources of error in risky decision-making problems are: false assumptions, inaccurate estimates of the probabilities, over-reliance on expectations, difficulties in measuring the utility function, and forecast errors.

Decision analysis is a process that allows the decision maker to select exactly one option from a set of possible decision alternatives. There must be uncertainty regarding the future, along with the objective of optimizing the resulting payoff (return) in terms of some numerical decision criterion.

The mathematical models and techniques considered in decision analysis are concerned with prescriptive theories of choice (action). This answers the question of exactly how a decision maker should behave when faced with a choice between those actions which have outcomes governed by chance, or the actions of competitors.

Further Readings: Arsham H., Decision analysis: Making justifiable, defensible decisions, e-Quality, September, 2004. Forman E., and M. Selly, Decision by Objectives: How to Convince Others That You Are Right, World Scientific, 2001. Gigerenzer G., Adaptive Thinking: Rationality in the Real World, Oxford University Press, 2000. Girón F., (Ed.), Applied Decision Analysis, Kluwer Academic, 1998. Manning N., et al., Strategic Decision Making In Cabinet Government: Institutional Underpinnings and Obstacles, World Bank, 1999. Patz A., Strategic Decision Analysis: A General Management Framework, Little and Brown Pub., 1981. Vickers G., The Art of Judgment: A Study of Policy Making, Sage Publications, 1995. Von Furstenberg G., Acting Under Uncertainty: Multidisciplinary Conceptions, Kluwer Academic Publishers, 1990.

A decision needs a decision maker who is responsible for making decisions. This decision maker has a number of alternatives and must choose one of them. The objective of the decision-maker is to choose the best alternative. After the decision has been made, events over which the decision-maker has no control may occur. Each combination of an alternative followed by an event leads to an outcome with some measurable value. Managers make decisions in complex situations. Decision trees and payoff matrices illustrate these situations and add structure to decision problems.

Complexity in the modern world, along with the quantity of information, uncertainty, and risk, makes it necessary to provide a rational decision-making framework. The goal of decision analysis is to give guidance, information, insight, and structure to the decision-making process in order to make better, more 'rational' decisions.

There are two additional quantitative comparisons. The first is the direct comparison of the weighted utility for two alternatives on all of the objectives. The second is the comparison of all of the alternatives on any two selected objectives which shows the Pareto optimality for those two objectives.

Once the structure and numbers are in place, the analysis can begin. Decision analysis involves much more than computing the expected utility of each alternative. If we stopped there, decision makers would not gain much insight. We have to examine the sensitivity of the outcomes and of the weighted utilities to the key probabilities and to the weight and risk-preference parameters. As part of the sensitivity analysis, we can calculate the value of perfect information for uncertainties that have been carefully modeled.

Humans can understand, compare, and manipulate numbers. Therefore, in order to create a decision analysis model, it is necessary to create the model structure and assign probabilities and values to fill the model for computation. This includes the values for probabilities, the value functions for evaluating alternatives, the value weights for measuring the trade-off objectives, and the risk preference.

Further Readings: Berger J., Statistical Decision Theory and Bayesian Analysis, Springer, 1978. Corfield D., and J. Williamson, Foundations of Bayesianism, Kluwer Academic Publishers, 2001. Contains logic, mathematics, decision theory, and criticisms of Bayesianism. Grünig R., R. Kühn, and M. Matt, (Eds.), Successful Decision-Making: A Systematic Approach to Complex Problems, Springer, 2005. It is intended for decision makers in companies, in non-profit organizations, and in public administration. Lapin L., Statistics for Modern Business Decisions, Harcourt Brace Jovanovich, 1987. Lindley D., Making Decisions, Wiley, 1991. Pratt J., H. Raiffa, and R. Schlaifer, Introduction to Statistical Decision Theory, The MIT Press, 1994. Press S., and J. Tanur, The Subjectivity of Scientists and the Bayesian Approach, Wiley, 2001. Compares and contrasts the reality of subjectivity in the work of history's great scientists and the modern Bayesian approach to statistical analysis. Tanaka H., and P. Guo, Possibilistic Data Analysis for Operations Research, Physica-Verlag, 1999.

Fortunately, the probabilistic and statistical methods for analysis and decision making under uncertainty are more numerous and powerful today than ever before. The computer makes possible many practical applications. A few examples of business applications are the following:

The Decision-Making Process: Unlike the deterministic decision-making process, in the decision making process under uncertainty the variables are often more numerous and more difficult to measure and control. However, the steps are the same. They are:

Making decisions is certainly the most important task of a manager and it is often a very difficult one. This site offers a decision making procedure for solving complex problems step by step.

Considering the uncertain environment, the chance that "good decisions" are made increases with the availability of "good information." The chance that "good information" is available increases with the level of structuring the process of Knowledge Management. One may ask, "What is the use of decision analysis techniques without the best available information delivered by Knowledge Management?" The answer is: one cannot make responsible decisions until one possesses enough knowledge. However, for private decisions one may rely on, e.g., psychological motivations, as discussed under "Decision Making Under Pure Uncertainty" in this site. Moreover, Knowledge Management and Decision Analysis are indeed interrelated, since one influences the other, both in time and in space. The notion of "wisdom" in the sense of practical wisdom has entered Western civilization through biblical texts. In the Hellenic experience this kind of wisdom received a more structural character in the form of philosophy. In this sense philosophy also reflects one of the expressions of traditional wisdom.

Knowledge is more than knowing something technical. Knowledge needs wisdom. Wisdom is the power to put our time and our knowledge to the proper use. Wisdom comes with age and experience. Wisdom is the accurate application of accurate knowledge, and its key component is knowing the limits of your knowledge. Wisdom is about knowing how something technical can be best used to meet the needs of the decision-maker. Wisdom, for example, creates statistical software that is useful, rather than technically brilliant. For example, ever since the Web entered the popular consciousness, observers have noted that it puts information at your fingertips but tends to keep wisdom out of reach.

Statistical inference aims at determining whether statistical significance can be attached to results after due allowance is made for random variation as a source of error. Intelligent and critical inferences cannot be made by those who do not understand the purpose, the conditions, and the applicability of the various techniques for judging significance.

The above figure depicts the fact that as the exactness of a statistical model increases, the level of improvement in decision-making increases. That is why we need probabilistic modeling. Probabilistic modeling arose from the need to place knowledge on a systematic evidence base. This required a study of the laws of probability, the development of measures of data properties and relationships, and so on.

A fact becomes knowledge when it is used in the successful completion of a decision process. Once you have a massive amount of facts integrated as knowledge, then your mind will be superhuman in the same sense that mankind with writing is superhuman compared to mankind before writing. The following figure illustrates the statistical thinking process based on data in constructing statistical models for decision making under uncertainties.

Data is known to be crude information and not knowledge by itself. The sequence from data to knowledge is: from data to information, from information to facts, and finally, from facts to knowledge. Data becomes information when it becomes relevant to your decision problem. Information becomes fact when the data can support it. Facts are what the data reveals. However, decisive instrumental (i.e., applied) knowledge is expressed together with some statistical degree of confidence.

Knowledge is what we know well. Information is the communication of knowledge. In every knowledge exchange, there is a sender and a receiver. The sender makes common what is private, does the informing, the communicating. Information can be classified as explicit and tacit forms. Explicit information can be explained in structured form, while tacit information is inconsistent and fuzzy to explain. Know that data are only crude information and not knowledge by themselves.

Further Readings: Arsham H., A Markovian model of consumer buying behavior and optimal advertising pulsing policy, Computers and Operations Research, 20(2), 35-48, 1993. Arsham H., A stochastic model of optimal advertising pulsing policy, Computers and Operations Research, 14(3), 231-239, 1987. Ben-Haim Y., Information-gap Decision Theory: Decisions Under Severe Uncertainty, Academic Press, 2001. Golub A., Decision Analysis: An Integrated Approach, Wiley, 1997. Goodwin P., and G. Wright, Decision Analysis for Management Judgment, Wiley, 1998. van Gigch J., Metadecisions: Rehabilitating Epistemology, Kluwer Academic Publishers, 2002. Wickham Ph., Strategic Entrepreneurship: A Decision-making Approach to New Venture Creation and Management, Pitman, 1998.

A Possible Drawback in the Decision Analysis Approach: You might have already noticed that the above criteria always result in selection of only one course of action . However, in many decision problems, the decision-maker might wish to consider a combination of some actions. For example, in the Investment problem, the investor might wish to distribute the assets among a mixture of the choices in such a way to optimize the portfolio's return. Visit the Game Theory with Applications Web site for designing such an optimal mixed strategy.

The systematic study of decision making provides a framework for choosing courses of action in a complex, uncertain, or conflict-ridden situation. The choices of possible actions, and the prediction of expected outcomes, derive from a logical analysis of the decision situation.

Objectives are important both in identifying problems and in evaluating alternative solutions. Evaluating alternatives requires that a decision-maker's objectives be expressed as criteria that reflect the attributes of the alternatives relevant to the choice.

This Web site presents the decision analysis process both for public and private decision making under different decision criteria, and under different types and qualities of available information. This Web site describes the basic elements in the analysis of decision alternatives and choice, as well as the goals and objectives that guide decision making. In the subsequent sections, we will examine key issues related to a decision-maker's preferences regarding alternatives, criteria for choice, and choice modes.

Decision theory has its origins in economics, in the use of the utility function of payoffs. It suggests that decisions be made by computing the utility and probability of the ranges of options, and it lays down strategies for good decisions:

To operate according to the canons of decision theory, we must compute the value of a certain outcome and its probabilities; hence, determining the consequences of our choices.

Business decision making is almost always accompanied by conditions of uncertainty. Clearly, the more information the decision maker has, the better the decision will be. Treating decisions as if they were gambles is the basis of decision theory. This means that we have to trade off the value of a certain outcome against its probability.

Most people make choices out of habit or tradition, without going through the decision-making process steps systematically. Decisions may be made under social pressure or time constraints that interfere with a careful consideration of the options and consequences. Decisions may be influenced by one's emotional state at the time a decision is made. When people lack adequate information or skills, they may make less than optimal decisions. Even when people have time and information, they often do a poor job of understanding the probabilities of consequences. Even when they know the statistics, they are more likely to rely on personal experience than on information about probabilities. The fundamental concern of decision making is combining information about probability with information about desires and interests. For example: how much do you want to meet her, how important is the picnic, how much is the prize worth?

Emotions and Risky Decisions: Most decision makers rely on emotions in making judgments concerning risky decisions. Many people are afraid of the possible unwanted consequences. However, do we need emotions in order to be able to judge whether a decision and its concomitant risks are morally acceptable? This question has direct practical implications: should engineers, scientists, and policy makers involved in developing risk regulation take the emotions of the public seriously or not? Even though emotions are subjective and irrational (or a-rational), they should be a part of the decision-making process, since they show us our preferences. Emotions and rationality are not mutually exclusive: in order to be practically rational, we need to have emotions. This can lead to an alternative view about the role of emotions in risk assessment: emotions can be a normative guide in making judgments about morally acceptable risks.

It is a challenging task to compare several courses of action and then select one action to be implemented. At times, the task may prove too challenging. Difficulties in decision making arise through complexities in decision alternatives. The limited information-processing capacity of a decision-maker can be strained when considering the consequences of only one course of action. Yet, choice requires that the implications of various courses of action be visualized and compared. In addition, unknown factors always intrude upon the problem situation and seldom are outcomes known with certainty. Almost always, an outcome depends upon the reactions of other people who may be undecided themselves. It is no wonder that decision-makers sometimes postpone choices for as long as possible. Then, when they finally decide, they neglect to consider all the implications of their decision.

Difficulty in probability assessment arises from information that is scarce, vague, inconsistent, or incomplete. A statement such as "the probability of a power outage is between 0.3 and 0.4" is more natural and realistic than their "exact" counterpart such as "the probability of a power outage is 0.36342."

Decision-makers often face a severe lack of information. Probability assessment quantifies the information gap between what is known, and what needs to be known for an optimal decision. The probabilistic models are used for protection against adverse uncertainty , and exploitation of propitious uncertainty .

A probabilistic model is viewed as similar to a game; actions are based on expected outcomes. The center of interest moves from deterministic to probabilistic models, using subjective statistical techniques for estimation, testing, and prediction. In probabilistic modeling, risk means uncertainty for which the probability distribution is known. Therefore, risk assessment means a study to determine the outcomes of decisions along with their probabilities.

Probabilistic Modeling is largely based on application of statistics for probability assessment of uncontrollable events (or factors), as well as risk assessment of your decision. The original idea of statistics was the collection of information about and for the State . The word statistics is not derived from any classical Greek or Latin roots, but from the Italian word for state. Probability has a much longer history . Probability is derived from the verb to probe meaning to "find out" what is not too easily accessible or understandable. The word "proof" has the same origin that provides necessary details to understand what is claimed to be true.

Uncertainty is the fact of life and business; probability is the guide for a "good" life and successful business. The concept of probability occupies an important place in the decision-making process, whether the problem is one faced in business, in government, in the social sciences, or just in one's own everyday personal life. In very few decision making situations is perfect information - all the needed facts - available. Most decisions are made in the face of uncertainty. Probability enters into the process by playing the role of a substitute for certainty - a substitute for complete knowledge.

As an example of deterministic versus probabilistic models, consider the past and the future: nothing we can do can change the past, but everything we do influences and changes the future, although the future has an element of uncertainty. Managers are captivated much more by shaping the future than by the history of the past.

In deterministic models, a good decision is judged by the outcome alone. However, in probabilistic models, the decision-maker is concerned not only with the outcome value but also with the amount of risk each decision carries.

The Dynamics of a System: A system that does not change is a static (i.e., deterministic) system. Many of the systems we are part of are dynamic systems, which is to say they change over time. We refer to the way a system changes over time as the system's behavior, and when the system's development follows a typical pattern, we say the system has a behavior pattern. Whether a system is static or dynamic depends on which time horizon you choose and which variables you concentrate on. The time horizon is the time period within which you study the system; the variables are the changeable values in the system.

What is a System: Systems are formed from parts put together in a particular manner in order to pursue an objective. The relationships among the parts determine what the system does and how it functions as a whole. Therefore, the relationships in a system are often more important than the individual parts. In general, systems that are building blocks for other systems are called subsystems.

Progressive Approach to Modeling: Modeling for decision making involves two distinct parties: the decision-maker, and the model-builder, known as the analyst. The analyst is to assist the decision-maker in his/her decision-making process. Therefore, the analyst must be equipped with more than a set of analytical methods.

Specialists in model building are often tempted to study a problem and then go off in isolation to develop an elaborate mathematical model for use by the manager (i.e., the decision-maker). Unfortunately, the manager may not understand this model and may either use it blindly or reject it entirely. The specialist may feel that the manager is too ignorant and unsophisticated to appreciate the model, while the manager may feel that the specialist lives in a dream world of unrealistic assumptions and irrelevant mathematical language.

Such miscommunication can be avoided if the manager works with the specialist to develop first a simple model that provides a crude but understandable analysis. After the manager has built up confidence in this model, additional detail and sophistication can be added, perhaps progressively, only a bit at a time. This process requires an investment of time on the part of the manager and sincere interest on the part of the specialist in solving the manager's real problem, rather than in creating and trying to explain sophisticated models. This progressive model building is often referred to as the bootstrapping approach, and it is the most important factor in determining successful implementation of a decision model. Moreover, the bootstrapping approach simplifies the otherwise difficult task of model validation and verification.

Decision analysts provide quantitative support for decision-makers in all areas, including engineers, analysts in planning offices and public agencies, project-management consultants, manufacturing-process planners, financial and economic analysts, experts supporting medical/technological diagnosis, and so on.


One "pole" on this scale is deterministic, such as the carpenter's problem. The opposite "pole" is pure uncertainty. Between these two extremes are problems under risk. The main idea here is that for any given problem, the degree of certainty varies among managers depending upon how much knowledge each one has about the same problem. This is why different persons may recommend different solutions to the same problem.

Probability is an instrument used to measure the likelihood of occurrence of an event. When you use probability to express your uncertainty, the deterministic side has a probability of 1 (or zero), while the other end has a flat (all equally probable) probability. For example, if you are certain of the occurrence (or non-occurrence) of an event, you use the probability of one (or zero). If you are uncertain, and would use the expression "I really don't know," the event may or may not occur with a probability of 50%. This is the Bayesian notion that probability assessment is always subjective. That is, the probability always depends upon how much the decision maker knows. If someone knows all there is to know, then the probability will converge to either 1 or 0.

The decision situations with flat uncertainty have the largest risk. For simplicity, consider a case where there are only two outcomes, with one having a probability of p. Thus, the variation in the states of nature is p(1-p). The largest variation occurs if we set p = 50%, giving each outcome an equal chance. In such a case, the quality of information is at its lowest level. Remember from your Statistics course that the quality of information and variation are inversely related. That is, larger variation in data implies lower quality data (i.e., information).
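This relationship can be verified with a short sketch: the variance p(1 - p) of a two-outcome state of nature peaks at p = 0.5, the flat-uncertainty case.

```python
# Variance of a two-outcome (Bernoulli) state of nature: p(1 - p).
def bernoulli_variance(p: float) -> float:
    return p * (1 - p)

# The variance rises toward p = 0.5 (each outcome equally likely),
# which is the lowest-quality-information, highest-risk case.
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(p, round(bernoulli_variance(p), 2))
```

The symmetry of the printed values around p = 0.5 shows why moving away from flat uncertainty, in either direction, reduces risk.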

Relevant information and knowledge used to solve a decision problem sharpens our flat probability . Useful information moves the location of a problem from the pure uncertain "pole" towards the deterministic "pole".

Probability assessment is nothing more than the quantification of uncertainty. In other words, quantification of uncertainty allows for the communication of uncertainty between persons. There can be uncertainties regarding events, states of the world, beliefs, and so on. Probability is the tool for both communicating uncertainty and managing it (taming chance).

There are different types of decision models that help to analyze the different scenarios. Depending on the amount and degree of knowledge we have, the three most widely used types are:

Decision-making under pure uncertainty

Decision-making under risk

Decision-making by buying information (pushing the problem towards the deterministic "pole")

In decision-making under pure uncertainty, the decision maker has absolutely no knowledge, not even about the likelihood of occurrence for any state of nature. In such situations, the decision-maker's behavior is purely based on his/her attitude toward the unknown . Some of these behaviors are optimistic, pessimistic, and least regret, among others. The most optimistic person I ever met was undoubtedly a young artist in Paris who, without a franc in his pocket, went into a swanky restaurant and ate dozens of oysters in hopes of finding a pearl to pay the bill.

Optimist: The glass is half-full.

Pessimist: The glass is half-empty.

Manager: The glass is twice as large as it needs to be.

Or, as in the following metaphor of a captain in a rough sea:

The pessimist complains about the wind;

the optimist expects it to change;

the realist adjusts the sails.

Optimists are right; so are the pessimists. It is up to you to choose which you will be. The optimist sees opportunity in every problem; the pessimist sees problem in every opportunity.

Both optimists and pessimists contribute to our society. The optimist invents the airplane and the pessimist the parachute.

Whenever the decision maker has some knowledge regarding the states of nature, he/she may be able to assign subjective probability for the occurrence of each state of nature. By doing so, the problem is then classified as decision making under risk.

In many cases, the decision-maker may need an expert's judgment to sharpen his/her uncertainties with respect to the likelihood of each state of nature. In such a case, the decision-maker may buy the expert's relevant knowledge in order to make a better decision. The procedure used to incorporate the expert's advice with the decision maker's probabilities assessment is known as the Bayesian approach.

For example, in an investment decision-making situation, one is faced with the following question: What will the state of the economy be next year? Suppose we limit the possibilities to Growth (G), Same (S), or Decline (D). Then, a typical representation of our uncertainty could be depicted as follows:

Further Readings:

Howson C., and P. Urbach, Scientific Reasoning: The Bayesian Approach, Open Court Publ., Chicago, 1993.

Gheorghe A., Decision Processes in Dynamic Probabilistic Systems, Kluwer Academic, 1990.

Kouvelis P., and G. Yu, Robust Discrete Optimization and its Applications, Kluwer Academic Publishers, 1997. Provides a comprehensive discussion of motivation for sources of uncertainty in decision process, and a good discussion on minmax regret and its advantages over other criteria.



Decision Making Under Pure Uncertainty



Personality Types and Decision Making:

Pessimism , or Conservative (MaxMin). Worse case scenario. Bad things always happen to me.

a) Write the minimum payoff in each action row,
b) Choose the maximum of these minima and take that action.

B   3
S  -2
D   7 *
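The MaxMin rule can be sketched in a few lines of Python. The payoff table is the investment example used throughout this section (rows are the actions Bonds, Stocks, Deposit; columns are the four states of the economy):

```python
# MaxMin (pessimist): find each action's worst payoff, then take the
# action whose worst payoff is largest.
payoffs = {
    "Bonds":   [12, 8, 7, 3],
    "Stocks":  [15, 9, 5, -2],
    "Deposit": [7, 7, 7, 7],
}

row_minima = {a: min(row) for a, row in payoffs.items()}   # step (a)
best = max(row_minima, key=row_minima.get)                 # step (b)
print(row_minima)  # {'Bonds': 3, 'Stocks': -2, 'Deposit': 7}
print(best)        # Deposit
```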

Optimism , or Aggressive (MaxMax). Good things always happen to me.

a) Write the maximum payoff in each action row,
b) Choose the maximum of these and take that action.

B  12
S  15 *
D   7
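Similarly, a minimal sketch of the MaxMax rule on the same investment payoff table:

```python
# MaxMax (optimist): find each action's best payoff, then take the
# action whose best payoff is largest.
payoffs = {
    "Bonds":   [12, 8, 7, 3],
    "Stocks":  [15, 9, 5, -2],
    "Deposit": [7, 7, 7, 7],
}

row_maxima = {a: max(row) for a, row in payoffs.items()}   # step (a)
best = max(row_maxima, key=row_maxima.get)                 # step (b)
print(row_maxima)  # {'Bonds': 12, 'Stocks': 15, 'Deposit': 7}
print(best)        # Stocks
```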

Coefficient of Optimism (Hurwicz's Index) , Middle of the road: I am neither too optimistic nor too pessimistic.

a) Choose a coefficient α between 0 and 1, where 1 means fully optimistic and 0 means fully pessimistic,

b) Find the largest and smallest payoff for each action,

c) Multiply the largest payoff (row-wise) by α and the smallest by (1 - α),

d) Take the action with the largest sum.

For example, for α = 0.7, we have

B  (0.7)(12) + (0.3)(3)  = 9.3
S  (0.7)(15) + (0.3)(-2) = 9.9 *
D  (0.7)(7)  + (0.3)(7)  = 7.0
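A short sketch of the Hurwicz computation, reproducing the worked example above on the section's investment payoff table:

```python
# Hurwicz criterion: weight each action's best payoff by the coefficient
# of optimism alpha, and its worst payoff by (1 - alpha).
payoffs = {
    "Bonds":   [12, 8, 7, 3],
    "Stocks":  [15, 9, 5, -2],
    "Deposit": [7, 7, 7, 7],
}
alpha = 0.7  # 1 = fully optimistic, 0 = fully pessimistic

scores = {a: alpha * max(row) + (1 - alpha) * min(row)
          for a, row in payoffs.items()}
best = max(scores, key=scores.get)
print(best)  # Stocks (the scores are about 9.3, 9.9, and 7.0)
```

Varying alpha between 0 and 1 sweeps the recommendation between the MaxMin and MaxMax choices.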

Minimize Regret: (Savage's Opportunity Loss) I hate regrets and therefore I have to minimize my regrets. My decision should be made so that it is worth repeating. I should only do those things that I feel I could happily repeat. This reduces the chance that the outcome will make me feel regretful, or disappointed, or that it will be an unpleasant surprise.

Regret is the payoff of what would have been the best decision in the circumstances minus the payoff of the actual decision in the circumstances. Therefore, the first step is to set up the regret table:



a) Take the largest number in each state-of-nature column (say, L).

b) Subtract every number in that column from it (i.e., the regret L - Xi,j).

c) Take the maximum regret for each action.

d) Choose the action with the minimum number from step (c).

The Regret Matrix

            G         MG       NC       L         Max regret
Bonds    (15-12)    (9-8)    (7-7)    (7-3)          4 *
Stocks   (15-15)    (9-9)    (7-5)    (7-(-2))       9
Deposit  (15-7)     (9-7)    (7-7)    (7-7)          8
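The regret computation can be sketched as follows; the code rebuilds the regret matrix above from the section's investment payoff table:

```python
# Savage's minimax-regret rule.
payoffs = {
    "Bonds":   [12, 8, 7, 3],
    "Stocks":  [15, 9, 5, -2],
    "Deposit": [7, 7, 7, 7],
}

# Regret = best payoff in the column (state of nature) minus the payoff received.
col_best = [max(col) for col in zip(*payoffs.values())]        # [15, 9, 7, 7]
regrets = {a: [b - x for b, x in zip(col_best, row)]
           for a, row in payoffs.items()}
# Pick the action whose largest regret is smallest.
best = min(regrets, key=lambda a: max(regrets[a]))
print(regrets)  # {'Bonds': [3, 1, 0, 4], 'Stocks': [0, 0, 2, 9], 'Deposit': [8, 2, 0, 0]}
print(best)     # Bonds
```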





You may try checking your computations using Decision Making Under Pure Uncertainty JavaScript, and then performing some numerical experimentation for a deeper understanding of the concepts.

Limitations of Decision Making under Pure Uncertainty

Decision analysis in general assumes that the decision-maker faces a decision problem where he or she must choose exactly one option from a set of options. In some cases this limitation can be overcome by formulating the decision making under uncertainty as a zero-sum two-person game. In decision making under pure uncertainty, the decision-maker has no knowledge regarding which state of nature is "most likely" to happen. He or she is probabilistically ignorant concerning the state of nature; therefore, he or she cannot be optimistic or pessimistic. In such a case, the decision-maker invokes considerations of security. Notice that any technique used in decision making under pure uncertainty is appropriate only for private-life decisions. Moreover, a public person (i.e., you, the manager) has to have some knowledge of the states of nature in order to predict their probabilities. Otherwise, the decision-maker is not capable of making a reasonable and defensible decision.

You might try to use Decision Making Under Uncertainty JavaScript E-lab for checking your computation, performing numerical experimentation for a deeper understanding, and stability analysis of your decision by altering the problem's parameters.

Further Readings:

Biswas T., Decision Making Under Uncertainty, St. Martin's Press, 1997.

Driver M., K. Brousseau, and Ph. Hunsaker, The Dynamic Decisionmaker: Five Decision Styles for Executive and Business Success, Harper & Row, 1990.

Eiser J., Attitudes and Decisions, Routledge, 1988.

Flin R., et al., (Ed.), Decision Making Under Stress: Emerging Themes and Applications, Ashgate Pub., 1997.

Ghemawat P., Commitment: The Dynamic of Strategy, Maxwell Macmillan Int., 1991.

Goodwin P., and G. Wright, Decision Analysis for Management Judgment, Wiley, 1998.



Decision Making Under Risk

The problem is defined and all feasible alternatives are considered. The possible outcomes for each alternative are evaluated. Outcomes are discussed based on their monetary payoffs or net gain in reference to assets or time. Various uncertainties are quantified in terms of probabilities. The quality of the optimal strategy depends upon the quality of the judgments. The decision-maker should identify and examine the sensitivity of the optimal strategy with respect to the crucial factors.

Whenever the decision maker has some knowledge regarding the states of nature, he/she may be able to assign subjective probability estimates to the occurrence of each state. In such cases, the problem is classified as decision making under risk. The process of decision making under risk is as follows:

a) Use the information you have to assign your beliefs (called subjective probabilities) regarding each state of nature, p(s),

b) Each action has a payoff associated with each of the states of nature, X(a,s),

c) We compute the expected payoff, also called the return (R), for each action: R(a) = Sum of [X(a,s) p(s)],

d) We accept the principle that we should maximize (or minimize, in the case of costs) the expected payoff,

e) Execute the action which maximizes (or minimizes) R(a).





Expected Payoff: The actual outcome will not equal the expected value. What you get is not what you expect, i.e. the "Great Expectations!"

a) For each action, multiply each probability by its payoff and then,

b) Add up the results by row,

c) Choose the largest number and take that action.



           G (0.4)    MG (0.3)    NC (0.2)    L (0.1)      Exp. Value
Bonds     0.4(12)  +  0.3(8)  +   0.2(7)  +  0.1(3)    =      8.9
Stocks    0.4(15)  +  0.3(9)  +   0.2(5)  +  0.1(-2)   =      9.5 *
Deposit   0.4(7)   +  0.3(7)  +   0.2(7)  +  0.1(7)    =      7
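The expected-payoff rule can be sketched in Python with the same payoff matrix and the stated subjective probabilities:

```python
# Expected payoff under risk: weight each payoff by the probability of
# its state of nature and pick the action with the largest expected value.
payoffs = {
    "Bonds":   [12, 8, 7, 3],
    "Stocks":  [15, 9, 5, -2],
    "Deposit": [7, 7, 7, 7],
}
probs = [0.4, 0.3, 0.2, 0.1]  # P(G), P(MG), P(NC), P(L)

expected = {a: sum(p * x for p, x in zip(probs, row))
            for a, row in payoffs.items()}
best = max(expected, key=expected.get)
print(expected)  # Bonds ~ 8.9, Stocks ~ 9.5, Deposit = 7.0
print(best)      # Stocks
```
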





The Most Probable States of Nature (good for non-repetitive decisions)

a) Take the state of nature with the highest probability (subjectively break any ties),

b) In that column, choose action with greatest payoff.

In our numerical example, there is a 40% chance of growth so we must buy stocks.

Expected Opportunity Loss (EOL):

a) Set up a loss payoff matrix by taking the largest number in each state-of-nature column (say, L), and subtracting all the numbers in that column from it, L - Xij,

b) For each action, multiply each probability by the corresponding loss and then add across the row,

c) Choose the action with the smallest EOL.





Loss Payoff Matrix

           G (0.4)        MG (0.3)       NC (0.2)       L (0.1)       EOL
Bonds    0.4(15-12)  +  0.3(9-8)   +  0.2(7-7)   +  0.1(7-3)    =     1.9
Stocks   0.4(15-15)  +  0.3(9-9)   +  0.2(7-5)   +  0.1(7+2)    =     1.3 *
Deposit  0.4(15-7)   +  0.3(9-7)   +  0.2(7-7)   +  0.1(7-7)    =     3.8
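The EOL calculation is a probability-weighted version of the regret table; a minimal Python sketch using the example's data:

```python
# Expected opportunity loss (EOL): regret weighted by state probabilities.
payoffs = {
    "Bonds":   [12, 8, 7, 3],
    "Stocks":  [15, 9, 5, -2],
    "Deposit": [7, 7, 7, 7],
}
probs = [0.4, 0.3, 0.2, 0.1]

col_max = [max(row[j] for row in payoffs.values()) for j in range(4)]  # 15, 9, 7, 7
eol = {a: sum(p * (m - x) for p, m, x in zip(probs, col_max, row))
       for a, row in payoffs.items()}
best = min(eol, key=eol.get)
print(best)  # Stocks (EOL ~ 1.3)
```
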

Computation of the Expected Value of Perfect Information (EVPI)

EVPI helps to determine the worth of an insider who possesses perfect information. Recall that EVPI equals the smallest expected opportunity loss (EOL).

a) Take the maximum payoff for each state of nature,

b) Multiply each case by the probability for that state of nature and then add them up,

c) Subtract the expected payoff from the number obtained in step (b)





G     15(0.4) = 6.0
MG     9(0.3) = 2.7
NC     7(0.2) = 1.4
L      7(0.1) = 0.7
              ------
               10.8

Therefore, EVPI = 10.8 - 9.5 = 1.3.
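The EVPI calculation can be verified with a short Python sketch using the example's probabilities and payoffs:

```python
# Expected value of perfect information (EVPI): the expected payoff with
# perfect information minus the best expected payoff without it.
probs = [0.4, 0.3, 0.2, 0.1]
col_max = [15, 9, 7, 7]   # best payoff in each state of nature

perfect = sum(p * m for p, m in zip(probs, col_max))  # ~ 10.8
evpi = perfect - 9.5      # 9.5 = expected payoff of the best action (Stocks)
print(round(evpi, 1))     # 1.3, the same as the smallest EOL
```
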

The efficiency of the perfect information is defined as 100 [EVPI/(Expected Payoff)]%.

Since the payoffs in this example are rates of return, EVPI = 1.3 means the perfect information is worth 1.3% of the amount invested. Therefore, if the information costs more than 1.3% of the investment, don't buy it. For example, if you are going to invest $100,000, the maximum you should pay for the information is [$100,000 × 1.3%] = $1,300.

I Know Nothing (the Laplace equal-likelihood principle): Every state of nature is assigned an equal likelihood. Since I know nothing about the states of nature, every state is equally likely to occur:

a) Assign an equal probability to each state of nature (i.e., a flat probability),

b) Multiply each payoff by its probability,

c) Add up each action row and put the sum in the Expected Payoff column,

d) Choose the largest number from step (c) and perform that action.



             G         MG        NC         L       Exp. Payoff
Bonds     0.25(12)  0.25(8)   0.25(7)   0.25(3)       7.5 *
Stocks    0.25(15)  0.25(9)   0.25(5)   0.25(-2)      6.75
Deposit   0.25(7)   0.25(7)   0.25(7)   0.25(7)       7
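Under the Laplace principle, the expected payoff of each action is simply its average payoff, as this sketch shows:

```python
# Laplace (equal-likelihood) principle: every state gets a flat probability,
# so the expected payoff of an action is the mean of its row.
payoffs = {
    "Bonds":   [12, 8, 7, 3],
    "Stocks":  [15, 9, 5, -2],
    "Deposit": [7, 7, 7, 7],
}
n = 4  # number of states, each with probability 1/4

expected = {a: sum(row) / n for a, row in payoffs.items()}
best = max(expected, key=expected.get)
print(expected)  # {'Bonds': 7.5, 'Stocks': 6.75, 'Deposit': 7.0}
print(best)      # Bonds
```
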

A Discussion on Expected Opportunity Loss (Expected Regret): Comparing a decision outcome to its alternatives appears to be an important component of decision-making. One important factor is the emotion of regret. This occurs when a decision outcome is compared to the outcome that would have taken place had a different decision been made. This is in contrast to disappointment, which results from comparing one outcome to another as a result of the same decision. Accordingly, large contrasts with counterfactual results have a disproportionate influence on decision making.

Regret results compare a decision outcome with what might have been. Therefore, it depends upon the feedback available to decision makers as to which outcome the alternative option would have yielded. Altering the potential for regret by manipulating uncertainty resolution reveals that the decision-making behavior that appears to be risk averse can actually be attributed to regret aversion.

There is some indication that regret may be related to the distinction between acts and omissions. Some studies have found that regret is more intense following an action than an omission. For example, in one study, participants concluded that a decision maker who switched stock funds from one company to another and lost money would feel more regret than another decision maker who decided against switching the stock funds but also lost money. People usually assigned a higher value to an inferior outcome when it resulted from an act rather than from an omission. Presumably, this is a way of counteracting the regret that could have resulted from the act.

You might like to use Making Risky Decisions JavaScript E-lab for checking your computation, performing numerical experimentation for a deeper understanding, and stability analysis of your decision by altering the problem's parameters.

Further Readings:

Beroggi G., Decision Modeling in Policy Management: An Introduction to the Analytic Concepts, Boston, Kluwer Academic Publishers, 1999.

George Ch., Decision Making Under Uncertainty: An Applied Statistics Approach, Praeger Pub., 1991.

Rowe W., An Anatomy of Risk, R.E. Krieger Pub. Co., 1988.

Suijs J., Cooperative Decision-Making Under Risk, Kluwer Academic, 1999.



Making a Better Decision by Buying Reliable Information (Bayesian Approach)





States of Nature

                      High Sales    Med. Sales    Low Sales
                        A (0.2)       B (0.5)      C (0.3)
A1 (develop)              3000          2000        -6000
A2 (don't develop)           0             0            0

The probabilities of the states of nature represent the decision-maker's (e.g. manager) degree of uncertainties and personal judgment on the occurrence of each state. We will refer to these subjective probability assessments as 'prior' probabilities.

The expected payoff for each action is:

A1 = 0.2(3000) + 0.5(2000) + 0.3(-6000) = -$200, and A2 = $0;

so the company chooses A2 because of the expected loss associated with A1, and decides not to develop.

However, the manager is hesitant about this decision. Based on "nothing ventured, nothing gained" the company is thinking about seeking help from a marketing research firm. The marketing research firm will assess the size of the product's market by means of a survey.

Now the manager is faced with a new decision to make: which marketing research company should he/she consult? The manager has to make a decision as to how 'reliable' the consulting firm is. By sampling and then reviewing the past performance of the consultant, we can develop the following reliability matrix:

                 Given What Actually Happened in the Past
                            A        B        C
What the          Ap       0.8      0.1      0.1
Consultant        Bp       0.1      0.9      0.2
Predicted         Cp       0.1      0.0      0.7

All marketing research firms keep records (i.e., historical data) of the performance of their past predictions. These records are available to their clients free of charge. To construct the first column of the reliability matrix, consider the firm's performance records for similar products that actually had high sales (A): find the percentage of those products for which the firm correctly predicted high sales, and the percentages for which it incorrectly predicted medium sales (B) or little to no sales (C). These percentages are presented by

P(Ap|A) = 0.8,    P(Bp|A) = 0.1,    P(Cp|A) = 0.1,

in the first column of the above table, respectively. Similar analysis should be conducted to construct the remaining columns of the reliability matrix.

Note that, for consistency, the entries in each column of the above reliability matrix should add up to one. While this matrix provides conditional probabilities such as P(Ap|A) = 0.8, the important information the company needs is the reverse form of these conditional probabilities. In this example, what is the numerical value of P(A|Ap)? That is, given that the marketing firm predicts A is going to happen, what is the chance that A actually will happen? This important information can be obtained by applying Bayes' law (from your probability and statistics course) as follows:

a) Take the prior probabilities and multiply them "down" the columns of the above matrix,

b) Add the rows across to get the sum,

c) Normalize the values (i.e., make the probabilities add up to 1) by dividing each entry by its row sum found in step (b).

                 A                  B                  C            SUM
Ap      0.2(0.8) = 0.16    0.5(0.1) = 0.05    0.3(0.1) = 0.03      0.24
Bp      0.2(0.1) = 0.02    0.5(0.9) = 0.45    0.3(0.2) = 0.06      0.53
Cp      0.2(0.1) = 0.02    0.5(0.0) = 0.00    0.3(0.7) = 0.21      0.23

The resulting revised (posterior) probabilities are:

                 A                   B                   C
Ap      (.16/.24) = .667    (.05/.24) = .208    (.03/.24) = .125
Bp      (.02/.53) = .038    (.45/.53) = .849    (.06/.53) = .113
Cp      (.02/.23) = .087    (.00/.23) = .000    (.21/.23) = .913

You might like to use the Computational Aspects of Bayes' Revised Probability JavaScript E-lab for checking your computation, performing numerical experimentation for a deeper understanding, and stability analysis of your decision by altering the problem's parameters.

d) Draw the decision tree. Many managerial problems, such as this example, involve a sequence of decisions. When a decision situation requires a series of decisions, the payoff table cannot accommodate the multiple layers of decision-making; thus, a decision tree is needed.

Do not gather useless information that cannot change a decision. A question for you: In a game, a player is presented two envelopes containing money. He is told that one envelope contains twice as much money as the other, but he does not know which one contains the larger amount. The player may pick one envelope at will, and after he has made a decision, he is offered the chance to exchange his envelope for the other one.
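The revised probabilities can be reproduced with a short Python sketch; the prior and reliability matrix are those of the example (rows of the reliability matrix are the consultant's predictions Ap, Bp, Cp; columns are the actual outcomes A, B, C):

```python
# Bayes' rule: multiply the prior "down" each column, sum across each row,
# then normalize each row by its sum to obtain the posterior probabilities.
prior = [0.2, 0.5, 0.3]            # P(A), P(B), P(C)
reliability = [[0.8, 0.1, 0.1],    # P(Ap|A), P(Ap|B), P(Ap|C)
               [0.1, 0.9, 0.2],    # P(Bp|.)
               [0.1, 0.0, 0.7]]    # P(Cp|.)

joint = [[r * p for r, p in zip(row, prior)] for row in reliability]
marginal = [sum(row) for row in joint]  # P(Ap), P(Bp), P(Cp): 0.24, 0.53, 0.23
posterior = [[x / m for x in row] for row, m in zip(joint, marginal)]
print([[round(x, 3) for x in row] for row in posterior])
```
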

If the player is allowed to see what's inside the envelope he has selected at first, should the player swap, that is, exchange the envelopes?

The outcome of a good decision may not be good; therefore, one must not confuse the quality of the outcome with the quality of the decision.

As Seneca put it, "When the words are clear, then the thought will be also."

Decision Tree and Influence Diagram

Decision Tree Approach: A decision tree is a chronological representation of the decision process. It utilizes a network of two types of nodes: decision (choice) nodes (represented by square shapes), and states of nature (chance) nodes (represented by circles). Construct a decision tree utilizing the logic of the problem. For the chance nodes, ensure that the probabilities along any outgoing branch sum to one. Calculate the expected payoffs by rolling the tree backward (i.e., starting at the right and working toward the left).

You may imagine driving your car, starting at the foot of the decision tree and moving to the right along the branches. At each square you have control: you make a decision and then turn the wheel of your car. At each circle, Lady Fortuna takes over the wheel and you are powerless.

Here is a step-by-step description of how to build a decision tree:

a) Draw the decision tree using squares to represent decisions and circles to represent uncertainty,

b) Evaluate the decision tree to make sure all possible outcomes are included,

c) Calculate the tree values working from the right side back to the left,

d) Calculate the values of uncertain outcome nodes by multiplying the value of the outcomes by their probabilities (i.e., expected values).

On the tree, the value of a node can be calculated when we have the values for all the nodes following it. The value of a choice node is the largest value of all the nodes immediately following it. The value of a chance node is the expected value of the nodes following it, using the probabilities on the arcs. By rolling the tree backward, from its branches toward its root, you can compute the value of all nodes, including the root of the tree. Putting these numerical results on the decision tree results in the following graph:

A Typical Decision Tree


Determine the best decision for the tree by starting at its root and going forward.
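The backward-rolling calculation can be sketched as a small recursive function. The tuple-based tree encoding below is a hypothetical one (not taken from the figure), illustrated on the no-consultant part of the product-development example:

```python
# Backward induction ("rolling the tree backward"):
# a choice node takes the max of its children; a chance node takes
# the probability-weighted average; a leaf returns its payoff.
def rollback(node):
    kind, body = node
    if kind == "leaf":
        return body                              # terminal payoff
    if kind == "decision":
        return max(rollback(c) for c in body)    # best branch
    return sum(p * rollback(c) for p, c in body) # chance node

# Prior analysis of the product-development decision (no consultant):
develop = ("chance", [(0.2, ("leaf", 3000)),
                      (0.5, ("leaf", 2000)),
                      (0.3, ("leaf", -6000))])
root = ("decision", [develop, ("leaf", 0)])      # develop vs. don't develop
print(rollback(develop))  # -200.0 (expected loss of developing)
print(rollback(root))     # 0 (so: don't develop, absent further information)
```
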

Based on the preceding decision tree, our decision is as follows:

Hire the consultant, and then wait for the consultant's report.

If the report predicts either high or medium sales, then go ahead and manufacture the product.

Otherwise, do not manufacture the product.

Check the consultant's efficiency rate by computing the following ratio:

(Expected payoff using the consultant, in dollars) / EVPI.

Using the decision tree, the expected payoff if we hire the consultant is:

EP = 1000 - 500 = 500,

EVPI = .2(3000) + .5(2000) + .3(0) = 1600.

Therefore, the efficiency of this consultant is: 500/1600 ≈ 31%

If the manager wishes to rely solely on the marketing research firm's recommendations, then we assign a flat prior probability [as opposed to the (0.2, 0.5, 0.3) used in our numerical example].

Clearly, the manufacturer is concerned with measuring the risk of the above decision, based on the decision tree.

Coefficient of Variation as a Risk Measuring Tool and Decision Procedure: Based on the above decision and its decision tree, one might develop a coefficient of variation (C.V.) risk-tree, as depicted below:

Coefficient of Variation as a Risk Measuring Tool and Decision Procedure


Notice that the above risk-tree is extracted from the decision tree, with the C.V. numerical values at the nodes relevant to the recommended decision; for example, the consultant's fee is already subtracted from the payoffs.

From the above risk-tree, we notice that this consulting firm is likely (with probability 0.53) to recommend Bp (medium sales), and if you then decide to manufacture the product, the resulting coefficient of variation is very high (403%), compared with the other branch of the tree (i.e., 251%).
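A sketch of the coefficient-of-variation calculation, under the assumptions that the $500 consulting fee has already been subtracted from the payoffs (as the text indicates) and that the weights are the posterior probabilities given each prediction; these assumptions reproduce the 403% and 251% figures:

```python
import math

def cv(probs, xs):
    """Coefficient of variation: 100 * standard deviation / |mean|, in %."""
    mean = sum(p * x for p, x in zip(probs, xs))
    var = sum(p * (x - mean) ** 2 for p, x in zip(probs, xs))
    return 100 * math.sqrt(var) / abs(mean)

# Payoffs net of the assumed $500 consultant's fee: 3000-500, 2000-500, -6000-500
net = [2500, 1500, -6500]

print(round(cv([0.038, 0.849, 0.113], net)))  # given Bp: 403 (%)
print(round(cv([0.667, 0.208, 0.125], net)))  # given Ap: 251 (%)
```
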

Clearly, one must not consider only one consulting firm; rather, one must consider several potential consulting firms during the decision-making planning stage. A risk-tree should then be constructed for each consulting firm in order to measure and compare them before arriving at the final decision for implementation.

The Impact of the Prior Probability and the Reliability Matrix on Your Decision: To study how important your prior knowledge and/or the accuracy of the information expected from the consultant is to your decision, I suggest redoing the above numerical example and performing some numerical sensitivity analysis. You may start with the following extreme and interesting cases, using this JavaScript for the needed computations:

Consider a flat prior, without changing the reliability matrix.

Consider a perfect reliability matrix (i.e., with an identity matrix), without changing the prior.

Consider a perfect prior, without changing the reliability matrix.

Consider a flat reliability matrix (i.e., with all equal elements), without changing the prior.

Consider the consultant prediction probabilities as your own prior, without changing the reliability matrix.

Influence diagrams: As can be seen in the decision tree examples, the branch-and-node description of sequential decision problems often becomes very complicated. At times it is downright difficult to draw the tree in a manner that preserves the relationships that actually drive the decision. The need to maintain validity, and the rapid increase in complexity that often arises from the liberal use of recursive structures, can make the decision process difficult to describe to others. The reason for this complexity is that the actual computational mechanism used to analyze the tree is embodied directly within the tree and its branches. The probabilities and values required to calculate the expected value of each branch are explicitly defined at each node.