The satisficing vs optimizing debate

Last modified: Friday, 30-Dec-2011 13:11:57 MST

On this page I discuss issues connected to the long-standing "Satisficing vs Optimizing" debate. For the benefit of readers who are not familiar with this debate, I ought to point out that its main thesis is that satisficing has an advantage over optimizing. This discussion is related to my Info-Gap campaign.

I must admit that I find this debate tiresome. Nevertheless, I decided to join the fray so as to make the point that ... the debate is wasteful and counter-productive. After all, it is very easy to show that any "satisficing problem" can be formulated as an equivalent "optimization problem", hence at best the debate is about "style" rather than "substance". What is more, this entire discussion about the presumed superiority of "satisficing" over "optimizing" seems to be conducted without serious attention being given to the fact that obtaining a "satisficing" solution may not be sufficient. Obviously, this is so because a "satisficing" solution may turn out to be dominated — in the Pareto sense — by other "satisficing" solutions. In short, a solution that is better than your "satisficing" solution may well be available to you. So why shouldn't you be able to benefit from it?

My point, then, is that advocates of the superiority of "satisficing" over "optimizing" must in the first instance make a case for the proposition that it is a priori better to opt for a "good" solution even though we can obtain a "better" solution. Likewise, they must show us why we ought to be satisfied with a "better" solution if we can obtain the "best" solution! For instance, are they suggesting that I settle for — that is, be satisfied with — 4.25% p.a. interest on my 3-month term deposit if I can get 4.75%? Or that I pay A$3.99 for a bar of dark chocolate if I can pay A$2.50?

Of course, it is a commonplace that obtaining the "best" solution is not always an easy task, so cases will obviously arise where we have no choice but to make do with a solution that is not as good as the "best" solution. But the point brought out by this trivial fact is not that we must therefore base our quest for solutions to the problems confronting us on the "satisficing" premise.
All that this fact brings out is that we ought to distinguish between two separate issues:

1. The desirability of obtaining the "best" solution.
2. The difficulties involved in obtaining the "best" solution.

I submit that a good deal of the "Satisficing vs Optimizing" debate is due to a simplistic treatment of this distinction. But before we proceed to examine the technical aspects of this debate, let us remind ourselves of the basic question on the agenda.

Consider the classical shortest path problem: you want to find the shortest path from node A to node B on a graph. Obviously, this is an optimization, namely a minimization, problem. An intuitive satisficing counterpart of this problem could, for instance, be the following: find a path from node A to node B whose length is in the interval ΔT = [t_L, t_U]. Thus, if ΔT = [125,176], a solution to the satisficing problem is a path from node A to node B whose length is not smaller than 125 and not greater than 176. Clearly, in general the (optimal) solution to the optimization problem is not necessarily a (feasible) solution to the satisficing problem — and vice versa. The question is then: in real-world situations, should we formulate our "path problem" as an optimization problem or as a satisficing problem? The main point that I stress in this discussion is that the above "satisficing path problem" can be formulated as an optimization problem and that the recipe for this task is straightforward. Therefore, the important thing is not whether we optimize or satisfice, but rather what we optimize and what we satisfice.
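To make the recipe concrete, here is a small sketch in Python. The graph, its edge lengths, and the interval are all hypothetical; the point is only that the satisficing requirement "length in [125,176]" becomes the objective of a maximization problem via the characteristic function of the constraint.

```python
# Hypothetical weighted digraph: node -> {neighbor: edge length}.
# The numbers are illustrative only.
GRAPH = {
    "A": {"C": 60, "D": 40},
    "C": {"B": 70, "D": 20},
    "D": {"B": 90, "C": 30},
    "B": {},
}

def all_simple_paths(graph, src, dst, path=None, length=0):
    """Enumerate (path, length) pairs for every simple path src -> dst."""
    path = (path or []) + [src]
    if src == dst:
        yield path, length
        return
    for nxt, w in graph[src].items():
        if nxt not in path:
            yield from all_simple_paths(graph, nxt, dst, path, length + w)

def satisfice_as_optimize(graph, src, dst, lo, hi):
    """Recast 'find a path whose length is in [lo, hi]' as the
    optimization problem max f(path), where f is the characteristic
    function of the constraint."""
    f = lambda length: 1 if lo <= length <= hi else 0
    path, length = max(all_simple_paths(graph, src, dst),
                       key=lambda pl: f(pl[1]))
    return (path, length) if f(length) == 1 else None  # None: infeasible

print(satisfice_as_optimize(GRAPH, "A", "B", 125, 176))  # -> (['A', 'C', 'B'], 130)
```

Any maximizer of f with objective value 1 is a feasible solution to the satisficing problem, and vice versa; if the maximal objective value is 0, the satisficing problem has no feasible solution.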

My decision to enter the ring was prompted by the incessant misguided discussion of this topic in the Info-Gap literature, where it is repeatedly argued that optimal solutions are not robust. Apparently, the Info-Gap folks are unaware of the existence of the vibrant field of robust optimization. More than this, Info-Gap proponents seem equally unaware of the simple fact that in cases where robustness is a factor, robustness can and should be incorporated in the formulation of the optimization model, to ensure that the optimal solutions generated by this model are robust. And if robustness is not a factor, then it will not be incorporated in the model, full stop.

Before noting Info-Gap's position on this matter, I should point out that although in the 2006 edition of the Info-Gap book the robustness model is viewed as a "robust satisficing" model rather than a "robust optimizing" model — as it is viewed in the 2001 edition of the book — Info-Gap's generic robustness model describes a run-of-the-mill ... optimization (maximization) problem. So Info-Gap's thesis is that in the face of severe uncertainty it is "better" to maximize the robustness (alpha) of the performance constraint r_c ≤ R(q,u) than to optimize the performance function R. Details on this model can be found elsewhere.
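For reference, the generic Info-Gap robustness model takes, as I understand it, roughly this form (the notation follows the constraint above: ũ denotes the given estimate of the uncertainty parameter u, and U(α,ũ) the region of uncertainty of size α around ũ):

    α̂(q) := max {α ≥ 0 : r_c ≤ R(q,u) for all u in U(α,ũ)}

Plainly, whatever label one attaches to it, this is a garden-variety maximization problem: it maximizes, over α, the size of the region of uncertainty on which decision q satisfies the performance constraint.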

The point that I make here is that the "satisficing vs optimizing" issue is a ... non-issue. What is an issue, indeed an important one, is what we should seek to satisfice and what we should seek to optimize. So what, then, is the "satisficing vs optimizing" debate all about? For one thing, it is not about substance. It is about style, terminology, and buzzwords. Of course, for certain purposes style, terminology, and buzzwords are more important than substance. Hence, the debate.

To make sense of what is at issue here, let us consider the following two problems, where X is a given set and f is a real-valued function on X:

Satisficing Problem: Find an x in X that satisfies a given list of constraints.

Optimization Problem: Find an x' in X that optimizes f(x) over X.

Now consider this:

The Fundamental Theorem of the Satisficing vs Optimizing debate: Any satisficing problem on Planet Earth can be formulated as an equivalent optimization problem, so that any feasible solution to the satisficing problem is optimal with respect to the optimization problem, and vice versa.

A number of comments on this lovely theorem: I do not know who first proved this result. I for one have been using it frequently in my research and teaching since about 1973, and I know for a fact that it has been used extensively, for many years now, in the areas of Operations Research, Optimization, Computer Science, Statistics, etc.

What it implies is that the important question is not whether we should satisfice or optimize. The important question is what should be satisficed and what should be optimized.

It shows that off-the-cuff claims such as "satisficing is more robust than optimizing" and "it is better to satisfice than to optimize" are misguided.

Yes, I am aware of Herbert Simon's work in this area.

Yes, I am aware of Barry Schwartz's work on this topic, namely his popular book The Paradox of Choice, and I have even seen his road-show.

The proof of this theorem is so straightforward that I can set it out here and now. It runs as follows: BOP.

Consider any arbitrary satisficing problem, namely consider any set X and any list of constraints on X. Now, let f denote the characteristic function of the feasible subset of X with respect to the constraints under consideration, namely let

f(x) := 1 if x is in X and satisfies all the constraints; f(x) := 0 otherwise.

Then by inspection: If x' is a feasible solution to the satisficing problem, then x' maximizes f(x) over X.

If x' in X maximizes f(x) over X, then x' is a feasible solution to the satisficing problem. EOP.

For example, consider the following satisficing problem: Find an element x' in a given set X such that g(x') > G and h(x') < H, where G and H are given numbers and g and h are given real-valued functions on X. All we have to do to rephrase this problem as an optimization problem is to let f denote the real-valued function on X defined as follows:

f(x) := 1 iff g(x) > G and h(x) < H; f(x) := 0 otherwise.

The idea is then that the given satisficing problem is equivalent to the optimization problem max {f(x): x in X}, in that an x' in X is a feasible solution to the satisficing problem iff x' is an optimal solution to max {f(x): x in X}.
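The characteristic-function recipe is easy to demonstrate in code. The set X, the functions g and h, and the thresholds G and H below are all hypothetical choices, picked purely for illustration:

```python
# A sketch of the characteristic-function construction from the proof.
X = range(-10, 11)          # the given set (finite here, so we can enumerate)
g = lambda x: x * x         # a given real-valued function g on X
h = lambda x: abs(x)        # a given real-valued function h on X
G, H = 8, 4                 # given numbers

# Characteristic function of the feasible subset: f(x) = 1 iff the
# constraints g(x) > G and h(x) < H both hold, and f(x) = 0 otherwise.
f = lambda x: 1 if (g(x) > G and h(x) < H) else 0

# The satisficing problem "find x in X with g(x) > G and h(x) < H" is
# equivalent to the optimization problem max {f(x): x in X}: a maximizer
# of f with f-value 1 is feasible, and every feasible x is a maximizer.
x_star = max(X, key=f)
feasible = [x for x in X if f(x) == 1]
print(x_star, feasible)     # -> -3 [-3, 3]
```

Here g(x) > 8 forces |x| ≥ 3 and h(x) < 4 forces |x| ≤ 3, so the feasible subset is {-3, 3}, and any maximizer of f is one of its elements.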

I have plenty more to say on the Satisficing vs Optimizing debate. But I am afraid that this will have to wait till I complete a number of much more urgent tasks. For now, you may wish to get hold of Jan Odhnoff's (1965) paper, whose last paragraph reads as follows:

It seems meaningless to draw more general conclusions from this study than those presented in section 2.2. Hence, that section may be the conclusion of this paper. In my opinion there is room for both 'optimizing' and 'satisficing' models in business economics. Unfortunately, the difference between 'optimizing' and 'satisficing' is often referred to as a difference in the quality of a certain choice. It is a triviality that an optimal result in an optimization can be an unsatisfactory result in a satisficing model. The best thing would therefore be to avoid a general use of these two words.

Jan Odhnoff

On the Techniques of Optimizing and Satisficing

The Swedish Journal of Economics

Vol. 67, No. 1 (Mar., 1965)

pp. 24-39

I fully sympathize with Odhnoff's frustration. Indeed, as attested by the literature, it is remarkable to what lengths the "triviality" identified by Jan Odhnoff can be taken. If you are unfamiliar with the "satisficing vs optimizing" debate, use your favorite WWW search engine to look up catch phrases such as "good is better than best", "why more is less", "advantage of sub-optimal models". If you are surprised, perhaps amazed or even perplexed, to learn that a sub-optimal solution can be better than an optimal solution, or have an advantage over it, do not panic. Not yet, anyhow. Conserve your anti-panic resources, for you'll surely need them when you hear the really bad news.

The following simple practical example illustrates some of the non-issues that are advanced by proponents of the satisficing vs optimizing debate.

Example. You plan a visit to Paris with your spouse and have to decide what car to hire. There are 5 options, call them C1, C2, C3, C4, C5. After long deliberations and consultations, you decide that the optimal choice is the small, funky, fuel-efficient C3. On hearing about this choice, one of your friends points out that this choice, which is optimal for your problem, is not only sub-optimal but actually unsatisfactory in the context of his — your friend's — problem, which is: what car to hire for a month-long trans-Australia desert race. Your friend thus concludes: satisficing is better than optimizing!

One need hardly point out that in this context the triviality is so obvious and transparent that you'll immediately be able to see how absurd the argument is. But all it takes to camouflage such a triviality is an abstraction, some mathematical notation, Greek symbols, and a couple of buzzwords. So let's see how it works. The first thing you need to do is to create two different but slightly related abstract problems. Let us call them Problem A and Problem B, and let s_A and s_B denote the respective optimal solutions.
So by construction, s_A is optimal in the context of Problem A and s_B is optimal in the context of Problem B. Therefore, typically s_A is sub-optimal in the context of Problem B and s_B is sub-optimal in the context of Problem A. So far so good, but ... not very exciting. So how about this exciting result: very often there is a sub-optimal solution, call it y_A, that is superior to (better than) the optimal solution s_A, and there is a sub-optimal solution y_B that is superior to (better than) the optimal solution s_B. Yes! Try to prove this formally on your own. I shall provide a formal proof after I return from my overseas trip in October. The really bad news is that respectable professional journals publish this kind of material. Now you can panic, and rightly so.

The following naive example will show you how much mileage can be made of what Jan Odhnoff termed "trivialities".

Example. Let X and U be some sets, let R be a real-valued function on the Cartesian product of these sets, and let u* be some given element of U. Consider now the following seemingly innocuous problem:

Problem A: max {R(x,u*): x in X}

In other words, the objective is to maximize R(x,u*) over x in X. Assume that this problem has feasible solutions and let x_A denote an optimal solution to it. Thus, R(x_A,u*) ≥ R(x,u*) for all x in X. To be more concrete, consider the case where X is the real line, U = [0,1], u* = 0.5 and

R(x,u) = 2ux - x²

In this case the optimal value of x is x_A = 0.5. Note that in this context the feasible solution x_0 = 0 is clearly sub-optimal. So far so good. Now, suppose that the survival of Planet Earth critically depends on the validity of the constraint R(x,u) ≥ 0, assuming that we control x and Nature controls u. In this case, to save Planet Earth we consider this problem:

Problem B: Find an x_B in X such that R(x_B,u) ≥ 0 for all u in U.
Note that if, as above, X is the real line and U = [0,1], then this problem has only one feasible solution, namely x_B = x_0 = 0. In summary then, for the concrete instance where X is the real line, U = [0,1], and u* = 0.5, we have: the optimal solution to Problem A, namely x_A = 0.5, is not as good as the sub-optimal solution x_0 = 0 when they are compared in the context of Problem B. In fact, x_A = 0.5 is not even feasible in the context of Problem B.

If you are a bit puzzled regarding the logic of this example, join the queue. Why on earth should we expect an optimal solution to Problem A to retain its superiority over other solutions in the context of Problem B?! Naturally, we could counter-argue as follows: the "best" solution to the satisficing problem, Problem B, namely x_B = x_0 = 0, is sub-optimal in the context of the optimization problem, Problem A! So what?! It is sad, very sad, that such convoluted, misguided arguments are used to "show" that satisficing is better than optimizing. What a mess!

The following short quote vividly illustrates how pointless the "satisficing vs optimizing" debate is (emphasis is mine):

At some point in his deliberations a decision-maker finds he is seeking an optimal value (or perhaps a set of optima under various conditions). His aide, the operational research worker, points out that the cost of finding the optimum is high. The decision-maker takes this new piece of information into consideration and as part of his own decision says "All right. I will be satisfied with something within 5 per cent of optimum." The OR man may then be able to develop a less costly technique. For example, if branch-and-bound is to be used, he may modify it to accept an improvement over a current solution only if it is better by the necessary margin. Whether we now say that the decision-maker is an optimizer or satisficer is, I suggest, a ticklish point which we should not worry too much about.

M. Benham

Reply to a comment by C.B. Chapman

Operational Research Quarterly 24(2), p. 311, 1973

I have plenty more to say about such general "trivialities". If you are truly eager to know more about what I have to say, feel free to contact me.
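For what it is worth, the toy example above (Problem A vs Problem B with R(x,u) = 2ux - x²) can be checked numerically. The grids below are, of course, only finite stand-ins for the real line and for U = [0,1]:

```python
# Numerical check of the toy example: R(x,u) = 2*u*x - x**2,
# with X the real line (a fine grid here), U = [0,1], and u* = 0.5.
R = lambda x, u: 2.0 * u * x - x * x

xs = [k / 1000 for k in range(-2000, 2001)]   # grid stand-in for the real line
us = [k / 100 for k in range(0, 101)]         # grid stand-in for U = [0, 1]

# Problem A: max R(x, u*) over x in X, with u* = 0.5 -> optimum at x_A = 0.5.
x_A = max(xs, key=lambda x: R(x, 0.5))

# Problem B: find x with R(x, u) >= 0 for all u in U.  Since R(x, 0) = -x**2,
# the only feasible point is x = 0, which is sub-optimal in Problem A.
feasible_B = [x for x in xs if all(R(x, u) >= 0 for u in us)]

print(x_A, feasible_B)   # -> 0.5 [0.0]
```

As claimed, the optimal solution to Problem A (x = 0.5) is infeasible in Problem B, and Problem B's sole feasible solution (x = 0) is sub-optimal in Problem A, because the two problems are simply different problems.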

The case against optimization is often based on references to Simon's theory of bounded rationality. It is therefore important to note that this theory does not categorically assert that it is better to satisfice than to optimize. I find the following quote informative (the emphasis is mine):

There are many excellent treatments of bounded rationality (see, e.g., Simon (1982a, 1982b, 1997) and Rubinstein (1998)). Appendix A provides a brief survey of the mainstream of bounded rationality research. This research represents an important advance in the theory of decision making; its importance is likely to increase as the scope of decision-making grows. However, the research has a common theme, namely, that if a decision maker could optimize, it surely should do so. Only the real-world constraints on its capabilities prevent it from achieving the optimum. By necessity, it is forced to compromise, but the notion of optimality remains intact. Bounded rationality is thus an approximation to substantive rationality, and remains as faithful as possible to the fundamental premises of that view.

Wynn C. Stirling (2003, p. 10)

Satisficing Games and Decision Making

Cambridge University Press

So when I am in a good mood I argue as follows: if you can optimize, then you surely should do so. If you can't, then do the best you can. But never ever use the "bounded rationality" argument as an excuse for a simplistic, quick-and-dirty "satisficing job". I cannot tell you here and now what I argue when I am in a bad mood. But we can discuss this over a cup of coffee (skinny latte, no sugar, please).

Talking about optimization: it is amazing what kind of misconceptions some analysts have about optimization, its role in decision making and management, its limitations, and its relation to other methodologies. For instance, read this (emphasis is mine):

Shifting Paradigms in Environmental Management and Their Application to Remedial Decisions.



ABSTRACT

Current uncertainties in our understanding of ecosystems require shifting from optimization-based management to an adaptive management paradigm. Risk managers routinely make suboptimal decisions because they are forced to predict environmental response to different management policies in the face of complex environmental challenges, changing environmental conditions, and even changing social priorities. Rather than force risk managers to make single suboptimal management choices, adaptive management explicitly acknowledges the uncertainties at the time of the decision, providing mechanisms to design and institute a set of more flexible alternatives that can be monitored to gain information and reduce the uncertainties associated with future management decisions. Although adaptive management concepts were introduced more than 20 y ago, their implementation has often been limited or piecemeal, especially in remedial decision making. We believe that viable tools exist for using adaptive management more fully. In this commentary, we propose that an adaptive management approach combined with multicriteria decision analysis techniques would result in a more efficient management decision-making process as well as more effective environmental management strategies. A preliminary framework combining the 2 concepts is proposed for future testing and discussion. Igor Linkov, F Kyle Satterstrom, Gregory A Kiker, Todd S Bridges, Sally L Benjamin, David A Belluck

Integrated Environmental Assessment and Management

Volume 2, Number 1, pp. 92-98, 2006

Are we to understand from this that "optimization" cannot deal with uncertainty?! Are we to conclude that "optimization" is not adaptive? And what about multicriteria decision analysis techniques: don't they offer, among other things, something called Pareto Optimization? I shall address this quote, and the article itself, in due course. Let me just point out here and now that "optimization-based management" and "adaptive management" are not mutually exclusive. That is, there is no reason why "optimization-based management" cannot be adaptive, and no reason why "adaptive management" cannot be "optimization-based". When I read abstracts like this I do not know whether I should laugh or cry. But seriously, this is not funny, not funny at all.

The term Pareto optimization refers to the area of optimization that concerns itself with the modeling, analysis, and solution of problems that require the elimination of dominated solutions from the decision space of the problem considered. The idea is attributed to Vilfredo Pareto (1848-1923), an Italian economist and sociologist. He was a mathematician/physicist by training and started his professional career as an engineer. In plain words, a Pareto solution to a decision problem has the property that it cannot be improved with respect to any criterion without its performance worsening with regard to some other criterion. The English translation (from the French) of the original phrasing of the idea is as follows:

We will say that the members of a collectivity enjoy maximum ophelimity in a certain position when it is impossible to find a way of moving from that position very slightly in such a manner that the ophelimity enjoyed by each of the individuals of that collectivity increases or decreases. That is to say, any small displacement in departing from that position necessarily has the effect of increasing the ophelimity which certain individuals enjoy, and decreasing that which others enjoy, of being agreeable to some, and disagreeable to others.

Vilfredo Pareto

Manual of Political Economy (1906, p. 261)

(Just in case: ophelimity = economic satisfaction)

In WIKIPEDIA, Pareto Efficiency, or rather Inefficiency, is described as follows:

An economic system that is Pareto inefficient implies that a certain change in allocation of goods (for example) may result in some individuals being made "better off" with no individual being made worse off, and therefore can be made more Pareto efficient through a Pareto improvement. Here 'better off' is often interpreted as "put in a preferred position." It is commonly accepted that outcomes that are not Pareto efficient are to be avoided, and therefore Pareto efficiency is an important criterion for evaluating economic systems and public policies.

In other words, a solution to a problem is Pareto efficient if it is not "dominated" by other solutions to the problem. For example, if you like — and I mean LIKE — both mangoes and dark chocolate, then a solution yielding 4 mangoes and 5 dark chocolate bars is dominated by a solution yielding 4 mangoes and 6 dark chocolate bars.
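A minimal sketch of this dominance test, using hypothetical (mangoes, chocolate bars) bundles in which more of each is preferred:

```python
def dominates(a, b):
    """True iff bundle a is at least as good as b in every coordinate
    and strictly better in at least one (more of each good is better)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(bundles):
    """Keep only the bundles not dominated by any other bundle."""
    return [b for b in bundles if not any(dominates(o, b) for o in bundles)]

# Hypothetical bundles of (mangoes, chocolate bars).
bundles = [(4, 5), (4, 6), (5, 3), (3, 7), (5, 2)]
print(pareto_front(bundles))   # (4, 5) is dominated by (4, 6); (5, 2) by (5, 3)
```

Proposing the dominated bundle (4, 5) when (4, 6) is equally available is exactly the trouble discussed below.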

The point I want to make via this example is that you may run into great trouble should you propose a solution that is (Pareto) dominated by another solution. In fact, you may even risk losing your job! The bottom line is that often it is not good enough to generate a feasible solution, even if this solution satisfies some pre-determined basic requirements. This is so because normally we (individuals, organizations, etc.) seek "good" solutions to the problems confronting us. So, should it transpire that the solution we propose for a particular problem is not as good as another solution that is equally available to us, the implication would be very clear: we would be deemed incompetent, indeed derelict in our duty, especially if the "satisficing" solution that we propose is to be adopted by others who depend on our proposal and/or pay for it.

This project is related to some of my other campaigns, namely the Worst-Case Analysis / Maximin Campaign, the Robust Decision-Making Campaign, the Responsible Decision-Making Campaign, and the Info-Gap Campaign.