Luke tasked me with researching the following question:

I’d like to know if anybody has come up with a good response to any of the objections to ‘full information’ or ‘ideal preference’ theories of value given in Sobel (1994). (My impression is “no.”)

The paper in question is David Sobel’s 1994 paper “Full Information Accounts of Well-Being” (Ethics 104, no. 4: 784–810) (his 1999 paper, “Do the desires of rational agents converge?”, is directed against a different kind of convergence and won’t be discussed here).

The starting point is Brandt’s 1979 book, where he describes his version of utilitarianism in which utility is the degree of satisfaction of the desires of one’s ideal ‘fully informed’ self; Sobel also refers to Railton’s 1986 apologetic. (LWers will note that this kind of utilitarianism sounds very similar to CEV, and hence any criticism of the former may be a valid criticism of the latter.) I’ll steal entirely the opening of Mark C. Murphy’s 1999 paper, “The Simple Desire-Fulfillment Theory” (which rejects any hypotheticals or counterfactuals in desire utilitarianism), since he covers all the bases (for even broader background, see the Tanner Lecture “The Status of Well-Being”):

An account of well-being that [Derek] Parfit labels the ‘desire-fulfillment’ theory (1984, 493) has gained a great deal of support as the most plausible account of what makes a subject well-off. According to the desire-fulfillment, or DF, theory, an agent’s well-being is constituted by the obtaining of states of affairs that are desired by that agent. Importantly, though, while all DF theorists affirm that an account of what makes an agent well-off must ultimately refer to desire, there now appears to be a consensus among those defending DF theories that it is not the satisfaction of the agent’s actual desires that constitutes the agent’s well-being, but rather the satisfaction of those desires that the agent would have in what I will call a ‘hypothetical desire situation.’ Just as Rawls holds (1971, 12) that the principles of right are those that would be unanimously chosen in a hypothetical choice situation, that is, a setting optimal for choosing such principles, defenders of DF theory hold that an agent’s good is what he or she would desire in a hypothetical desire situation, that is, a setting optimal for desiring. While the precise nature of the hypothetical desire situation is a matter of debate among DF theorists, all of them seem to agree that any adequate DF theory will incorporate a strong information condition into the hypothetical desire situation. In treating of the concept of an individual’s good, Sidgwick writes: “It would seem. . . that if we interpret the notion ‘good’ in relation to ‘desire,’ we must identify it not with the actually desired, but rather with the desirable:—meaning by ‘desirable’ not necessarily ‘what ought to be desired’ but what would be desired. . . if it were judged attainable by voluntary action, supposing the desirer to possess a perfect forecast, emotional as well as intellectual, of the state of attainment or fruition” (1981, 110–111).
Brandt writes that a state of affairs belongs to an agent’s welfare only if it is such that “that person would want it if he were fully rational” (1979, 268); an agent’s desire is rational, on Brandt’s view, if it would survive or be produced by careful ‘cognitive psychotherapy’ [where cognitive psychotherapy is the ‘whole process of confronting desires with relevant information.’]. . . I shall call a desire ‘irrational’ if it cannot survive compatibly with clear and repeated judgments about established facts. What this means is that rational desire. . . can confront, or will even be produced by, awareness of the truth (1979, 113). And Railton has argued that we should consider an agent’s good to be “what he would want himself to want. . . were he to contemplate his present situation from a standpoint fully and vividly informed about himself and his circumstances, and entirely free of cognitive error or lapses of instrumental rationality” (1986a, 16).

There are at least four general strategies one could take in arguing that such an informed viewpoint is inadequate in capturing and commensurating what is in an agent’s interests. First, one could argue that the notion of a fully informed self is a chimera. This would likely involve the worry that from the fact that any of the lives that one is to assess the value of must be in some sense available to one (otherwise it could not be a valuable life for one to live) it does not follow that all of them together must be available to one’s consciousness. To make good this suggestion against the full information account one would have to provide reasons to think there are substantive worries about uniting the experience of all lives one could lead into a single consciousness. Second, one could argue that even in cases in which an agent is adequately informed of the different life paths she is choosing between, there is no single pro-attitude, such as preferring, which appropriately measures the value of the diverse kinds of goods available to an agent…The things that sensibly elicit delight are not generally the same things that merit respect or admiration. Our capacity for articulating our attitudes depends upon our understandings of our attitudes, which are informed by norms for valuation. Third, one could argue that a vivid presentation of some experiences which could be part of one’s life could prove so disturbing or alluring as to skew any further reflection about what option to choose. Allan Gibbard has suggested the example of “a more vivid realization of what people’s innards are like” causing a “debilitating neurosis” which prevents me from eating in public. [cf.
Bostrom’s information-harms typology: ‘evocation hazard’; personally, I would use something like ‘brainwashing’ or war & holocausts] Fourth, one could worry against naturalistic versions of the full information account that the purportedly naturalistically described informed viewpoint essentially invokes unreduced normative notions. [Naturalistic versions seem to assume non-physical definitions, like ‘ideal set of information’, and hence smuggle in non-naturalistic beliefs]

Emphasis added; Sobel pursues line of objection #1.

I will try to reconstruct the argument in something more closely approximating propositional logic, so that it is easier to classify any criticism of Sobel by which premise or inference it attacks. The following is based on my reading of pg 796–797 & 801–808; I omit all the examples, and some of the weaker tangential arguments (for example, the suggestions that the ideal moral system may go insane from the difficulty of its choices, or that it will despise us for being so pathetic and wish us dead (pg 807), which are obvious anthropomorphisms).

1. The ideal moral system may not err.
2. Every possible life must be judged by an agent.
3. An agent either lives that possible life, or it does not live it.
4. If the agent does not live the possible life:
    4.1. If the agent does not live the possible life, it does not live the life’s experiences.
    4.2. Experiences may contain otherwise-unobtainable information [‘revelations’].
    4.3. A judgement based on incomplete information may err.
    4.4. The ideal moral system will not use an agent that does not live the possible life. (1, 4.1–4.3)
5. If the agent does live the possible life, it is either a ‘serial’ agent or an ‘amnesia’ agent:
    5.1. Serial; the agent either lives the same life or a different life:
        5.1.1. The same life:
            5.1.1.1. To live the same possible life as that possible life, the agent must know only the same things as the possible life.
            5.1.1.2. Most possible lives do not know what it is to live a different life.
            5.1.1.3. If the agent knows only the same things as the possible life does, then in most lives it cannot know what it is to live an additional life.
            5.1.1.4. If one does not know what additional lives are like to live, one may err in assessing one’s own life.
            5.1.1.5. The serial agent may live a life which does not know what other lives are like.
            5.1.1.6. The serial agent may err.
            5.1.1.7. The ideal moral system will not use a serial agent which knows the same as the possible life. (1, 4.3, 5.1.1.1–6)
        5.1.2. A different life:
            5.1.2.1. If the agent knows more or fewer things than the possible life, it is not identical to the possible life.
            5.1.2.2. If it is not identical to the possible life, it may experience things or act differently.
            5.1.2.3. If it may experience things or act differently, it may judge experiences or judge acts differently.
            5.1.2.4. If it may judge experiences or acts differently, then it may err.
            5.1.2.5. The ideal moral system will not use a serial agent which knows more or less than the possible life. (1, 4.3, 5.1.2.1–4)
    5.2. Amnesia:
        5.2.1. If the agent is an amnesia agent, it will work under incomplete information due to forgetting.
        5.2.2. Each amnesia period will form a different judgement.
        5.2.3. These judgements may differ.
        5.2.4. Differing judgements may lead to error.
            - Rebuttals rejecting 5.2.4:
                - The judgements can be weighed into a final correct judgement by an unspecified algorithm. (But how does this work, exactly? What is the life’s utility over its span?)
                - Only one (‘allegedly temporally privileged’) judgement is used, and a judgement can’t differ with itself.
                - They will not differ, as the fully informed agent at any period will agree with itself at all other periods. (But how would one prove such a thing? It is ‘indeterminate’ and ‘unlikely’.)
        5.2.5. The ideal moral system will not use an amnesiac agent. (1, 5.2.1–4)
6. The ideal moral system will use neither a serial nor an amnesiac agent. (5.1.1.7, 5.1.2.5, 5.2.5)
7. The ideal moral system will not use an agent. (3, 4.4, 6)
8. The ideal moral system will not judge lives. (2, 7)
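Since the point of the reconstruction is to expose which premise carries the weight, the skeleton can also be sketched formally. This is my own hypothetical formalization (the name `sobel_skeleton` and propositions like `serialMayErr` are my labels, not Sobel’s); it shows that once premise 1 is granted (as the two `noErr` hypotheses), the conclusion is immediate:

```lean
-- Hypothetical propositional skeleton of the reconstruction (labels mine).
-- Premise 1 ("the system must not err") does all the work: it says an
-- agent that may err will not be used; the sub-arguments 5.1 and 5.2 say
-- both kinds of agent may err; so neither is used.
theorem sobel_skeleton
    (serialMayErr amnesiacMayErr usesSerial usesAmnesiac : Prop)
    -- premise 1, specialized to each branch:
    (noErrSerial   : serialMayErr → ¬usesSerial)
    (noErrAmnesiac : amnesiacMayErr → ¬usesAmnesiac)
    -- sub-conclusions 5.1.1.6 / 5.2.4: both kinds of agent may err:
    (h1 : serialMayErr) (h2 : amnesiacMayErr) :
    -- conclusion 6: neither kind of agent is used:
    ¬usesSerial ∧ ¬usesAmnesiac :=
  ⟨noErrSerial h1, noErrAmnesiac h2⟩
```

Rejecting premise 1 simply deletes the `noErrSerial`/`noErrAmnesiac` hypotheses, and nothing then forces the conclusion; this is the move discussed below.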

Broken down like this, we can see a number of ways to strengthen or attack it. For example, we can strengthen the attack on serial agents who lead different lives (5.1.2) by defining agents and lives as Turing machines and then invoking Rice’s theorem (a generalization of the halting theorem): obviously ‘goodness of life’ is a nontrivial semantic predicate, and so there will be Turing machines for which the question is uncomputable.
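To make the Rice’s-theorem strengthening concrete, here is a minimal sketch of the standard reduction (my own illustration; the function names and the ‘life as program’ framing are assumptions, not anything in Sobel): if a total decider for ‘this life-program is good’ existed, we could wire an arbitrary program into a candidate life so that the life is good exactly when the program halts, and thereby decide the halting problem, which is impossible.

```python
def make_wrapper(candidate, good_life):
    """Build a 'life' that behaves exactly like good_life iff candidate halts.

    If candidate() never halts, the wrapped life never does anything, so
    (assuming an empty, never-halting life is not good) it is not good.
    """
    def wrapped():
        candidate()         # diverges forever if candidate never halts
        return good_life()  # otherwise, indistinguishable from the good life
    return wrapped

def halts(candidate, is_good_life, good_life):
    """If a total decider is_good_life existed, this would decide halting.

    Rice's theorem: no decider exists for any nontrivial semantic property,
    'goodness of life' included -- so this function cannot exist either.
    """
    return is_good_life(make_wrapper(candidate, good_life))

# The construction itself runs fine on halting inputs:
life = make_wrapper(lambda: None, lambda: "a good life")
print(life())  # the candidate halts, so the wrapper lives the good life
```

The impossibility lives entirely in `is_good_life`: the wrapper construction is trivial, which is why the argument generalizes so easily to the other agent types discussed below.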

This strengthening illustrates a possible attack on the key premise 1: “the system must not err”. Obviously, if the ethical system may err, all the arguments collapse: it’s fine for an amnesia agent to sometimes contradict itself, it’s fine for a too-knowledgeable serial agent to not act the same, etc.

But our strengthening of 5.1.2 via Rice’s theorem would seem to work against all the proposed agents (‘the amnesia agent will both work under incomplete information and be confronted with uncomputable lives’), which is not an issue. What is an issue is that it would seem to work against any agent implementing any nontrivial ethical system: a utilitarian agent (‘you discover a planet-destroying bomb which is triggered by the halting of a particular Turing machine…’) or many deontological agents (‘your computer claims to be a conscious being and you must not reboot it, because that would violate your deontological respect for personal autonomy and the right to live; you try to check its claims but…’).

An argument which proves too much is not a good argument, and it seems to me that we can construct situations for agents running any moral system in which they may err, if only through extreme brute-force skeptical claims like the Simulation Hypothesis. (I say ‘may’ because Sobel’s arguments above do not seem to show that the various kinds of agents will err, which would be very difficult to prove.)

Given this, we can reject premise 1 and are now free to pick from any of the kinds of agents discussed, since now that they are free to err, they are also free to have incomplete information, not attempt to crack uncomputable cases, etc. (To quote Murphy pg 23, “It would imply the indefensibility of DF [desire-fulfillment] theory if, that is, their hypothetical desire situations incorporated a full information condition, which is the target of Sobel’s and Rosati’s criticisms. If a theory’s information condition were more modest, perhaps it would escape those criticisms.”)

Sobel’s paper has only occasionally been grappled with or defended; usually it is described as illustrating some serious problems with reflective theories, but not much more.

Support:

Loeb, Don, 1995: “Full-information theories of individual good”, Social Theory and Practice 21: 1–30. Loeb largely agrees with Sobel, but focuses his criticisms on more empirical grounds, such as the lifetimes it would take to learn enough, or concerns about judgements of goodness changing as additional information comes in (“restricting the scope of relevant information to the science of the subject’s day would lead to an implausibly relativized account of individual good”). The obvious response to the first ~18 and last ~10 pages of his paper is that, just like Sobel, he is anthropomorphizing with a vengeance, and that problems for us are not problems for sufficiently powerful agents (the basic theory appeals to asymptotes and ideals); to which he replies: "It would be ironic for a theory that makes questions of value depend on a causal matter (and that is presented in the spirit of naturalism) to take refuge in imagining massive alterations in the laws of nature. But irony is no guarantee of incorrectness. Still, it is not at all clear that such massively impossible counterfactuals have determinate truth values. Counterfactuals about what people would want in causally impossible circumstances are still causal counterfactuals. As such, they depend on causal laws—in particular, laws of psychology. But the laws of psychology would have to be vastly different from the actual laws if they were to rule out all of the unwelcome influences I have pointed out. And since these are the very laws that support the counterfactuals, it is not at all clear that enough is left of them to insure that the counterfactuals have determinate truth values. [A fortiori, it is not clear that these counterfactuals would have truth values that are empirically determinable.] It is also not clear that the full-information approach would be plausible if it required that we imagine such wide-scale changes in the laws of psychology. We know too little to be confident of that.
Perhaps my counterpart would no longer wish for me to shun the poison liquid in a world in which he would react no differently to yelling than to whispering, and in which one’s motivations would not be influenced by massive alterations in one’s cognitive capabilities alone. Without knowing how the laws of psychology would be altered, we are in no position to judge whether the approach maintains whatever plausibility it initially appeared to have." As a hardcore materialist, I do not buy this argument; the ‘laws of psychology’ are no laws at all, but rather one of many possibilities allowed by the laws of physics, and the counterfactuals are not impossible.

Criticism:

Campbell, Stephen Michael, 2006 M.A. thesis: “Phenomenal Well-being”, pg 40–end: Campbell describes a slightly more specific agent, where the lives are simply compared pair-wise, with a point system to break potential ties and intransitivity. Campbell seems to reject premise 1 too, in describing a flawed system (“…the ranking should be accurate, even if not perfectly precise”), but argues that this is acceptable since we do it in ordinary life, and offers as a somewhat facetious example the difficulty of perfectly comparing ice cream flavors: “Your memories of the different experiences might get corrupted. By the time you get to the end of the thirty-one flavors, perhaps you cannot remember what flavors 5 and 12 were like or even what you thought about them at the time. Or perhaps your memory was distorted at some point in the process. You can re-taste those flavors, but you cannot recapture the exact taste experience again (since, for one, you will now have more ice cream on your stomach), and we have no guarantee that the re-experience of a sample will not diverge in such a way as to affect your ranking.” Campbell hopes agents will ultimately converge despite the roughness of judging, and most of his replies to Sobel/Rosati/Loeb depend on that or on his own brand of anthropomorphizing the ideal system (eg. suggesting that an unappreciative system will, after experiencing countless lives, come to appreciate them; I’m reminded of the TvTropes page “Do Androids Dream?”).

Beaulieu, 1997 MA thesis, “The Normative Authority of Our Fully Informed Judgements”: goes after Rosati’s arguments, arguing that enough memory can serve to appreciate differing viewpoints, that changes in one’s desires with additional information are welcome, and that Rosati’s examples (showing full information to be incoherent) do not work. Most worth reading is chapter 3.

Anton Tupa, 2006 PhD thesis, “Development and Defense of a Desire-satisfaction Conception of Well-being”: Tupa argues that Rosati’s internalism criteria can be met by idealized/extrapolated versions of a person, and that this doesn’t refute desirism (pg 111–128). Discussing Sobel on pg 137, he writes something which I think is very insightful when applied to suggestions like Sobel’s ‘the ideal agent/system will go mad if it had perfect information’: I think that so long as the conditional fallacy [see "The Conditional Fallacy in Contemporary Philosophy"] has the form of “for all we know, x could be a consequent change, given your analysans, and if so, your analysis will yield counterintuitive results,” then a solution can be provided. I am optimistic here because although sometimes critics of ideal advisor accounts write as if there would be only one possible world in which one would have full information, and they then prognosticate doomsday-like scenarios, in reality (in some sense perhaps), there are many possible worlds in which one is fully informed, i.e. there are many A+ candidates. Of these many possible worlds in which one has full information, some will involve changes in one that will be problematic, but some will involve few significant changes in one or changes that are quite unproblematic. …Problems that can be solved by appeal to the concept of a personality include worries about the increased mental capacity and mental processing speed that would have to be the case in order for someone to have full information. To be sure, it is a little odd even thinking about people with what can only be described as super-minds.
However, anyone’s personality, I say, is compatible with increased cognitive capacity and the like. Unless someone can show that some counterintuitive consequent change must occur in a world in which one is fully informed, the method of singling out the best possible world in which one is fully informed seems to have a great deal of promise…Thus far no one has come close to offering an argument that counterintuitive consequent changes must result in the nearest possible world in which one is fully informed…Later, I will examine whether full propositional information is adequate as an information set for the ideal advisor. While Rosati and Sobel are skeptical, I argue that full propositional information is far richer and more textured than they envision and may very well be sufficient to play the requisite role in the deliberation of the ideal advisor. Tupa’s replies to the previously mentioned claims and arguments often have this flavor up to pg 150, where he then rejects much of premise 5 and argues that the judging agent can make flawless assessments of a life without adopting the viewpoint of the life (based on ‘propositional knowledge’: “I have a hard time seeing how knowledge of what something is like is evaluative in any important sense”); like Campbell, he notes that Sobel’s demand for perfect judgement goes beyond even the most reliable ordinary daily judgement.

Works on the subject include: