We saw that there are some interesting analogies between clinical and non-clinical confabulation. Table 1 offers a summary of such analogies.

Table 1 Analogies between clinical and non-clinical confabulation

So, what features of an explanation make it an instance of non-clinical confabulation? I believe there are two necessary features and one optional feature that deserve attention.

Necessary features:

1. Ignorance: People ignore some of the key causal factors leading to the formation of their attitudes and choices.

2. Ill-groundedness: People produce ill-grounded claims about the causes of their attitudes and choices.

Common but optional feature:

3. Further ill-groundedness: As a result of producing the ill-grounded causal claim, people commit to further beliefs that, even if generally plausible, do not fit the specifics of the situation in which the attitude is formed or the choice is made.

When people confabulate they ignore some of the psychological processes responsible for the formation of their attitudes or the making of their choices, and produce an ill-grounded causal claim when asked for an explanation. The purpose of the rest of this section is to clarify what the epistemic costs of confabulation are and how my account relates to existing accounts of confabulation in the philosophical literature. In Section 5.1. I ask whether people’s ignorance of the causal history of their attitudes and choices has implications for self-knowledge understood as mental-state self-attribution. In Section 5.2. I consider why people offer ill-grounded explanations rather than acknowledging ignorance, and why they go on to commit themselves to further ill-grounded beliefs.

Ignorance

The relevant philosophical literature suggests that confabulation is a failure of self-knowledge. For instance, on the basis of the evidence on pervasive confabulation about reasons for attitudes and choices, Lawlor (2003) argues that mental-state self-attributions lack authority as they are not as accurate as third-party attributions and fail to correlate with the person’s future behaviour. On similar grounds, Carruthers (2005) argues that there is no special first-personal route to self-knowledge. His influential view is that people attribute mental states to themselves in the same way as they attribute mental states to others, using interpretation.

On the basis that ill-grounded explanations of attitudes and choices are virtually indistinguishable from well-grounded ones and are very common, Scaife (2014, page 471) argues that we should be genuinely concerned about the reliability of self-knowledge. Thus, Strijbos and de Bruin (2015) are right in interpreting the standard philosophical account of confabulation as an instance of “failed mind-reading”: confabulation shows that people make mistakes in attributing mental states to themselves.

[If] confabulation turns out to be a widespread phenomenon in everyday social practice, this would seriously undermine first-person authority of mental state attribution. (Strijbos and de Bruin 2015, page 298)

Whether the form of non-clinical confabulation we are examining here involves a failure in mental-state self-attribution depends on what we take successful mental-state self-attributions to require. In their original paper on priming effects, Nisbett and Wilson are very clear that participants’ verbal reports are inaccurate because participants ignore the mental processes leading to their choices and, as a result, misidentify the reasons for their choices. Confabulation is evidence for the view that people are blind to the processes responsible for their choices, but does not imply that they are also blind to what choices they made. Independent of whether research participants can identify the reasons for their choices, their choices are authentic, in the sense that they are sincerely reported and genuinely endorsed. If successful mental-state self-attributions require awareness of one’s attitudes and choices, then they are not threatened by the form of confabulation reviewed here (Footnote 6).

Does successful mental-state self-attribution require that people are aware of the mental processes responsible for their attitudes and choices? This sounds like an implausibly demanding requirement. In the cases where confabulation has been observed and documented (such as consumer choice, moral judgements, and hiring decisions), causal factors leading to the attitude or the choice are likely to be psychological processes that involve priming effects, socially conditioned emotional reactions, and implicit biases whose role cannot be directly experienced or easily observed, but needs to be inferred on the basis of the systematic, scientific study of human behaviour.

Does successful mental-state self-attribution require that people’s subsequent behaviour be explained and reliably predicted on the basis of that self-attribution? This also sounds like an implausibly demanding requirement, one that imposes more stability and consistency on people’s mental life than it is reasonable to expect. We do not know whether people who claim to have chosen a pair of stockings for its texture would choose the softest pair at their next consumer choice survey. But should they not do so, the fact that the earlier self-attribution fails to shape their future behaviour does not speak so much against self-knowledge as against the crystallization of their preference criteria for stockings.

I have proposed here that the evidence of confabulation gathered in the literature on consumer choice and moral judgements and in the research on implicit biases in hiring decisions does not threaten self-knowledge as mental-state self-attribution. Research participants know the content of their attitudes and choices – they just ignore some of the mental processes contributing to them.

Ill-Groundedness

We saw that when people confabulate they tell more than they can know, and offer ill-grounded causal claims as explanations for their attitudes and choices. In addition, as a result of producing ill-grounded causal claims, people may end up committing to further beliefs that do not fit the specifics of the situation.

It is not clear why people tell more than they can know. Processes of introspection, self-observation, or self-interpretation are not always reliable methods for identifying the causal factors responsible for attitudes and choices, and are vulnerable to error. So, when people are asked questions such as: “Why did you choose that nightgown?”, “Why do you believe that it was wrong for Julie and Mark to have sex?”, or “Why did you offer the job to Tim and not to Arya?”, most are not aware of the role of priming effects, basic emotional reactions, or implicit biases in their choices or attitudes. This is because such factors cannot be accessed via introspection, straightforwardly observed, or inferred from behaviour, and thus cannot be easily identified. But if people do not know the reasons for their choices and attitudes, why don’t they just acknowledge ignorance?

People do not acknowledge their ignorance because they do not know that they do not know some of the key factors contributing to their attitudes and choices. In the accounts of confabulation developed by Hirstein (2005) and Coltheart and Turner (2009), people are not dishonest when they confabulate, but sincere, and convinced of the accuracy of their claims. When discussing the Nisbett and Wilson study, Coltheart and Turner argue that participants do not realise that they do not know the answers to the questions they are asked, and they accept as true the answers they provide (Coltheart and Turner 2009, page 185). This suggests that when people confabulate they believe they know how their attitudes and choices were formed, because the information that would ground accurate explanations for those attitudes and choices is unavailable to them.

Information can be unavailable to a varying extent and for different reasons (see Sullivan-Bissett 2015 for details of the taxonomy I use here). We have a case of strict unavailability when the information that would ground the accurate explanation cannot be accessed or retrieved. If a person involved in a consumer survey is asked why she chose a particular pair of nylon stockings and does not know about priming effects, she lacks the information that would most likely ground the accurate explanation of her choice.

We have a case of motivational unavailability when there are motivational factors inhibiting the acceptance or use of the information that grounds the accurate explanation. The director of a company in charge of hiring decisions may become aware of the influence of implicit bias on people’s behaviour at an equal opportunities training workshop. Still, she may refuse to acknowledge that she is implicitly racist or sexist because this conflicts with her view of herself as an egalitarian. So, she continues to confabulate reasons for preferring male (or white, or non-overweight) candidates.

We have a case of explanatory unavailability when information that would ground the accurate explanation is not regarded as relevant to the target phenomenon, and thus is dismissed. The fact that people choose items due to their relative position may seem outrageous (as Nisbett and Wilson say in the passage I cited earlier), and thus the accurate explanation may be dismissed as implausible. Similarly, a person who is asked to explain why she believes that the incestuous relationship between Julie and Mark is wrong might have heard that people are socially conditioned to react with disgust to descriptions of incest. Yet, she might find it implausible that moral judgements are primarily determined by basic emotional reactions of disgust, insisting that her response was motivated by the endorsement of an ethical principle.

As we saw, when people provide an explanation for their attitudes and choices, their answers are based on general plausibility considerations about why stockings are chosen, incest is condemned, or a job candidate is selected. Because the answers are based on general plausibility considerations, they can be blind to specific features of the situation at hand. Although it is generally plausible that softness or brightness makes a pair of stockings preferable to another, it is false in the context of a choice between identical stockings that the chosen pair was softer or brighter. In the examples I considered, people commit to beliefs that do not fit the evidence such as: “The stockings on the right are more brightly coloured than those on the left”, “The siblings will be scarred by the experience of incest”, or “Tim was more confident than Arya”.

Couldn’t people offer an answer that fits the evidence better? People often do offer answers that are better supported by the evidence. Even if the answer remains an instance of confabulation, because it is not based on information relevant to the formation of the attitudes or the making of the choices, the confabulation is obviously less epistemically costly if it does not also commit the person to adopting further beliefs that are ill-grounded. Let me offer an example of an explanation that involves no further commitment to beliefs that do not fit the evidence.

Freya is asked to choose between two nightgowns that are not identical (this was one of the tasks in the original Nisbett and Wilson study). Let us assume that she chooses the nightgown on her right-hand side because it is on her right-hand side, but she is not aware of the role of position effects on her choice. When Freya is asked why she chose that nightgown she says that she chose it because it is softer. The nightgown she chose is indeed softer than the alternatives. In this case, Freya provides an inaccurate and ill-grounded explanation of her choice, as the explanation is not based on information relevant to why she made the choice. That said, the nightgown on her right-hand side is softer than the alternatives. Not knowing why she made that choice, and not knowing that she does not know, Freya provides a plausible explanation that does not commit her to any additional ill-grounded claims.

Similar scenarios can be constructed in the case of moral judgements or hiring decisions as well, and this suggests that instances of confabulation can be more or less epistemically costly depending on whether further ill-grounded beliefs are adopted.