
By J. Adam Carter and Emma C. Gordon

Suppose you wanted to know who the first pope was after St. Peter (answer: Pope Linus, born 10 AD), or what the oldest continuously inhabited city in the world is (answer: Damascus, Syria, inhabited since the 3rd millennium BC), or what the terrifying entity ‘Krampus’ is (answer: a horned, anthropomorphic goat-like figure who, during the Christmas season, punishes children who have misbehaved). You don’t need to be a genius, or even well-travelled or well-read, to know such things. You’ve got it all on your phone. (And so does the person next to you.)

Is this a good thing? Maybe so. As Socrates tells us in the Meno (and as common sense seems to indicate), knowledge is valuable. And being able to get knowledge quickly, without much effort or skill, seems like a fact of modern life that—absent some good reason to the contrary—we should celebrate. (Imagine what people in the past had to do to learn such facts as the above. Travel to the Library of Alexandria? Ask an oracle?)

Even chess legend Garry Kasparov, the first world chess champion to lose a match to a computer (in 1997, to IBM’s Deep Blue)—someone you’d perhaps expect might be grudgingly ‘anti-computer’—has finally warmed in recent years[1] to the idea that outsourcing cognitive tasks to our intelligent machines might just be for the best. In fact, Kasparov has gone even further and suggested that we’d be better off just letting such machines do much of our mundane thinking for us. As Kasparov puts it:

Let’s look at this historical process. Machines that replace manual labor, they have allowed us to focus on developing our minds. More intelligent machines, I understand that they will take over more menial aspects of cognition and will elevate our lives towards curiosity, creativity, beauty, joy.

Curiosity, creativity, beauty and joy—this all sounds great!

Though we might want to slow down here a bit. Consider, after all, that a consequence of outsourcing cognitive tasks to our gadgetry is that we’re doing less—and storing less—in our heads. Does this matter?

It might. Especially, perhaps, if we begin to lose track of what is stored where. In a 2015 paper in the Journal of Experimental Psychology, Matthew Fisher, Mariel Goddu, and Frank Keil argued, on the basis of experimental evidence, that the very act of online searching (i.e., exactly the thing we do when we want quick knowledge) generates a kind of illusion whereby ‘externally accessible information is conflated with knowledge “in the head”’ (2015, 674)—specifically, by causing us to mistake mere access to information for personal knowledge. The first two (of nine in total) of Fisher et al.’s experiments showed that searching for explanations on the Internet increases one’s self-assessed ability to answer questions about an unrelated topic, while a third showed that this result persists even after the time, content and features of the search process have been controlled for. Meanwhile, another three experiments showed that this tendency stems not from (i) general overconfidence, or (ii) a different understanding of what counts as internal knowledge, but rather from a genuine misattribution of the sources of knowledge.

Put simply, results like Fisher et al.’s suggest that ‘Googled knowledge’ may come with a (perhaps surprising) epistemic price: we get facts quickly, but at the cost of an inflated picture of our own mental lives.

Such skewed self-assessments (especially when inflated in our favour) are, in general, not good. At one extreme, consider Calvisius Sabinus, as described by Seneca the Younger[2], who relied extensively on his expensive slaves to memorise and recite epic poetry so that Sabinus himself could receive the credit (while knowing not a verse of poetry himself). From Calvisius Sabinus’s perspective, since he owned the slaves, and the slaves had the epic poetry memorised, it stood to reason that he should receive the plaudits for the slaves’ poetic oration[3]. (In fact, he even believed he should be viewed as ‘cultured’.)

We are surely not as deluded as Calvisius Sabinus. But it’s not obvious we’re entirely in the clear. If constantly availing ourselves of quick ‘Google facts’ (as Michael Lynch calls them) causes us (à la Fisher et al.) to end up mistakenly claiming more knowledge than we have a right to, then a deeper worry materialises, one that has to do with our intellectual character. Consider that Calvisius Sabinus seems not only deluded but in some sense intellectually arrogant. Intellectual arrogance, on at least one notable way of thinking, is displayed when one infers, from one’s assessment of one’s own intellectual merits, an entitlement to behave in superior ways toward others[4]. It’s not hard to see how results like Fisher et al.’s might rightly make us a bit uncomfortable—at least, if we think that intellectual worth and intellectual flourishing involve not just quick knowledge but also good intellectual character traits. What is Google doing to us?

We are not here, though, to issue a pro-luddite jeremiad. In a recent paper, we actually suggest that results like Fisher et al.’s may lend themselves to an entirely different kind of interpretation, one that has much less disconcerting consequences. And so we’ll end this post on a slightly happier note (albeit one with some heterodox overtones).

The very question of whether Fisher et al. are right that searching the net engenders a tendency for us to mistake access to knowledge for, as they call it, ‘personal knowledge’ presupposes something about personal knowledge: that it’s in the head. That surely seems right. But recent work in the philosophy of mind and cognitive science is increasingly sceptical[5].

Take, for example, the cognitive process of memory storage and retrieval. This all plays out between the skull and skin—at least, when we rely on biomemory. But what if we used a smartphone just like we use memory? Suppose, for example, that when you encounter new information, you store it in your phone, and when you need old information, you look it up where you stored it. Over time, you phase out your biomemory and start using the phone exclusively. Is there any reason to think your memories are not now stored in your phone?

According to Clark and Chalmers (1998), there’s really no good reason, just old-fashioned ‘bioprejudice’. As they see it, our theorising about what kinds of things can serve as the material realisers of a genuinely cognitive process should be guided not by questions of location (is it in the head or not?) or material constitution (is it made of brain stuff or not?), but by a more egalitarian principle they call the parity principle:

Parity Principle: If, as we confront some task, a part of the world functions as a process which, were it to go on in the head, we would have no hesitation in accepting as part of the cognitive process, then that part of the world is part of the cognitive process.

More recently, Orestis Palermos has argued that a case for extended memory can be made on the basis of dynamical systems theory—by showing how certain memory processes involve feedback loops that cross boundaries between the brain, body and the extracranial memory technologies we interact with[6].

If extended cognition is plausible—and we think that it is (though here is not the place to try to sell you on it)—then an interesting result follows: at least some cases of apparent conflation (between access to knowledge and personal knowledge) aren’t conflation at all. Think about it: if cognition doesn’t exclusively take place in our head, then what looks like mere access to knowledge to some (with an intracranial view of cognition in the background) might in fact be the real thing. Whether or not that’s so will depend not on where the information is stored, but on how we’re interacting with the information—and this is a matter for epistemology as well as the philosophy of mind and cognitive science.

[1] See, in particular, Kasparov, G. (2017). Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. Hachette UK.

[2] Lucius Annaeus Seneca, Epistulae Morales ad Lucilium, 27.

[3] Thanks to Mike Wheeler for drawing our attention to this case. For a helpful discussion by Wheeler, see Wheeler, Michael. 2017. ‘Knowledge, Credit and the Extended Mind, or What Calvisius Sabinus Got Right’. In Extended Epistemology, edited by J. Adam Carter, Andy Clark, Jesper Kallestrup, S. Orestis Palermos, and Duncan Pritchard. Oxford: Oxford University Press.

[4] See, for example, Roberts, Robert C., and W. Jay Wood. 2007. Intellectual Virtues: An Essay in Regulative Epistemology. Oxford University Press, p. 243; and Tiberius, Valerie, and John D. Walker. 1998. ‘Arrogance’. American Philosophical Quarterly 35 (4): 379–390. For a self-delusion model of intellectual arrogance, see Tanesini, Alessandra. 2016. ‘I—“Calm Down, Dear”: Intellectual Arrogance, Silencing and Ignorance’. Aristotelian Society Supplementary Volume 90 (1): 71–92. doi:10.1093/arisup/akw011.

[5] For a recent collection of papers engaging this issue, see Carter, Clark, Kallestrup, Palermos and Pritchard (eds.) Extended Epistemology, Oxford: OUP (forthcoming, 2017).

[6] See Palermos, S.O. 2014. ‘Loops, Constitution, and Cognitive Extension’, Cognitive Systems Research. 27: 25-41.

Image from Pixabay under CC0 Creative Commons