This is a commentary on Roger Penrose’s book Shadows of the Mind, focusing on his attempt to use Gödel’s theorem to demonstrate the noncomputability of thought. I argue that the attempt is ultimately unsuccessful, but that there is a novel argument here that many commentators have overlooked, and that it raises many interesting issues. I also comment on his proposals concerning “the missing science of consciousness”. This paper appeared in PSYCHE 2:11-20, 1995, in a symposium on Penrose’s book. Penrose replied in “Beyond the Doubting of a Shadow”.

In (Russell Blackford and Damien Broderick, eds.) Intelligence Unbound: The Future of Uploaded and Machine Minds (Wiley-Blackwell, 2014). A discussion of philosophical issues regarding uploading our minds into computers. This was originally published as the last third of the singularity article below, and was later split off as a separate article for the Blackford/Broderick collection. I think this is more or less the first time I’ve written about personal identity. [pdf]

Journal of Consciousness Studies 19:141-67, 2012. There were 26 commentaries on the singularity article by Sue Blackmore, Nick Bostrom, Barry Dainton, Dan Dennett, Robin Hanson, Marcus Hutter, Ray Kurzweil, Drew McDermott, Jesse Prinz, Susan Schneider, Jürgen Schmidhuber, and others. This is my reply in turn. The whole symposium was published as a special issue of the journal and later as the oddly-subtitled book The Singularity: Could artificial intelligence really out-think us (and would we want it to) from Imprint Academic. For what it’s worth, I think the argument for an intelligence explosion holds up pretty well. [pdf]

Journal of Consciousness Studies 17:7-65, 2010. A long article on artificial superintelligence. I try to make the argument for a rapid “intelligence explosion” philosophically rigorous. There’s also some discussion of AI safety issues and mind uploading. There were subsequently 26 commentaries and my reply above. [pdf]

Disputatio, 2020. A reply to seven commentaries on “The Virtual and the Real”, focusing especially on virtual objects as digital objects, and on issues about space in virtual worlds. [pdf]

Disputatio 9(46):309-352, 2017. On the metaphysics of virtual reality. I defend a sort of virtual realism and virtual digitalism (on which virtual objects are real digital objects) over virtual irrealism and virtual fictionalism (on which virtual objects are fictional objects). With discussion of the definition of VR, of illusions in VR, of the value of VR and Nozick’s Experience Machine, of augmented reality and dreams, of the connection to structuralism and skepticism, and more. This paper is the subject of a symposium in Disputatio with a number of commentaries and a reply. [pdf]

First published on thematrix.com in 2003, and printed in (C. Grau, ed) Philosophers Explore the Matrix (Oxford University Press, 2005). I argue that even if we are in a Matrix-style simulation, most of our ordinary beliefs about the world are true. The hypothesis that we are in a Matrix is really a metaphysical hypothesis: one according to which physical objects are constituted by computations and all this was designed by a designer (in the next world up). I use this to argue against a sort of Cartesian global skepticism about the external world. [pdf]

Minds and Machines, 1994. Pro-tip: This paper is more or less a proper subset of “A Computational Foundation for the Study of Cognition”.

In an appendix to his book Representation and Reality, Hilary Putnam “proves” that every ordinary open system implements every finite automaton, so that computation cannot provide a nonvacuous foundation for the sciences of the mind. I analyze Putnam’s argument and find it wanting. The argument can be patched up to some extent, but this only points the way to a better definition of implementation (of combinatorial-state automata) that is invulnerable to such an objection. A couple of open questions remain, however.
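The implementation condition at issue can be rendered schematically. The following is a toy sketch of my own (for simple finite-state automata rather than the combinatorial-state automata the paper defines, and with hypothetical names throughout): a physical system implements an automaton when some mapping f from physical states to automaton states makes the physical dynamics mirror the automaton’s state-transition structure, i.e. f(step(p)) == delta(f(p)) for every physical state p.

```python
# Toy sketch of an implementation condition for finite-state automata.
# All names (implements, step, delta, f) are illustrative, not from the paper.

def implements(phys_states, step, f, delta):
    """Check that the interpretation mapping f commutes with the dynamics:
    evolving physically then interpreting must equal interpreting then
    applying the automaton's transition function."""
    return all(f(step(p)) == delta(f(p)) for p in phys_states)

# Example: a two-state physical oscillator implementing an automaton
# that flips between abstract states A and B.
phys = [0, 1]
step = lambda p: 1 - p           # physical dynamics: flip the bit
f = {0: "A", 1: "B"}.get         # mapping from physical to automaton states
delta = {"A": "B", "B": "A"}.get # the automaton's transition function

print(implements(phys, step, f, delta))  # True
```

The point of Putnam-proofing the definition is that for richer (combinatorial-state) automata, the required mapping must respect causal/counterfactual structure across all state components, so arbitrary open systems no longer trivially qualify.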

Journal of Cognitive Science, 2012. A reply to comments by Curtis Brown, Frances Egan, Stevan Harnad, Colin Klein, Gerard O’Brien, Marcin Milkowski, Brendan Ritchie, Michael Rescorla, Matthias Scheutz, Oron Shagrir, Mark Sprevak, and Brandon Towl. With discussion of pluralism about computation, of computation and representation, and of counterexamples to structuralist accounts of implementation. [pdf]

This paper addresses some key questions about computation and its role in cognitive science. I give an account of what it takes for a physical system to implement a given computation (in terms of abstract patterns of causal organization), and use this account to defend “strong artificial intelligence” and justify the centrality of computational explanation in cognitive science. This paper was written in 1993 but unpublished for many years (though section 2 appeared in “On Implementing a Computation” in Minds and Machines, 1994), and was finally published as the subject of a 2012 symposium in the Journal of Cognitive Science. [html] [philpapers]

High-Level Perception, Analogy, and Representation: A Critique of Artificial Intelligence Methodology

Co-authored with Bob French and Doug Hofstadter. Journal of Experimental and Theoretical Artificial Intelligence 4:185-211, 1992. Reprinted in (D. R. Hofstadter) Fluid Concepts and Creative Analogies (Basic Books). This paper argues that high-level perception is crucially involved in most cognitive processing. We mount a critique of the common approach of using “frozen”, hand-coded representations in cognitive modeling (exemplified by Langley and Simon’s BACON and Gentner’s Structure-Mapping Engine), and argue for a different approach in which representations are constructed and molded “on the fly”. There was a response by Forbus, Gentner, Markman, and Ferguson, as well as a commentary by Morrison and Dietrich. [pdf] [philpapers]

Syntactic Transformations on Distributed Representations (1990)

[pdf] [philpapers] In this paper I demonstrate that a connectionist network can be used to perform systematic structure-sensitive transformations on compressed distributed representations of compositional structures. Using representations developed by a Recursive Auto-Associative Memory (Jordan Pollack’s model), a feedforward network learns to map the representation of an active sentence to that of the corresponding passive sentence, and vice versa. This paper appeared in Connection Science in 1990. This line of research has since been extended by a number of others, e.g. in Lonnie Chrisman’s “Learning Recursive Distributed Representations for Holistic Computation”.
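The core idea can be sketched in a few lines. This is a minimal stand-in, not the paper’s model: real RAAM encodings of sentences are replaced here by random fixed-width vectors, and a one-hidden-layer network is trained by backprop to map each “active” code to its paired “passive” code holistically, i.e. operating on the whole distributed representation at once rather than on symbolic constituents. All sizes and rates are arbitrary choices.

```python
# Sketch: holistic transformation of compressed distributed representations.
# Stand-in vectors replace genuine RAAM encodings.
import numpy as np

rng = np.random.default_rng(0)
DIM, HID, N = 13, 20, 40                 # code width, hidden units, pairs

active = rng.uniform(-1, 1, (N, DIM))    # stand-ins for "active" codes
passive = np.tanh(active @ rng.normal(size=(DIM, DIM)))  # paired "passive" codes

W1 = rng.normal(scale=0.1, size=(DIM, HID))
W2 = rng.normal(scale=0.1, size=(HID, DIM))
lr = 0.05

losses = []
for _ in range(500):
    # forward pass through two tanh layers
    h = np.tanh(active @ W1)
    out = np.tanh(h @ W2)
    err = out - passive
    losses.append(float((err ** 2).mean()))
    # backprop through the tanh nonlinearities
    d_out = err * (1 - out ** 2)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out / N
    W1 -= lr * active.T @ d_h / N

print(losses[0], losses[-1])  # the mapping error should drop with training
```

The systematicity claim in the paper goes further than this sketch: the trained network generalizes the transformation to encodings of novel sentences, not just memorized pairs.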

Connectionism and Compositionality: Why Fodor and Pylyshyn Were Wrong (1993)

[philpapers] I point out some structural problems with Fodor and Pylyshyn’s arguments against connectionism, and trace these to an underestimation of the role of distributed representation. I discuss some empirical results (from the paper above) that have some bearing on Fodor and Pylyshyn’s argument. This paper was published in Philosophical Psychology in 1993 (an earlier version was in the 1990 Proceedings of the Cognitive Science Society). See Murat Aydede’s “Connectionism and the Language of Thought” for some discussion. [pdf]

The Evolution of Learning: An Experiment in Genetic Connectionism (1990)

[philpapers] I combine genetic algorithms and neural networks to show how learning mechanisms might evolve in a population of organisms that initially have no capacity to learn. The dynamics of a neural network’s cross-time development are specified in a genome, and phenotypes are selected for their ability to learn various tasks across a lifetime. Over many generations, sophisticated learning mechanisms are developed, including on occasion the well-known delta rule. This paper appeared in the Proceedings of the 1990 Connectionist Summer School Workshop. See here for more models of evolution and learning. [pdf]
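The setup can be illustrated with a toy version (my own simplification, not the paper’s exact encoding): a genome holds the coefficients of a candidate weight-update rule, and genomes are selected by how well an organism using that rule learns randomly drawn tasks during its lifetime. When the coefficient on the pre-synaptic-activity-times-error term dominates, the evolved rule approximates the delta rule. Population size, mutation scale, and the rule’s functional form are all arbitrary assumptions here.

```python
# Toy genetic connectionism: evolve coefficients of a learning rule
#   dw = eta * (c0*pre*err + c1*pre + c2*err + c3)
# by selecting genomes whose "organisms" best learn random linear tasks.
import numpy as np

rng = np.random.default_rng(1)

def lifetime_fitness(genome, trials=20, steps=30):
    """Negative mean squared error of a one-weight learner after a lifetime
    of training with the candidate rule (higher is fitter)."""
    eta, c = 0.2, genome
    score = 0.0
    for _ in range(trials):
        target = rng.uniform(-1, 1)        # task: learn w close to target
        w = 0.0
        for _ in range(steps):
            pre = rng.uniform(-1, 1)       # input activation
            err = target * pre - w * pre   # teacher output minus actual output
            w += eta * (c[0]*pre*err + c[1]*pre + c[2]*err + c[3])
        score -= (w - target) ** 2
    return score / trials

POP, GENS = 30, 40
pop = rng.normal(scale=0.5, size=(POP, 4))
history = []
for _ in range(GENS):
    fit = np.array([lifetime_fitness(g) for g in pop])
    history.append(float(fit.max()))
    order = np.argsort(fit)[::-1]
    elite = pop[order[:POP // 2]]                            # truncation selection
    kids = elite + rng.normal(scale=0.1, size=elite.shape)   # mutation
    pop = np.vstack([elite, kids])

best = pop[np.argmax([lifetime_fitness(g) for g in pop])]
print("best rule coefficients:", np.round(best, 2))
```

In this sketch the evolved rule should comfortably outperform a no-learning genome (all coefficients zero), which is the analogue of the paper’s initial population of organisms with no capacity to learn.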

Subsymbolic Computation and the Chinese Room (1991)

[philpapers] In this paper I analyze the distinction between symbolic and subsymbolic computation, and use this to shed some light on Searle’s “Chinese Room” argument and the associated argument that “syntax is not sufficient for semantics”. I argue that subsymbolic models may be less vulnerable to this argument. I no longer think this paper is very good, but perhaps the analysis of symbolic vs. subsymbolic computation is worthwhile. It appeared in The Symbolic and Connectionist Paradigms: Closing the Gap, edited by John Dinsmore, published by Lawrence Erlbaum in 1991. [pdf]

Why Fodor and Pylyshyn Were Wrong: The Simplest Refutation