I was not consulted about the cover. The book is mainly concerned with the biological, psychological and philosophical significance of virtual machinery. I did not know that the publishers had decided to associate it with paper tape devices until it was published.

Copyright: Aaron Sloman, 1978

The online version of this work is licensed under a Creative Commons Attribution 4.0 International License. If you use, or comment on, any of this please include a URL if possible, so that readers can see the original (or the latest version).

For more freely available online books see http://onlinebooks.library.upenn.edu/

Other titles in this series:

ARTIFICIAL INTELLIGENCE AND NATURAL MAN: Margaret A. Boden

INFERENTIAL SEMANTICS: Frederick Parker-Rhodes

Other titles in preparation:

THE FORMAL MECHANICS OF MIND: Stephen N. Thomas

THE COGNITIVE PARADIGM: Marc De Mey

ANALOGICAL THINKING: MYTHS AND MECHANISMS: Robin Anderson

EDUCATION AND ARTIFICIAL INTELLIGENCE: Tim O'Shea

Published later:

GÖDEL, ESCHER, BACH: Douglas Hofstadter

BRAINSTORMS: Daniel Dennett

The book was first published in Great Britain in 1978 by

THE HARVESTER PRESS LIMITED

Publisher: John Spiers

2 Stanford Terrace, Hassocks, Sussex

(Also published in the USA by Humanities Press, 1978)


Sloman, Aaron
The computer revolution in philosophy.
(Harvester studies in cognitive science).
1. Intellect. 2. Artificial intelligence
I. Title
128'.2 BF431
ISBN 0-85527-389-5
ISBN 0-85527-542-1 Pbk.
Printed in England by Redwood Burn Limited, Trowbridge & Esher

New Material In Online Edition

Original 1978 book contents with format changes and other modifications

Page numbers below refer to the 1978 printed edition, not preserved in this version.


(Requires further re-organisation.)

Please see the "Creative Commons" licence above.

History of the online version of the book (since 2001)

(Incomplete summary)

Various past readers have pointed out errors and infelicities, but Mike Ferguson carried out a very thorough review, identifying a collection of errors, infelicities and gaps introduced in the steps between photocopying the book and creating the current web site. The result is an improved online (HTML+PDF) version of the book, for which I am very grateful. Remaining errors and infelicities are entirely my fault.

Origins

This book, published in 1978 by Harvester Press (UK) and Humanities Press (USA), has been out of print for many years. In 2001 Manuela Viezzer, then a PhD student here, photocopied the book onto A4 sheets.

Note: She is now an artist:

https://manuela-viezzer.squarespace.com/

http://www.saatchiart.com/manuela.viezzer

Sammy Snow, a departmental administrator, scanned in the photocopy and produced the original OCR version (in RTF, later converted to HTML). I am enormously grateful to Manuela and Sammy.

The scanned copy unfortunately had many pencilled comments and corrections so the files were very messy, but were eventually made readable. It proved necessary to redo all the figures. Various colleagues have reported portions of the text that needed correction after scanning and conversion to html. It is likely that errors still remain. Please report any to A.Sloman@cs.bham.ac.uk.

For a while, Ceinwen Cushway kindly made printed copies of the new version available at a charge to cover printing and postage. The online PDF version now makes this unnecessary. The online version of CRP became available in September 2001, as separate html chapters. This online edition now includes many corrections, and recently added notes and comments, e.g. the notes at the end of Chapter 9 (on vision), and many others, all marked as additions. There are also several unmarked changes to improve clarity or accuracy.

Downloadable PDF versions

PDF versions of individual chapters were first produced by reading the HTML files into OpenOffice, editing, then exporting to PDF. Since about 2012 the PDF files have been created directly from HTML, using the superb html2ps package and ps2pdf (on Linux). Since some time in 2015 the separate chapters have no longer been updated. An older PDF version (now out of date) is also available from the EPRINTS archive of ASSC (The Association for the Scientific Study of Consciousness). See CRP at ASSC eprints web site

In 2003, Michael Malien converted the html files (now out of date) to CHM format, still available in this zip file: http://www.cs.bham.ac.uk/research/cogaff/crp-chm.zip

Nils Valentin informed me that a tool for extracting html files from a chm file is obtainable here

A Russian student, Sergei Kaunov, created a Kindle e-book version in 2011. (Also out of date now. Would someone like to produce a new, updated Kindle version?)

http://www.amazon.co.uk/The-Computer-Revolution-Philosophy-ebook/dp/B006JT8FSK

He kindly commented: "It is a rare kind of scientific or philosophical book which become more valuable with time".



In December 2014 I installed a copy of the 1981 review of this book by Stephen Stich, and wrote a reply to the criticisms he (and others) had made of the claim in Chapter 2 that explanations of possibilities are a core part of science even if they are not falsifiable. More information about that review, and my response to the criticisms, can be found in a separate document, along with a link to Douglas Hofstadter's review, which also criticised that chapter.

HTML and PDF 'book' Versions (Some indentation lost in PDF version)

In July 2015 the online parts were combined to form this electronic book (with internal links) in HTML and PDF:

http://www.cs.bham.ac.uk/research/projects/cogaff/crp/crp.html

(about 890KB (Feb 2020))

http://www.cs.bham.ac.uk/research/projects/cogaff/crp/crp.pdf

(about 1.5MB (Feb 2020))

Separate chapters found online are now out of date. The HTML and PDF index pages from the last 17 pages of the book are available separately.

In 2019 I learnt about a freely available online book by William Rapaport, an excellent teaching resource available online free of charge and updated from time to time. His book and this book complement each other in the ways in which they relate philosophy and computation. We do not agree on everything, though there is much overlap of interests! We attempt to answer different, but overlapping, collections of questions about philosophy, science, AI, philosophy of mathematics, cognitive science, and computer science. More details can be found here:

Philosophical relevance

(Last updated June 2019)

Some parts of the book are dated, whereas others are still relevant to the scientific study of mind, to philosophical questions about the aims of science, the nature of theories and explanations, and varieties of concept formation, and to questions about the evolution of minds. Moreover, the answers to those questions are directly relevant to Artificial Intelligence, neuroscience and psychology, in addition to philosophy.

In particular, Chapter 2 describes the deep, largely ignored, overlap between science and philosophy insofar as both are concerned to investigate what sorts of things (including states, events, processes, etc.) are possible and how such things are possible (a deep kind of explanation based on compositionality of metaphysical types that seems to have largely been ignored in philosophy of science). This contradicts widely believed assumptions that science is primarily concerned with the discovery of laws, and that science and philosophy are distinct.

The most powerful forms of scientific explanation are generative: they explain how new classes/types of entity are possible. Examples involving compositionality occur in chemistry, biology, linguistics, AI/computer science, and many branches of engineering. That aspect of science was implicit throughout this book, including Chapter 6 on varieties of control of processing, and Chapter 9 on visual perception, where the examples discussed illustrate multiple internal languages (forms of representation) used in parallel in intelligent visual perception. Each has a kind of compositionality, but not always the linear compositionality found in spoken languages and algebraic expressions, and not necessarily discrete compositionality: for instance, some uses of the analogical (non-Fregean) forms of representation discussed in Chapter 7, based on my IJCAI 1971 paper [Sloman-71c], which introduced the contrast between Fregean and analogical forms of representation. For more on those ideas about different sorts of compositionality see this 2018 discussion of compositionality in biology (ideas still under continual development):

http://www.cs.bham.ac.uk/research/projects/cogaff/misc/compositionality.html (or pdf).

Chapter 2 analyses some of the variety of scientific advances ranging from shallow discoveries of new laws and correlations to deep science which extends our ontology, i.e. our understanding of what is possible, rather than just our understanding of what happens when.

Insofar as AI explores designs for possible mental mechanisms, possible mental architectures, and possible minds using those mechanisms and architectures, it is not merely a branch of engineering but, more fundamentally, a contribution to deep science, the science of what is possible. This contrasts with a great deal of empirical psychology, which is shallow science exploring correlations: the science of laws and regularities. Like much deep science, AI has metaphysical implications, regarding the types of information processing and information-processing mechanisms that are possible.

This contrast is taken up again in the Afterthoughts document mentioned above, and in a separate paper on explaining possibilities http://www.cs.bham.ac.uk/research/projects/cogaff/misc/explaining-possibility.html, developing and supporting the ideas in Chapter 2.

This "designer stance" approach to the study of mind was very different from the "intentional stance" being developed by Daniel Dennett at the same time, expounded in his 1978 book Brainstorms, and later partly re-invented by Allen Newell as the study of "The Knowledge Level" (see his 1990 book Unified Theories of Cognition). Both Dennett and Newell based their methodologies on a presumption of rationality, whereas the designer stance considers functionality, which is possible without rationality, as insects and microbes demonstrate well. Functional mechanisms may provide limited rationality, as Herbert Simon noted in his 1969 book The Sciences of the Artificial.

Relevance to AI and Cognitive Science

In some ways the AI portions of the book are not as out of date as the publication date might suggest, because it recommends approaches that have not yet been fully explored, e.g. the study of human-like mental architectures in Chapter 6, now pursued by various research groups associated with the Biologically Inspired Cognitive Architecture (BICA) society, http://bicasociety.org/ , among others. That web site includes a snapshot of proposals for cognitive architectures from around the time BICA was formed: http://bicasociety.org/cogarch/

Several of the recommended research directions, and some of the alternatives that have been explored instead, have not made huge amounts of progress (e.g. much vision research has gone in directions different from those recommended in Chapter 9). In particular, there has been a vast amount of research on the use of statistical evidence, either collected by a learning machine or assembled by humans or human products and fed into training processes, including "deep learning" mechanisms based on towers of mutually influencing statistical learning mechanisms.

I think the successes of such approaches are all narrowly bounded by the data used, unlike humans (and possibly some other animals), whose learning abilities include powers of extrapolation based on generative theories rather than data-mining. As Immanuel Kant pointed out, this can lead to understanding of impossibility and necessity, as in ancient mathematics, discussed briefly in Chapters 7 and 8. Statistics-based mechanisms that compute probabilities are incapable of discovering, or even representing, the sorts of impossibility and necessity discussed by Kant and in Sloman (1962). This needs to be explained in more detail elsewhere. Some of the work in progress on the Meta-Morphogenesis project mentioned below is relevant, including a paper on compositionality in biology, also referenced above.

http://www.cs.bham.ac.uk/research/projects/cogaff/misc/compositionality.html

http://www.cs.bham.ac.uk/research/projects/cogaff/misc/compositionality.pdf

And many collections of examples of different sorts, including

http://www.cs.bham.ac.uk/research/projects/cogaff/misc/impossible.html

I also have several online discussions of examples of abilities in humans (and possibly some intelligent non-human animals) to recognize and reason about necessity and impossibility without doing so on the basis of logical axioms and logical inferences. I learnt in 2018 that Alan Turing had made a related point in his PhD thesis, contrasting mathematical intuition and mathematical ingenuity (not really explained) and claiming that computers are capable of mathematical ingenuity, but not mathematical intuition. He did not say why not. I think he had unwittingly re-discovered some of Kant's ideas about ancient mathematical discoveries as non-empirical, non-analytic, non-contingent discoveries, i.e. non-empirical discoveries of synthetic necessary truths, the topic of my 1962 DPhil thesis Sloman(1962) also referenced in chapter 7.

Ideas about "Representational Redescription" presented in Annette Karmiloff-Smith's 1992 book Beyond Modularity summarised in her BBS 2004 article, are illustrated by my discussion of some of what goes on when a child learns about numbers in Chapter 8. That chapter suggests mechanisms and processes involved in learning about numbers that could be important for developmental psychology, philosophy and AI, but have never been properly developed. The chapter also emphasises the connections between cardinal (and ordinal) numbers and one-one correspondences (recognised as essential to the notion of number by Hume, Cantor, Frege, Russell and others). Understanding the concept also requires grasping the transitivity and other properties of 1-1 correspondences.

Many psychological studies of numerical cognition ignore that requirement, though Piaget was aware of it in the 1950s. There is nothing in current neuroscience that explains how necessary transitivity of a relation could be discovered and represented in a brain. An AI system based on logic might deduce the transitivity from a collection of axioms and definitions, but that is definitely not how the transitivity was originally discovered and represented. Neither is it remotely plausible that that is how children discover the necessary transitivity (often aged 5 or 6).

Without that transitivity, checking the equinumerosity of two collections would require setting up a direct correspondence between their elements, whereas somehow it was discovered long ago that a sequence of arbitrary names or symbols could be used as an intermediary, as discussed at length in Chapter 8.
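The point about counting as an intermediary can be illustrated with a small sketch (not from the book; the function names and the toy name sequence are my own illustrative choices). Two collections can be shown equinumerous either by pairing their elements off directly, or by pairing each with the same initial segment of an arbitrary fixed sequence of names, relying on the transitivity of one-one correspondence:

```python
# An arbitrary fixed sequence of names -- any memorised symbols would do.
NAMES = ["one", "two", "three", "four", "five"]

def paired_directly(xs, ys):
    """Set up a direct one-one correspondence by pairing items off."""
    xs, ys = list(xs), list(ys)
    while xs and ys:
        xs.pop()  # remove one item from each collection...
        ys.pop()  # ...pairing them with each other
    return not xs and not ys  # equinumerous iff both run out together

def count(collection):
    """Pair the collection one-one with an initial segment of NAMES,
    returning the last name used. (Assumes the collection is no longer
    than NAMES; zip stops at the shorter sequence.)"""
    last = None
    for _item, name in zip(collection, NAMES):
        last = name
    return last

apples = ["a1", "a2", "a3"]
stones = ["s1", "s2", "s3"]

# Direct pairing:
assert paired_directly(apples, stones)

# Counting: each collection is paired with the same segment of NAMES, so
# by transitivity of one-one correspondence the two collections must also
# correspond to each other -- without ever being paired directly.
assert count(apples) == count(stones) == "three"
```

The second method is the one children learn: the name sequence acts as a portable intermediary, so the two collections never need to be brought together.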

Some chapters have short notes commenting on developments since the time the book was published. I may add more such notes from time to time.

More recent work by the author

Last updated: 3 Apr 2020

4 Jun 2007; 28 Jul 2015; 26 Oct 2015. Reviews moved to separate document: 26 Dec 2015.

The most recent major venture closely related to the ideas in this book, begun late in 2011, is the Meta-Morphogenesis project, inspired by Turing's 1952 paper on the chemical basis of morphogenesis: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html (later sub-titled "The Self-informing Universe project" (Feb 2017)). It was triggered by an invitation to contribute four papers to the Elsevier Turing Centenary volume (2013), edited by Cooper and van Leeuwen. This new (sub-)project further illustrates the ideas in Chapter 2 on overlaps between science and philosophy, insofar as both investigate what is possible and what makes it possible. The ideas in that 2013 paper spawned a collection of branching sub-topics linking philosophy (especially philosophy of mathematics and philosophy of mind) to aspects of biological evolution; to the features of physics, and especially chemistry, that support the ever-expanding diversity and sophistication of forms of life; to the roles of fundamental and evolved construction-kits used by evolution and its products; and to a new framework for investigating forms of biological consciousness, including ancient forms of mathematical consciousness in humans derived from more widely shared forms of consciousness in other intelligent animals.

One of the key ideas in that project is that genomes for complex organisms do not merely specify a starting state that develops in interaction with the environment (e.g. by learning). Instead the Meta-Configured Genome hypothesis, referenced below, postulates delayed expression of under-specified (more abstract) parts of the genome, allowing parameters to be instantiated using information acquired and stored during earlier interactions with the environment. This requires the environment to be able to trigger forms of motivation (e.g. during play) that are not reward-based but architecture-based, leading to actions that provide information stored and used during later stages of gene expression.

A draft sequel to this book was partly written around 1985, but never published because I was dissatisfied with many of the ideas, especially because I did not think the notion of "computation" was sufficiently well defined. More recent work developing themes from the book is available in the Cognition and Affect Project directory

http://www.cs.bham.ac.uk/research/projects/cogaff/

in the slides for conference and seminar presentations here:

http://www.cs.bham.ac.uk/research/cogaff/talks/

in the frequently extended or modified contents of the 'Miscellaneous' directory:

http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREADME.html

and in the papers, discussion notes and presentations related to the CoSy robotic project (2004-2008):

http://www.cs.bham.ac.uk/research/projects/cosy/papers/

A particularly relevant discussion note is my answer to the question 'what is information?' -- in the context of the notion of an information-processing system such as an animal or a human:

http://www.cs.bham.ac.uk/research/projects/cogaff/09.html#905

which was supplemented by a contrast between Jane Austen's (semantic) concept of "information", used in her novels, and Claude Shannon's syntactic concept:

http://www.cs.bham.ac.uk/research/projects/cogaff/misc/austen-info.html

http://www.cs.bham.ac.uk/research/projects/cogaff/misc/austen-info.pdf

A new strand on compositionality, especially biological evolution's use of compositionality was launched late in 2018, mentioned above, closely related to still developing ideas about the Meta-Configured Genome, also part of the Turing-inspired Meta-Morphogenesis project:

http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-configured-genome.html (also PDF).

A list of some things I have done, many of which grew out of the ideas in this book, can be found in a document begun in 2005 and occasionally updated.

http://www.cs.bham.ac.uk/~axs/my-doings.html

NOTE on educational predictions made in 1978

The world has changed a lot since the book was published in 1978, but not enough, in one important respect.

In the Preface and in Chapter 1 comments were made about how the invention of computing was analogous to the combination of the invention of writing and of the printing press, and predictions were made about the power of computing to transform our educational system to stretch minds.

Alas, the predictions have not yet come true: instead, computers are used in schools for lots of shallow activities. Instead of teaching children cooking, as used to happen in 'domestic science' courses, we teach them 'information cooking' using word processors, browsers, and the like. We don't teach them to design, debug, test, analyse and explain new machines and tools, merely to use existing ones as black boxes. That's like teaching cooking instead of teaching chemistry.

In 2004 a paper on that topic, accepted for a UK conference on Grand Challenges in Computing Education, referred back to the predictions in this book and noted that the opportunities still remain. The paper, entitled 'Education Grand Challenge: A New Kind of Liberal Education --- Making People Want a Computing Education For Its Own Sake', is available in HTML and PDF formats here: http://www.cs.bham.ac.uk/research/cogaff/misc/gc-ed.html

Additional comments were made in 2006 in this document

Why Computing Education has Failed and How to Fix it

In 2007 I attempted, unsuccessfully, to generate interest in a multidisciplinary school syllabus combining computing, biology, cognitive science and philosophy.

http://www.cs.bham.ac.uk/~axs/courses/alevel-ai.html

Computing at School (Started in 2009)

http://www.computingatschool.org.uk/

Since 2009 the Computing At School organisation has grown and become very influential, helping to produce massive changes in UK computing education, from primary school (age about 6) upwards, with far more emphasis on designing, implementing and testing programs, and far less on merely using computer-based tools. A Computing At School conference has been organised in the School of Computer Science at the University of Birmingham every summer since 2009. Although this is bringing about a massive broadening and deepening of computing education in UK schools, it is mainly focused on making computers do things, rather than on trying to understand natural forms of information processing by modelling them, including human competences such as vision, use of language, learning, generating stories, etc. Perhaps that will change as more school teachers become available with the required confidence, insight and breadth of experience.

Some external links to (older, out of date, versions of) this book

(1 Apr 2018: several removed because no longer functional.)

CRP-Afterthoughts (external link): Notes on the development of these ideas after 1978 (in a separate document). This is a growing collection of pointers to discussions and developments after this book, including the Turing-inspired Meta-Morphogenesis project, begun 2011/2012.

New Material section updated: 26 Sep 2009; 15 Dec 2014; 4 Jul 2015; 24, 28 Jul 2015 (re-formatting); 27 Oct 2015; 15 Feb 2016 (formatting); 1 Jan 2018 (formatting and minor changes); 22 Feb 2019 (minor)


Original 1978 book contents

with format changes and other modifications

THE COMPUTER REVOLUTION IN PHILOSOPHY (1978)

Aaron Sloman

http://www.cs.bham.ac.uk/~axs/

DETAILED 1978 CONTENTS LIST

List of Figures (Chapters 6 to 9) (Added 28 Oct 2015)

The Computer Revolution In Philosophy (1978)

Preface and Acknowledgements


Original pages x--xiii

PREFACE TO 1978 EDITION

(Slightly modified in 2001)

Another book on how computers are going to change our lives? Yes, but this is more about computing than about computers, and it is more about how our thoughts may be changed than about how housework and factory chores will be taken over by a new breed of slaves.

Thoughts can be changed in many ways. The invention of painting and drawing permitted new thoughts in the processes of creating and interpreting pictures. The invention of speaking and writing also permitted profound extensions of our abilities to think and communicate. Computing is a bit like the invention of paper (a new medium of expression) and the invention of writing (new symbolisms to be embedded in the medium) combined. But the writing is more important than the paper. And computing is more important than computers: programming languages, computational theories and concepts -- these are what computing is about, not transistors, logic gates or flashing lights. Computers are pieces of machinery which permit the development of computing as pencil and paper permit the development of writing. In both cases the physical form of the medium used is not very important, provided that it can perform the required functions.

Computing can change our ways of thinking about many things, mathematics, biology, engineering, administrative procedures, and many more. But my main concern is that it can change our thinking about ourselves: giving us new models, metaphors, and other thinking tools to aid our efforts to fathom the mysteries of the human mind and heart. The new discipline of Artificial Intelligence is the branch of computing most directly concerned with this revolution. By giving us new, deeper, insights into some of our inner processes, it changes our thinking about ourselves. It therefore changes some of our inner processes, and so changes what we are, like all social, technological and intellectual revolutions. I cannot predict all these changes, and certainly shall not try.
The book is mainly about philosophical thinking, and its transformation in the light of computing. But one of my themes is that philosophy is not as limited an activity as you might think. The boundaries between philosophy and other theoretical and practical activities, notably education, software engineering, therapy and the scientific study of man, cannot be drawn as neatly as academic syllabuses might suggest. This blurring of disciplinary boundaries helps to substantiate a claim that a revolution in philosophy is intimately bound up with a revolution in the scientific study of man and its practical applications.

Methodological excursions into the nature of science and philosophy therefore take up rather more of this book than I would have liked. But the issues are generally misunderstood, and I felt something needed to be done about that.

I think the revolution is also relevant to several branches of science and engineering not directly concerned with the study of man. Biology, for example, seems to be ripe for a computational revolution. And I don't mean that biologists should use computers to juggle numbers -- number crunching is not what this book is about. Nor is it what computing is essentially about. Further, it may be useful to try to understand the relationship between chemistry and physics by thinking of physical structures as providing a computer on which chemical programs are executed. But I am not so sure about that one, and will not pursue it.

Though fascinated by the intellectual problems discussed in the book, I would find it hard to justify spending public money working on them if it were not for the possibility of important consequences, including applications to education.
But perhaps I should not worry: so much public money is wasted on futile research and teaching, to say nothing of incompetent public administration, ridiculous defence preparations, profits for manufacturers and purveyors of shoddy, useless or harmful goods (like cigarettes), that a little innocent academic study is marginal.

Early drafts of this book included lots of nasty comments on the current state of philosophy, psychology, social science, and education. I have tried to remove them or tone them down, since many were based on my ignorance and prejudice. In particular, my knowledge of psychology at the time of writing was dominated by lectures, seminars, textbooks and journal articles from the 1960s. Nowadays many psychologists are as critical as I could be of such psychology (which does not mean they will agree with my criticisms and proposed remedies). And Andreski's Social Science as Sorcery makes many of my criticisms of social science redundant.

I expect I shall be treading on many toes in my bridge-building comments. The fact that I have not read everything relevant will no doubt lead me into howlers. Well, that's life. Criticisms and corrections, published or private will be welcomed. (Except for arguments about whether I am doing philosophy or psychology or some kind of engineering. Demarcation disputes are usually a waste of time. Instead ask: are the problems interesting or important, and is some real progress made towards dealing with them?)

Since the book is aimed at a wide variety of readers with different backgrounds, it will be found by each of them to vary in clarity and interest from section to section. One person's banal oversimplification is another's mind-stretching novelty. Partly for this reason, the different chapters vary in style and overlap in content. The importance of the topic, and the shortage of informed discussion seemed to justify offering the book for publication despite its many flaws.
One thing that will infuriate some readers is my refusal to pay close attention to published arguments in the literature about whether machines can think, or whether people are machines of some sort. People who argue about this sort of thing are usually ignorant of developments in artificial intelligence, and their grasp of the real problems and possibilities in designing intelligent machines is therefore inadequate. Alternatively, they know about machines, but are ignorant of many old philosophical problems for mechanist theories of mind. Most of the discussions (on both sides) contain more prejudice and rhetoric than analysis or argument.

I think this is because in the end there is not much scope for rational discussion on this issue. It is ultimately an ethical question whether you should treat robots like people, or at least like cats, dogs or chimpanzees; not a question of fact. And that ethical question is the real meat behind the question whether artefacts could ever think or feel, at any rate when the question is discussed without any attempt to actually design a thinking or feeling machine.

When intelligent robots are made (with the help of philosophers), in a few hundred or a few thousand years time, some people will respond by accepting them as communicants and friends, whereas others will use all the old racialist arguments for depriving them of the status of persons. Did you know that you were a racialist? But perhaps when it comes to living and working with robots, some people will be surprised how hard it is to retain the old disbelief in their consciousness, just as people have been surprised to find that someone of a different colour may actually be good to relate to as a person.

For an unusually informative and well-informed statement of the racialist position concerning machines see Weizenbaum 1976. I admire his book, despite profound disagreements with it.
So, this book is an attempt to publicise an important, but largely unnoticed, facet of the computer revolution: its potential for transforming our ways of thinking about ourselves. Perhaps it will lead someone else, knowledgeable about developments in computing and Artificial Intelligence, to do a better job, and substantiate my claim that within a few years philosophers, psychologists, educationalists, psychiatrists, and others will be professionally incompetent if they are not well-informed about these developments.

NOTE, added 22 Jan 2020: Looking back at this preface and related parts of the book, I confess that I completely failed to foresee some of the dreadful misuses of computers that make up so much of our news in the 21st Century.

Last updated: 4 Jun 2007. Reformatted: 15 Jul 2015


The Computer Revolution In Philosophy (1978)

Original pages xiv--xvi

ACKNOWLEDGEMENTS

I have not always attributed ideas or arguments derived from others. I tend to remember content, not sources. Equally I'll not mind if others use my ideas without acknowledgement. The property-ethic dominates too much academic writing.

It will be obvious to some readers that besides recent work in artificial intelligence the central ideas of Kant's (1781) Critique of Pure Reason have had an enormous influence on this book. Writings of Frege, Wittgenstein, Ryle, Austin, Popper, Chomsky, and indirectly Piaget have also played an important role.

Many colleagues and students have helped me in a variety of ways: by provoking me to disagreement, by discussing issues with me, or by reading and commenting on earlier drafts of one or more chapters. This has been going on for a long time, so I am not sure that the following list includes everyone who has refined or revised my ideas, or given me new ones: Frank Birch, Margaret Boden, Mike Brady, Alan Bundy, Max Clowes, Steve Draper, Gerald Gazdar, Roger Goodwin, Steven Hardy, Pat Hayes, Geoffrey Hinton, Laurie Hollings, Nechama Inbar, Robert Kowalski, John Krige, Tony Leggett, Barbara Lloyd, Christopher Longuet-Higgins, Alan Mackworth, Frank O'Gorman, David Owen, Richard Power, Julie Rutkowska, Alison Sloman, Jim Stansfield, Robin Stanton, Sylvia Weir, Alan White, Peter Williams.

Pru Heron, Jane Blackett, Judith Dennison, Maryanne McGinn and Pat Norton helped with typing and editing. Jane Blackett also helped with the diagrams.

The U.K. Science Research Council helped, first of all by enabling me to visit the Department of Artificial Intelligence in Edinburgh University for a year in 1972-3, and secondly by providing me with equipment and research staff for a three year project on computer vision at Sussex.
Bernard Meltzer was a very helpful host for my visit to Edinburgh, and several members of the department kindly spent hours helping me learn programming, and discussing computing concepts, especially Bob Boyer, J. Moore, Julian Davies and Danny Bobrow. Steve Hardy and Frank O'Gorman continued my computing education when I returned from Edinburgh.

Several of my main themes concerning the status of mind can be traced back to interactions with Stuart Sutherland (e.g. see his 1970) and Margaret Boden. Her book Artificial Intelligence and Natural Man, like other things she has written, adopts a standpoint very similar to mine, and we have been talking about these issues over many years. So I have probably cribbed more from her than I know. She also helped by encouraging me to put together various privately circulated papers when I had despaired of being able to produce a coherent, readable book. By writing her book she removed the need for me to give a detailed survey of current work in the field of AI. Instead I urge readers to study her survey to get a good overview.

I owe my conversion to Artificial Intelligence, towards the end of 1969, to Max Clowes. I learnt a great deal by attending his lectures for undergraduates. He first pointed out to me that things I was trying to do in philosophical papers I was writing were being done better in AI, and urged me to take up programming. I resisted for some time, arguing that I should first finish various draft papers and a book. Fortunately, I eventually realised that the best plan was to scrap them.

(I have not been so successful at convincing others that their intellectual investments are not as valuable as the new ideas and techniques waiting to be learnt. I suspect, in some cases, this is partly because they were allowed by the British educational system to abandon scientific and mathematical subjects and rigorous thinking at a fairly early age to specialise in arts and humanities subjects. I believe that the knowledge-explosion, and the needs of our complex modern societies, make it essential that we completely re-think the structure of formal education, from primary schools upwards: indefinitely continued teaching and learning at all ages in sciences, arts, humanities, crafts (including programming) must be encouraged. Perhaps that will be the best way to cope with unemployment produced by automation, and the like. But I'm digressing!)

Note added 9 Feb 2016

Max died of a heart attack in 1981. A personal tribute and incomplete annotated biography/bibliography can be found here:

http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#61

Alison, Benjamin and Jonathan tolerated (most of the time) my withdrawal from family life for the sake of this book and other work. I did not wish to have children, but as will appear frequently in this book (e.g. in the chapter on learning about numbers), observing them and interacting with them has taught me a great deal. In return, my excursions into artificial intelligence and the topics of the book have changed my way of relating to children. I think I now understand their problems better, and have acquired a deeper respect for their intellectual powers.

The University of Sussex provided a fertile environment for the development of the ideas reported here, by permitting a small group of almost fanatical enthusiasts to set up a 'Cognitive Studies Programme' for interdisciplinary teaching and research, and providing us with an excellent though minuscule computing laboratory. But for the willingness of the computer to sit up with me into the early hours helping me edit, format, and print out draft chapters (and keeping me warm when the heating was off), the book would not have been ready for a long time to come. I hope that, one day, even better computing facilities will be commonplace in primary schools, for kids to play with. After all, primary schools are more important than universities, aren't they?

I should like to thank John Spiers of Harvester Press for tolerating my tardiness in completing the book, and Gillian Orton for patiently preparing the typescript for the printer.

Preface updated: 4 Jun 2007; Date Correction 15 Dec 2014; Reformatted 28 Jul 2015;

9 Feb 2016: Added link to Clowes Obituary/Bio



THE COMPUTER REVOLUTION IN PHILOSOPHY (1978): Chapter 1

Original pages 1-21

CHAPTER 1: INTRODUCTION AND OVERVIEW

1.1. Computers as toys to stretch our minds (Page 1)

Developments in science and technology are responsible for some of the best and some of the worst features of our lives. The computer is no exception. There are plenty of reasons for being pessimistic about its effects in the short run, in a society where the lust for power, profit, status and material possessions are dominant motives, and where those with knowledge -- for instance scientists, doctors and programmers -- can so easily manipulate and mislead those without. Nevertheless I am convinced that the ill effects of computers can eventually be outweighed by their benefits. I am not thinking of the obvious benefits, like liberation from drudgery and the development of new kinds of information services. Rather, I have in mind the role of the computer, and the processes which run on it, as a new medium of self-expression, perhaps comparable in importance to the invention of writing.

Think of it like this. From early childhood onwards we all need to play with toys, be they bricks, dolls, construction kits, paint and brushes, words, nursery rhymes, stories, pencil and paper, mathematical problems, crossword puzzles, games like chess, musical instruments, theatres, scientific laboratories, scientific theories, or other people. We need to interact with all these playthings and playmates in order to develop our understanding of ourselves and our environment, that is, in order to develop our concepts, our thinking strategies, our means of expression, and even our tastes, desires and aims in life. The fruitfulness of such play depends in part on how complex the toy and the processes it generates are, and how rich the interaction between player and toy is.

A modern digital computer is perhaps the most complex toy ever created by man. It can also be as richly interactive as a musical instrument.
And it is certainly the most flexible: the very same computer may simultaneously be helping an eight-year-old child to generate pictures on a screen and helping a professional programmer to understand the unexpected behaviour of a very complex program he has designed. Meanwhile other users may be attempting to create electronic music, designing a program to translate English into French, testing a program which analyses and describes pictures, or simply treating the computer as an interactive diary. A few old-fashioned scientists may even be doing some numerical computations.

Unlike pet animals and other people (also rich, flexible and interactive), computers are toys designed by people. So people can understand how they work. Moreover the designs of the programs which run on them can be and are being extended by people, and this can go on indefinitely. As we extend these designs, our ability to think and talk about complex structures and processes is extended. We develop new concepts, new languages, new ways of thinking. So we acquire powerful new tools with which to try to understand other complex systems which we have not designed, including systems which have so far largely resisted our attempts at comprehension: for instance human minds and social systems.

Despite the existence of university departments of psychology, sociology, education, politics, anthropology, economics and international relations, it is clear that understanding of these domains is currently at a pathetically inadequate level: current theories don't yet provide a basis for designing satisfactory educational procedures, psychological therapies, or government policies. But apart from the professionals, ordinary people need concepts, symbolisms, metaphors and models to help them understand the world, and in particular to help them understand themselves and other people.
At present much of our informal thinking about people uses unsatisfactory mechanistic models and metaphors, which we are often not even aware of using. For instance, even people who strongly oppose the application of computing metaphors to mental processes, on the grounds that computers are mere mechanisms, often unthinkingly use much cruder mechanistic metaphors, for instance 'He needed to let off steam', 'I was pulled in two directions at once, but the desire to help my family was stronger', 'His thinking is stuck in a rut', 'The atmosphere in the room was highly charged'. Opponents of the spread of computational metaphors are in effect unwittingly condemning people to go on living with hydraulic, clock-work, and electrical metaphors derived from previous advances in science and technology.

To summarise so far: it can be argued that computers, or, to be more precise, combinations of computers and programs, constitute profoundly important new toys which can give us new means of expression and communication and help us create an ever-increasing new stock of concepts and metaphors for thinking about all sorts of complex systems, including ourselves.

NOTE added 2 Feb 2020: Missed predictions
When writing this (probably in 1977) I completely failed to foresee the extent to which computation can facilitate a huge variety of types of evil, including financial fraud, fake news, social hysteria, mind-manipulation/control, extortion, abuse of the vulnerable, distribution of malware, privacy violation, and cyberwar, to name a few examples. That's not a reason to regret the development of computer-based technology. It is a reason to seek and deploy counter-measures, which will surely involve the power of computation.
I believe that not only psychology and the social sciences but also biology, and even chemistry and physics, can be transformed by attempting to view complex processes as computational processes, including rich information flow between sub-processes and the construction and manipulation of symbolic structures within processes. This should supersede older paradigms, such as the paradigm which represents processes in terms of equations or correlations between numerical variables. This paradigm worked well for a while in physics but now seems to dominate, and perhaps to strangle, other disciplines for which it is irrelevant. Apart from computing science, linguistics and logic seem to be the only sciences which have sharply and successfully broken away from the paradigm of 'variables, equations and correlations'. But perhaps it is significant that the last two pretend not to be concerned with processes, only with structures. This is a serious limitation, as I shall try to show in later chapters.

1.2. The Revolution in Philosophy (Page 3)

Well, suppose it is true that developments in computing can lead to major advances in the scientific study of man and society: what have these scientific advances to do with philosophy? The very question presupposes a view of philosophy as something separate from science, a view which I shall attempt to challenge and undermine later, since it is based both on a misconception of the aims and methods of science and on the arrogant assumption by many philosophers that they are the privileged guardians of a method of discovering important non-empirical truths.

But there is a more direct answer to the question, which is that very many of the problems and concepts discussed by philosophers over the centuries have been concerned with processes, whereas philosophers, like everybody else, have been crippled in their thinking about processes by too limited a collection of concepts and formalisms. Here are some age-old philosophical problems explicitly or implicitly concerned with processes.

How can sensory experience provide a rational basis for beliefs about physical objects?
How can concepts be acquired through experience, and what other methods of concept formation are there?
Are there rational procedures for generating theories or hypotheses?
What is the relation between mind and body?
How can non-empirical knowledge, such as logical or mathematical knowledge, be acquired?
How can the utterance of a sentence relate to the world in such a way as to say something true or false?
How can a one-dimensional string of words be understood as describing a three-dimensional or multi-dimensional portion of the world?
What forms of rational inference are there?
How can motives generate decisions, intentions and actions?
How do non-verbal representations work?
Are there rational procedures for resolving social conflicts?
There are many more problems in all branches of philosophy concerned with processes, such as perceiving, inferring, remembering, recognising, understanding, learning, proving, explaining, communicating, referring, describing, interpreting, imagining, creating, deliberating, choosing, acting, testing, verifying, and so on. Philosophers, like most scientists, have an inadequate set of tools for theorising about such matters, being restricted to something like common sense plus the concepts of logic and physics. A few have clutched at more recent technical developments, such as concepts from control theory (e.g. feedback) and the mathematical theory of games (e.g. payoff matrix), but these are hopelessly deficient for the tasks of philosophy, just as they are for the task of psychology.

The new discipline of artificial intelligence explores ways of enabling computers to do things which previously could be done only by people and the higher mammals (like seeing things, solving problems, making and testing plans, forming hypotheses, proving theorems, and understanding English). It is rapidly extending our ability to think about processes of the kinds which are of interest to philosophy. So it is important for philosophers to investigate whether these new ideas can be used to clarify and perhaps helpfully reformulate old philosophical problems, re-evaluate old philosophical theories, and, above all, to construct important new answers to old questions. As in any healthy discipline, this is bound to generate a host of new problems, and maybe some of them can be solved too.
I am prepared to go so far as to say that within a few years, if there remain any philosophers who are not familiar with some of the main developments in artificial intelligence, it will be fair to accuse them of professional incompetence, and that to teach courses in philosophy of mind, epistemology, aesthetics, philosophy of science, philosophy of language, ethics, metaphysics, and other main areas of philosophy, without discussing the relevant aspects of artificial intelligence, will be as irresponsible as giving a degree course in physics which includes no quantum theory. Later in this book I shall elucidate some of the connections. Chapter 4, for example, will show how concepts and techniques of philosophy are relevant to AI and cognitive science. Philosophy can make progress, despite appearances. Perhaps in future the major advances will be made by people who do not call themselves philosophers.

After that build-up you might expect a report on some of the major achievements in artificial intelligence to follow. But that is not the purpose of this book: an excellent survey can be found in Margaret Boden's book, Artificial Intelligence and Natural Man, and other works mentioned in the bibliography will take the interested reader into the depths of particular problem areas. (Textbooks on AI will be especially useful for readers wishing to get involved in doing artificial intelligence.)

My main aim in this book is to re-interpret some age-old philosophical problems in the light of developments in computing. These developments are also relevant to current issues in psychology and education. Most of the topics are closely related to frontier research in artificial intelligence, including my own research into giving a computer visual experiences, and analysing motivational and emotional processes in computational terms.
Some of the philosophical topics in Part One of the book are included not only because I think I have learnt important things by relating them to computational ideas, but also because I think misconceptions about them are among the obstacles preventing philosophers from accepting the relevance of computing. Similar misconceptions may confuse workers in AI and cognitive science about the nature of their discipline. For instance, the chapters on the aims of science and the relations between science and philosophy attempt to undermine the wide-spread assumption that philosophers are doing something so different from scientists that they need not bother with scientific developments, and vice versa. Those chapters are also based on the idea that developments in science and philosophy form a computational process not unlike the one we call human learning.

The remaining chapters, in Part Two, contain attempts to use computational ideas in discussing some problems in metaphysics, philosophy of mind, epistemology, philosophy of language and philosophy of mathematics. I believe that further analysis of the nature of number concepts and arithmetical knowledge in terms of symbol-manipulating processes could lead to profound developments in primary school teaching, as well as solving old problems in philosophy of mathematics.

In the remainder of this chapter I shall attempt to present, in bold outline, some of the main themes of the computer revolution, followed by a brief definition of "Artificial Intelligence". This will help to set the stage for what follows. Some of the themes will be developed in detail in later chapters. Others will simply have to be taken for granted as far as this book is concerned. Margaret Boden's book and more recent textbooks on AI fill most of the gaps.

1.3. Themes from the Computer Revolution (Page 6)

1. Computers are commonly viewed as elaborate numerical calculators, or at best as devices for blindly storing and retrieving information or blindly following sequences of instructions programmed into them. However, they can be more accurately viewed as an extension of human means of expression and communication, comparable in importance to the invention of writing. Programs running on a computer provide us with a medium for thinking new thoughts, trying them out, and gradually extending, deepening and clarifying them. This is because, when suitably programmed, computers are devices for constructing, manipulating, analysing, interpreting and transforming symbolic structures of all kinds, including their own programs.

2. Concepts of 'cause', 'law', and 'mechanism', discussed by philosophers, and used by scientists, are seriously impoverished by comparison with the newly emerging concepts. The old concepts suffice for relatively simple physical mechanisms, like clocks, typewriters, steam engines and unprogrammed computers, whose limitations can be illustrated by their inability to support a notion of purpose. By contrast, a programmed computer may include representations of itself, its actions, possible futures, reasons for choosing, and methods of inference, and can therefore sometimes contain purposes which generate behaviour, as opposed to merely containing physical structures and processes which generate behaviour. So biologists and psychologists who aim to banish talk of purposes from science thereby ignore some of the most important new developments in science. So do philosophers and psychologists who use the existence of purposive human behaviour to 'disprove' the possibility of a scientific study of man.

3.
Learning that a computer contains a certain sub-program enables you to explain some of the things it can do, but provides no basis for predicting what it always or frequently does, since that will depend on a large number of other factors which determine when this sub-program is executed and the environment in which it is executed. So a scientific investigation of computational processes need not be primarily a search for laws so much as an attempt to describe and explain what sorts of things are and are not possible. A central form of question in science and philosophy is 'How is X possible?', where X is something observed, reported by others, or thought to be the case. Many scientists, especially those studying people and social systems, mislead themselves and their students into thinking that science is essentially a search for laws and correlations, so that they overlook the study of possibilities. Linguists (especially since Chomsky) have grasped this point, however. (This topic is developed at length in Chapter 2.)

4. Similarly there is a wide-spread myth that the scientific study of complex systems requires the use of numerical measurements, equations, calculus, and the other mathematical paraphernalia of physics. These things are useless for describing or explaining the important aspects of the behaviour of complex programs (e.g. a compiler, an operating system, or Winograd's program described in his book Understanding Natural Language). Instead of equations and the like, quite new non-numerical formalisms have evolved in the form of programming languages, along with a host of informal concepts relating the languages, the programs expressed therein, and the processes they generate. Many of these concepts (e.g.
parsing, compiling, interpreting, pointer, mutual recursion, side-effect, pattern matching) are very general, and it is quite likely that they could be of much more use to students of biology, psychology and social science than the kinds of numerical mathematics they are normally taught, which are of limited use for theorising about complex interacting structures. Unfortunately, although many scientists dimly grasp this point (e.g. when they compare the DNA molecule with a computer program), they are often unable to use the relationship: their conception of a computer program is limited to the sorts of data-processing programs written in low-level languages like Fortran or Basic.

5. It is important to distinguish cybernetics and so-called 'systems theory' from this broader science of computation, for the former are mostly concerned with processes involving relatively fixed structures in which something quantifiable (e.g. money, energy, electric current, the total population of a species) flows between or characterises substructures. Their formalisms and theories are too simple to say anything precise about the communication of a sentence, plan or problem, or to represent the process of construction or modification of a symbolic structure which stores information or abilities. Similarly, the mathematical theory of information, of Shannon and Weaver, is mostly irrelevant, although computer programs are often said to be information-processing mechanisms. The use of the word 'information' in the mathematical theory has proved to be utterly misleading. It is not concerned with meaning or content or sense or connotation or denotation, but with probability and redundancy in signals. If more suitable terminology had been chosen, then perhaps a horde of artists, composers, linguists, anthropologists, and even philosophers would not have been misled. I am not denying the importance of the theory to electronic engineering and physics.
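The contrast between Shannon's technical notion of information and meaning can be made concrete with a small sketch (mine, not the author's): Shannon's measure depends only on the statistics of the symbols in a signal, so two messages built from exactly the same characters carry the same 'information' in his sense, however different their meanings.

```python
from collections import Counter
from math import log2

def entropy_per_symbol(text):
    """Shannon entropy, in bits per symbol, of the empirical
    distribution of characters in the text."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Two messages with the same characters, hence identical statistics,
# but quite different meanings:
print(entropy_per_symbol("dog bites man"))
print(entropy_per_symbol("man bites dog"))
```

Both calls print the same value: the measure is blind to which message was sent, which is exactly why 'information' in this technical sense must not be confused with meaning or content.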
In some contexts it is useful to think of communication as sending a signal down a noisy line, and understanding as involving some process of decoding signals. But human communication is quite different: we do not decode, we interpret, using enormous amounts of background knowledge and problem-solving abilities. That is, we map one class of structures (e.g. 2-D images) into another class (e.g. 3-D scenes). Chapter 9 elaborates on this, in describing work in computer vision. The same is true of artificial intelligence programs which understand language. Information theory is not concerned with such mappings.

6. One of the major new insights is that computational processes may be markedly decoupled from the physical processes of the underlying computer. Computers with quite different basic components and architecture may be equivalent in an important sense: a program which runs on one of them can be made to run on any other, either by means of a second program which simulates the first computer on the second, or by means of a suitable compiler or interpreter program which translates the first program into a formalism which the second computer can execute. So a program may run on a virtual machine. Differences in size can be got round by attaching peripheral storage devices such as magnetic discs or tapes, leaving only differences in speed. So all modern digital computers are theoretically equivalent, and the detailed physical structure and properties of a computer need not constrain or determine the symbol-manipulating and problem-solving processes which can run on it: any constraints, except for speed, can be overcome by providing more storage and feeding in new programs. Similarly, the programs do not determine the computers on which they can run.

7. Thus reductionism is refuted. For instance, if biological processes are computational processes running on a physico-chemical computer, then essentially the same processes could, with suitable re-programming, run on a different sort of computer. Equally, the same computer could permit quite different computations: so the nature of the physical world need not determine biological processes. Just as the electronic engineers who build and maintain a computer may be quite unable to describe or understand some of the programs which run on it, so may physicists and chemists lack the resources to describe, explain or predict biological processes. Similarly, psychology need not be reducible to physiology, nor social processes to psychological ones. To say that wholes may be more than the sum of their parts, and that qualitatively new processes may 'emerge' from old ones, now becomes an acceptable part of the science of computation, rather than old-fashioned mysticism. Many anti-reductionists have had this thought prior to the development of computing, but have been unable to give it a clear and indisputable foundation.

8. There need not be only two layers: programs and physical machine. A suitably programmed computer (e.g. a computer with a compiler program in it[2]) is itself a new computer, a new 'virtual machine', which in turn may be programmed so as to support new kinds of processes. Thus a single process may involve many layers of computations, each using the next lower layer as its underlying machine. But that is not all. The relations may sometimes not even be hierarchically organised, for instance if process A forms part of the underlying machine for process B and process B forms part of the underlying machine for process A. Social and psychological, and psychological and physiological, processes seem to be related in this mutually supportive way. Chapters 6 and 9 present some examples.
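The layering of virtual machines described here can be illustrated with a toy sketch (an illustration of mine, not from the book): a few lines of Python define a tiny stack machine, and a program written for that machine neither knows nor cares what the Python beneath it is running on.

```python
def run(program):
    """Interpret a tiny stack-machine language. Instructions are
    ("push", n), ("add",) and ("mul",). This interpreter turns its
    host (here Python, itself running on some physical computer)
    into a new virtual machine with its own 'machine language'."""
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError("unknown instruction: " + op)
    return stack[-1]

# (2 + 3) * 4, expressed in the virtual machine's own language:
print(run([("push", 2), ("push", 3), ("add",),
           ("push", 4), ("mul",)]))  # prints 20
```

The same stack-machine program would run unchanged on any other implementation of this interpreter, in any language, on any hardware: one small instance of the sense in which a computational process is decoupled from the machinery beneath it.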
The development of good tools for thinking about a system composed of multiple interlocking processes is only just beginning. Systems of differential equations and the other tools of mathematical physics are worse than useless, for the attempt to use them can yield quite distorted descriptions of processes involving intelligent systems, and encourage us to ask unfruitful questions.

9. Philosophers sometimes claim that it is the business of philosophy only to analyse concepts, not to criticise them. But constructive criticism is often needed, and in many cases the task will not be performed if philosophers shirk it. An important new task for philosophers is constructively critical analysis of the concepts and underlying presuppositions emerging from computer science and especially artificial intelligence. Further, by carefully analysing the mismatch between some of our very complicated ordinary concepts, like goal, decide, infer, perceive, emotion, believe, understand, and the models being developed in artificial intelligence, philosophers may help to counteract unproductive exaggerated claims and pave the way for further developments. They will be rewarded by being helped with some of their philosophical problems.

10. For example, the computational metaphor, paradoxically, provides support for a claim that human decisions are not physically or physiologically determined, since, as explained above, if the mind is a computational process using the brain as a computer, then it follows that the brain does not constrain the range of mental processes, any more than a computer constrains the set of algorithms that can run on it. It can be more illuminating to think of the program (or mind) as constraining the physical processes than vice versa.
Moreover, since the state of a computation can be frozen, and stored in some non-material medium such as a radio signal transmitted to a distant planet, and then restarted on a different computer, we see that the hitherto non-scientific hypothesis that people can survive bodily death, and be resurrected later on, acquires a new lease of life. Not that this version is likely to please theologians, since it no longer requires a god.

11. Recent attempts to give computers perceptual abilities seem to have settled the empiricist/rationalist debate by supporting Immanuel Kant's claim that no experiencing is possible without information-processing (analysis, comparison, interpretation of data) and that no information-processing is possible without pre-existing knowledge in the form of symbol-manipulating procedures, data-structures, and quite specific descriptive abilities. (This topic is elaborated in Chapter 9.) Shallow philosophical, linguistic and psychological disputes about innate or non-empirical knowledge are being replaced by much harder and deeper explorations of exactly what pre-existing knowledge is required, or sufficient, for particular types of empirical and non-empirical learning. What knowledge of two- and three-dimensional geometry and of physics does a robot need in order to be able to interpret its visual images in terms of tables, chairs and dishes to be carried to the sink? What kind of knowledge about its own symbolisms and symbol-manipulating procedures will a baby robot need in order to stumble upon and understand the discovery that counting a row of buttons from left to right necessarily produces the same result as counting from right to left, if no mistakes occur? (More on this sort of thing in the chapter on learning about numbers.)
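The counting discovery just mentioned can be stated as a tiny executable sketch (my illustration, not the author's): counting is pairing successive numerals with distinct objects, so the total cannot depend on the order in which the objects are visited.

```python
def count(items):
    """Count by pairing the numerals 1, 2, 3, ... with the items in
    the order given, returning the last numeral used -- as a child
    pairs number words with a row of buttons."""
    numeral = 0
    for _ in items:
        numeral = numeral + 1
    return numeral

buttons = ["red", "blue", "green", "yellow", "white"]
print(count(buttons))                  # counting left to right
print(count(list(reversed(buttons))))  # counting right to left: same total
```

Of course, the procedure itself does not 'know' that the two results are necessarily equal; the philosophical question is precisely what further knowledge about its own procedures a learner would need in order to notice and understand that necessity.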
Similarly, philosophical debates about the possibility of 'synthetic apriori' knowledge dissolve in the light of new insights into the enormous variety of ways in which a computational system (including a human society?) may make inferences, and perhaps discover necessary truths about the capabilities and limitations of its current stock of programs. For an example see the book by Sussman about a program which learns to build better programs for stacking blocks by analysing why initial versions go wrong.

(G.J. Sussman, A Computational Model of Skill Acquisition, American Elsevier, 1975.)

Epistemology, developmental psychology, and the history of ideas (including science and art) may be integrated in a single computational framework. The chapters on the aims of science and on number concepts are intended as a small step in this direction.

NOTE 5 Feb 2020: Mike Ferguson has pointed out that Isaiah Berlin's Against the Current, Essays in the History of Ideas, especially his account of Vico's distinction between kinds of knowing, is relevant.

12. One of the bigger obstacles to progress in science and philosophy is often our inability to tell when we lack an explanation of something. Before Newton, people thought they understood why unsupported objects fell. Similarly, we think practice explains learning, familiarity explains recognition, desire explains action. Philosophers often assume that if you have experienced instances and non-instances of some concept, then this 'ostensive definition' suffices to explain how you could have learnt this concept. So our experience of seeing blue things and straight lines is supposed to explain how we acquire the concepts blue and straight. As for how the relevant aspects of instances and non-instances are noticed, related to one another and to previous experiences, and how the irrelevant aspects are left out of consideration, the question isn't even asked. (Winston asked it, and gave some answers to it in the form of a primitive learning program: see his 1975.) Psychologists don't normally ask these questions either: having been indoctrinated with the paradigm of dependent and independent variables, they fail to distinguish a study of the circumstances in which some behaviour does and does not occur, from a search for an explanation of that behaviour. People assume that if a person or animal wants something, then this, together with relevant beliefs, suffices to explain the resulting actions. But no decent theory is offered to explain how desires and beliefs are capable of generating action, and in particular no theory of how an individual finds relevant beliefs in his huge store of information, or how conflicting motives enter into the process, or how beliefs, purposes, skills, etc. are combined in the design of an action (e.g.
an utterance) suited to the current situation. The closest thing to a theory in the minds of most people is the model of desires as physical forces pushing us in different directions, with the strongest force winning. The mathematical theory of games and decisions is a first crude attempt to improve on this, but is based on the false assumption that people start with a well-defined set of alternative actions when they take decisions. Work in artificial intelligence on programs which formulate and execute plans is beginning to unravel some of the intricacies of such processes. My chapter on aspects of the mechanism of mind will discuss some of the problems (i.e. Chapter 6.)

By trying to turn our explanations and theories into designs for working systems, we soon discover their poverty. The computer, unlike academic colleagues, is not convinced by fine prose, impressive looking diagrams or jargon, or even mathematical equations. If your theory doesn't work then the behaviour of the system you have designed will soon reveal the need for improvement. Often errors in your design will prevent it behaving at all. Books don't behave. We have long needed a medium for expressing theories about behaving systems. Now we have one, and a few years of programming explorations can resolve or clarify some issues which have survived centuries of disputation. Progress in philosophy (and psychology) will now come from those who take seriously the attempt to design a person. I propose a new criterion for evaluating philosophical writings: could they help someone designing a mind, a language, a society or a world? The same criterion is relevant to theorising in psychology. The difference is that philosophy is not so much concerned with finding the correct explanation of actual human behaviour. Its aims are more general. For more on the difference see chapters 2 and 3.

13. A frequently repeated discovery, using the new methodology, is that what seemed simple and easy to explain turns out to be very complex, requiring sophisticated computational resources, for instance: seeing a dot, remembering a word, learning from an example, improving through practice, recognising a familiar shape, associating two ideas, picking up a pencil. Of course, it may be that for all these achievements there are simple explanations, of kinds hitherto quite unknown. But at least we have learnt that we don't know them, and that is real progress. This also teaches a new respect for the intellects of infants and other animals. How does a bee manage to alight on a flower without crashing into it?

14. There are some interesting implications of the points made in 7 and 8 above. I mentioned that two computational processes may be mutually supportive. Similarly, two procedures may contain each other as parts, two information structures may contain each other as parts. More generally, a whole system may be built up from large numbers of mutually recursive procedures and data-structures, which interlock so tightly that no element can be properly defined except in terms of the whole system. (Recursive rules in formal grammars illustrate the same idea.) Since the system cannot be broken down hierarchically into parts, then parts of those parts, until relatively simple concepts and facts are reached, it follows that anyone learning about the system has to learn many different interrelated things in parallel, tolerating confusion, oversimplifications, inaccuracies, and constantly altering what has previously been learnt in the light of what comes later.[3] So the process of learning a complex interlocking network of circular concepts, theories and procedures may have much in common with the task of designing one.
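The idea of mutually recursive procedures can be illustrated with a minimal modern sketch (my example, not the book's): two procedures each defined only in terms of the other, so that neither can be understood in isolation.

```python
# Minimal sketch of mutual recursion (illustrative, not from the book):
# neither procedure can be defined without reference to the other, so
# a learner must grasp the pair as a whole, not one element at a time.
def is_even(n):
    """A number is even if it is zero, or if its predecessor is odd."""
    return True if n == 0 else is_odd(n - 1)

def is_odd(n):
    """A number is odd if it is not zero and its predecessor is even."""
    return False if n == 0 else is_even(n - 1)

assert is_even(10) and is_odd(7)
```

Each definition is circular on its own; only the interlocking pair has a determinate meaning, which is the point being made about networks of concepts.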
If all this is correct it not only undermines philosophical attempts to perform a logical analysis of our concepts in terms of ever more primitive ones (as Wittgenstein, for example, assumed possible in his Tractatus Logico-Philosophicus), it also has profound implications for the psychology of learning and for educational practice. It seems to imply that learning may be a highly creative process, that cumulative educational programmes may be misguided, and that teachers should not expect pupils to get things right while they are in the midst of learning a collection of mutually recursive concepts. This theme will be illustrated in more detail in the chapter on learning about numbers. (One implication is that this book cannot be written in such a way as to introduce readers to the main ideas one at a time in a clear and accurate way. Readers who are new to the system of concepts will have to revisit different portions of the book frequently. No author has the right to expect this. The book is therefore quite likely to fail to communicate.)

15. Much of what is said in this book simply reports common sense. That is, it attempts to articulate much of the sound intuitive knowledge we have picked up over years of interacting with the physical world and with other people. Making common sense explicit is the goal of much philosophising. Common sense should not be confused with common opinions, namely the beliefs we can readily formulate when asked: these are often false over-generalisations or merely the result of prejudice. Common sense is a rich and profound store of information, not about laws, but about what people are capable of doing, thinking or experiencing. But common sense, like our knowledge of the grammar of our native language, is hard to get at and articulate, which is one reason why so much of philosophy, psychology and social science is vapid, or simply false.
Philosophers have been struggling for centuries to develop techniques for articulating common sense and unacknowledged presuppositions, such as the techniques of conceptual analysis and the exploration of paradoxes. Artificial intelligence provides an important new tool for doing this. It helps us find our mistakes quickly. One reason for this is that attempts to make computers understand what we say soon break down if we haven't learnt to articulate in the programs the presuppositions and rich conceptual structures which we use in understanding such things. (See Abelson, 'The structure of belief systems', and Schank & Abelson, 1977.) Further, when you've designed a program whose behaviour is meant to exemplify some familiar concept, such as learning, perceiving, conversing, or achieving a goal, then in trying to interact with the program and in experiencing its behaviour it often happens that you come to realise that it does not really exemplify your concept after all, and this may help you to pin down features of the concept, essential to its use, which you had not previously noticed. So artificial intelligence contributes to conceptual analysis. (The interaction is two-way.)

16. Of course, merely imagining the program's behaviour would often suffice: running the program isn't necessary in principle. But one of the sad and yet exhilarating facts most programmers soon learn is that it is hard to be sufficiently imaginative to anticipate the kinds of behaviour one's program can produce, especially when it is a complex system capable of generating millions of different kinds of processes depending on what you do with it. It is a myth that programs do just what the programmer intended them to do, especially when they are interacting with compilers, operating systems and hardware designed by someone else. The result is often behaviour that nobody planned and nobody can understand. Thus new possibilities are discovered.
Such discoveries may serve the same role as thought-experiments have often done in physics. So computational experiments may help to extend common sense as well as helping us to analyse it.

17. One of the things I have been trying to do is undermine the conflict between those who claim that a scientific study of man is possible and those who claim it isn't. Both sides are usually adopting a quite mistaken view of the essence of science. Bad philosophical ideas seem to have a habit of pervading a whole culture (like the supposed dichotomy between the emotional, intuitive aspects of people and the cognitive, intellectual, or rational aspects -- a dichotomy I have tried to undermine elsewhere). The chapter on the aims of science attempts to correct widespread but mistaken views about the nature of science. I first became aware of the mistakes under the influence of linguistics and artificial intelligence.

18. One of the main themes of the revolution is that the pure scientist needs to behave like an engineer: designing and testing working theories. The more complex the processes studied, the closer the two must become. Pure and applied science merge. And philosophers need to join in.

19. I'll end with one more wildly speculative remark. Social systems are among the most complex computational processes created by man (whether intentionally or not). Most of the people currently charged with designing, maintaining, improving or even studying such processes are almost completely ignorant of the concepts, and untrained in the skills, required for thinking about very complex interacting processes. Instead they mess about with variables (on ordinal, interval or ratio scales), looking for correlations between them, convinced that measurement and laws are the stuff of science, without recognizing that such techniques are merely useful stop-gaps for dealing with phenomena you don't yet understand.
In years to come, our willingness to trust these politicians, civil servants, economists, educationalists and the like with the task of managing our social system will look rather laughable. I am not suggesting that programmers should govern us. Rather, I venture to suggest that if everyone were allowed to play with computers from childhood, not only would education become much more fun and stretch our minds much further, but people might be a lot better equipped to face many of the tasks which currently defeat us because we don't know how to think about them. Computer 'experts' would find it harder to exploit us.

1.4. (Page 17) What is Artificial Intelligence?

The best way to answer this question is to look at the aims of AI, and some of the methods for achieving those aims, and to show how the subject is decomposable into sub-domains and related to other disciplines. This would require a whole book, which is not my current purpose. So I'll give an incomplete answer by describing and commenting on some of the aims. AI is not just the attempt to make machines do things which when done by people are called "intelligent". It is much broader and deeper than this. For it includes the scientific and philosophical aims of understanding as well as the engineering aim of making.

The aims of Artificial Intelligence

1. Theoretical analysis of possible effective explanations of intelligent behaviour.

2. Explaining human abilities.

3. Construction of intelligent artefacts.

Comments on the aims:

The first aim is very close to the aims of Philosophy. The main difference is the requirement that explanations be 'effective'. That is, they should form part of, or be capable of contributing usefully to the design of, a working system, i.e. one which generates the behaviour to be explained. The second aim is often formulated, by people working in AI, as the aim of designing machines which 'simulate' human behaviour, i.e. behave like people. There are many problems about this, e.g. which people? People differ enormously. Also, what does 'like' mean here? Programs, mechanisms, and people may be compared at many different levels.

The programming of computers is not an essential part of the first two aims: rather it is a research method. It imposes a discipline, and provides a tool for finding out what your explanations are theoretically capable of explaining. Sometimes they can do more than you intended, usually less. People doing AI do not usually bother much about experiments or surveys of the kinds psychologists and social scientists do, because the main current need is not for more data but for better theories and theory-building concepts and formalisms, so that we can begin to explain the masses of data we already have. (In fact a typical strategy for getting theory-building off the ground, in AI as in other sciences, is to try to explain idealised and simplified situations, in which much of the available data are ignored: e.g. AI programs concerned with 'toy' worlds (like the world of overlapping letters described in chapter 9), and physicists treating moving objects as point masses.) An issue which bothers psychologists is how we can tell whether a particular program really does explain some human ability, as opposed to merely mimicking it. The short answer is that there is never any way of establishing that a scientific explanation is correct.
However, it is possible to compare rival explanations, and to tell whether we are making progress. Criteria for doing this are formulated in Chapter 2. The notion of 'intelligent behaviour' in the first aim is easy to illustrate but hard to define. It includes behaviour based on the ability to cope in a systematic fashion with a range of problems of varying structures, and the ability (consciously or unconsciously) to build, describe, interpret, compare, modify and use complex structures, including symbolic structures like sentences, pictures, maps and plans for action. AI is not specially concerned with unusual or meritorious forms of intelligence: ordinary human beings and other animals display the kinds of intelligence whose possibility AI seeks to explain. It turns out that there is not just one thing called 'intelligence', but an enormous variety of kinds of expertise: the ability to see various kinds of things, the ability to understand a language, the ability to learn different kinds of things, the ability to make plans, to test plans, to solve problems, to monitor our actions, etc. It also includes the ability to have motives, emotions, and attitudes, e.g. to feel lonely, embarrassed, proud, disgusted, elated, and so on. Each of these abilities involves domain-specific knowledge (factual and procedural: knowing that and knowing how). So, much current work in AI is exploration of the knowledge underlying competence in a variety of specialised domains: e.g., seeing blocks, understanding children's stories, making plans for building things out of blocks, assembling bits of machinery, reading handwriting, synthesising or checking computer programs, solving puzzles, playing chess and other games, solving geometrical problems, proving logical and mathematical theorems, etc. I.e.
a great deal of AI research is highly 'domain-specific', and amounts to an attempt to explicitly formulate knowledge people already use unconsciously in ordinary life or specialised activities. This is closely related to conceptual analysis as practised by linguists and philosophers. (See Chapter 4.) Alongside all this, there is the search for generality. So research is in progress on possible computing mechanisms and concepts which are not necessarily relevant only to one domain, but may be useful, or necessary, for explaining many different varieties of intelligence, e.g. mechanisms concerned with good ways of storing and retrieving information, making inferences, controlling processes, allowing sub-processes to interact and influence one another, allowing factual knowledge to be translated into procedural forms as required, etc. However, the role of general mechanisms seems to be much less important in explaining intelligent abilities than the role of domain-specific knowledge. As pointed out below, much of the domain-specific research overlaps with research in other disciplines, e.g. Linguistics, Psychology, Education, Philosophy, Anthropology, and perhaps Physiology. For example, you can't make a computer understand English without studying syntactic, semantic and pragmatic rules of English, that is, without doing Linguistics. A major effect of AI research as already mentioned is to establish that apparently simple tasks, like seeing a line, may involve very complex cognitive processes, using substantial prior knowledge. One side-effect of attempts to understand human abilities well enough to give them to computers, has been the introduction of some new approaches to teaching those abilities to children, for instance LOGO projects (see papers by Papert). These projects use a programming language based on programming languages developed for AI research, and they teach children and other beginners programming using such a language. 
These languages are much more suitable for teaching beginners than BASIC or FORTRAN, the most commonly used languages, because (a) they are very much more powerful, making it relatively easy to get the computer to do complex things and (b) they are not restricted to numerical computations. For example, LOGO, used at MIT and Edinburgh University, and POP-2, which we use at Sussex University, provide facilities suitable for manipulating words and sentences, drawing pictures, etc. (See Burstall et al. 1971.) AI gives people much more respect for the achievements of children, and more insight into the problems they have to solve in learning what they do. This leads to a better understanding of possible reasons for not learning so well.

1.5. (Page 20) Conclusion

The primary aim of my research is to understand aspects of the human mind. Different people will be interested in different aspects, and many will not be interested in the aspects I have chosen: scientific creativity, decision making, visual perception, the use of verbal and non-verbal symbolisms, and learning of elementary mathematics. At present I can only report fragmentary progress. Whether it is called philosophy, psychology, computing science, or anything else doesn't really interest me. The methods of all these disciplines are needed if progress is to be made. It may be that the human mind is too complex to be understood by the human mind. But the desire to attempt the impossible seems to be one of its persistent features.

Note

The remaining chapters, apart from Chapter 10, should be readable in any order. On the whole, people knowledgeable about philosophy and ignorant of computing will probably find chapters 2 to 5 easier than the following chapters. People interested in trying to understand how people work, and not so concerned with abstract methodological issues, may find chapters 2 to 5 tedious (or difficult?), and should start with Part Two, though they'll not be able to follow all the methodological asides, which refer back to earlier chapters.

Chapter 1 Endnotes

(1) I write 'program' not 'programme' since the former is a technical term referring to a collection of definitions, instructions and information expressed in a precise language capable of being interpreted by a computer. For more details see J. Weizenbaum, Computer Power and Human Reason. There is much in this book that I disagree with, but it is well worth reading, and may be a useful antidote to some of my excesses.

(2) A compiler is a program which translates programs from one programming language into another. E.g. an ALGOL compiler may translate ALGOL programs into the 'machine code' of a particular computer.

(3) Apparently Hegel anticipated some of these ideas. His admirers might advance their understanding of his problems by turning to the study of computation.
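The notion of a compiler described in endnote (2) can be illustrated with a toy sketch (entirely my own, and far simpler than any real ALGOL compiler): arithmetic expressions are translated into instructions for an imagined stack machine, which can then be run.

```python
# Toy illustration (mine, not the book's): a "compiler" in miniature.
# It translates arithmetic expressions (nested tuples) into instructions
# for an imagined stack machine, much as a real compiler targets machine code.
def compile_expr(expr):
    """Translate an expression into a list of stack-machine instructions."""
    if isinstance(expr, (int, float)):
        return [("PUSH", expr)]
    op, left, right = expr
    # Post-order: compile both operands, then apply the operator.
    return compile_expr(left) + compile_expr(right) + [(op, None)]

def run(program):
    """Execute the compiled instructions on a simple stack machine."""
    stack = []
    ops = {"ADD": lambda a, b: a + b, "MUL": lambda a, b: a * b}
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[op](a, b))
    return stack.pop()

# (2 + 3) * 4
code = compile_expr(("MUL", ("ADD", 2, 3), 4))
assert run(code) == 20
```

The two-stage structure, translate then execute, is the essential point of the endnote; everything else here is simplification.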

Chap. 1 updated: 4 Jun 2007; reformatted 1 Jul 2015


THE COMPUTER REVOLUTION IN PHILOSOPHY (1978): Chapter 2

Original pages 22-62

PART ONE: METHODOLOGICAL PRELIMINARIES

CHAPTER 2
WHAT ARE THE AIMS OF SCIENCE?[1]

2.1. Part One: Overview

2.1.1. Introduction

Very many persons and institutions are engaged in what they call scientific research. Do their activities have anything in common? They seem to ask very different sorts of questions, about very different sorts of objects, events and processes, and they use very different methods for finding answers. If we ask scientists what science is and what its aims are, we get a confusing variety of answers. Whom should we believe? Do scientists really know what they are doing, or are they perhaps as confused about their aims and methods as the rest of us? I suggest that it is as hard for a scientist to characterise the aims and methods of science in general as it is for normal persons to characterise the grammatical rules governing their own use of language. But I am going to stick my neck out and try.

If we are to understand the nature of science, we must see it as an activity and achievement of the human mind alongside others, such as the achievements of children in learning to talk and to cope with people and other objects in their environment, and the achievements of non-scientists living in a rich and complex world which constantly poses problems to be solved. Looking at scientific knowledge as one form of human knowledge, scientific understanding as one form of human understanding, scientific investigation as one form of human problem-solving activity, we can begin to see more clearly what science is, and also what kind of mechanism the human mind is.
I suggest that no simple slogan or definition, such as can be found in textbooks of science or philosophy, can capture its aims. For instance, I shall try to show that it is grossly misleading to characterise science as a search for laws. Science is a complex network of different interlocking activities with multiple practical and theoretical aims and a great variety of methods. I shall try to describe some of the aims and their relationships.

Oversimple characterisations, by both scientists and philosophers, have led to unnecessary and crippling restrictions on the activities of some would-be scientists, especially in the social and behavioural sciences, and to harmfully rigid barriers between science and philosophy. By undermining the slogan that science is the search for laws, and subsidiary slogans such as that quantification is essential, that scientific theories must be empirically refutable, and that the methods of philosophers cannot serve the aims of scientists, I shall try to liberate some scientists from the dogmas indoctrinated in universities and colleges. I shall also try in later chapters to show philosophers how they can contribute to the scientific study of man, thereby escaping from the barrenness and triviality complained of so often by non-philosophers and philosophy students.

An important reason for studying the aims and methods of science is that it may give us insights into the learning processes of children, and help us design machines which can learn. Equally, the latter project should help us understand science. A side-effect of my argument is to undermine some old philosophical distinctions and pour cold water on battles which rage around them, like the distinction between subjectivity and objectivity, the distinction between science and philosophy, and the battles between empiricists and rationalists. My views have been powerfully influenced by the writings of Karl Popper. However, several major points of disagreement with him will emerge.
2.1.2. First crude subdivision of aims of science

Science has not just one aim but several. The aims of scientific investigation can be crudely subdivided as follows:

1. To extend man's knowledge and understanding of the form and contents of the universe (factual aims),
2. To extend man's control over the universe, and to use this to improve the world (technological or practical aims),
3. To discover how things ought to be, what sorts of things are good or bad and how best to further the purposes of nature or (in the case of religious scientists) God (normative aims).

Whether the third aim makes sense (and many scientists and philosophers would dispute this) depends on whether it is possible to derive values and norms from facts. I shall not discuss it as it is not relevant to the main purposes of this book. The second kind of aim will not be given much attention either, except when relevant to discussions of the first kind of aim, on which I shall concentrate.

These aims are not restricted to science. We all, including infants and children, aim to extend our knowledge and understanding: science is unique only in the degree of rigour, system and co-operation between individuals involved in its methods. For the present, however, I shall not explore the peculiarities of science, since what it has in common with other forms of acquisition of knowledge has been too long neglected, and it is the common features I want to describe. In particular, notice that one cannot have the aim of extending one's knowledge unless one presupposes that one's knowledge is incomplete, or perhaps even includes mistakes. This means that pursuing science requires systematic self-criticism in order to find the gaps and errors.
This distinguishes both science and perhaps the curiosity of young children from some other belief systems, such as dogmatic theological systems and political ideologies. (See Chapter 6 for the role of self-criticism in intelligence.) But it does not distinguish science from philosophy. Let us now examine the factual aims of science more closely.

2.1.3. A further subdivision of the factual aims: form and content

The aims of extending knowledge and understanding can be subdivided as follows:

(1.a) Extending knowledge of the form of the world: Extending knowledge of what sorts of things are possible and impossible in the world, and how or why they are (the aim of interpreting the world, or learning about its form). (This will be further subdivided below.)

NOTE: I would now (since about 2002) express the aim of 'extending knowledge of what sorts of things are possible' in terms of 'extending the ontology we use'. This is also part of the process of child development, e.g. as illustrated in this presentation:

http://www.cs.bham.ac.uk/research/projects/cosy/papers/#pr0604

'Ontology extension' in evolution and in development, in animals and machines. And in: http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#glang

Evolution of minds and languages.

What evolved first and develops first in children:

Languages for communicating, or languages for thinking (Generalised Languages: GLs)?

(1.b) Extending knowledge of the contents of the world: Extending knowledge of what particular objects, events, processes, or states of affairs exist or existed in particular places at particular times (the aim of acquiring 'historical' knowledge, or learning about the contents of the world).

A similar distinction pervades the writings of Karl Popper, though he would disagree with some of the things I say below about (1.a). Different branches of science tend to stress one or other of these aims, though both aims are usually present to some extent. For instance, physics is more concerned with aim (1.a), studying the form of the world, whereas astronomy is perhaps more concerned with (1.b), studying the contents. Geology, geography, biology, anthropology, human history, sociology, and some kinds of linguistics tend to be more concerned with (1.b), i.e. with learning about the particular contents of particular parts of the universe. Chemistry, some branches of biology, economics and psychology attempt to investigate truths not so restricted in scope. In the jargon of philosophers, (1.a) is concerned with universals, (1.b) with particulars.

However, the two scientific aims are very closely linked. One cannot discover what sorts of things are possible, nor test explanatory theories, except by discovering particular facts about what actually exists or occurs. Conversely, one cannot really understand particular objects, events, processes, etc., except insofar as one classifies and explains them in the light of more general knowledge about what kinds of things there can be and how or why. These two aims are closely linked in all forms of learning about the world, not only in science. The study of form and the study of content go hand in hand.
(This must be an important factor in the design of intelligent machines.) I have characterised these aims in a dynamic form: the aim is to extend knowledge, to go on learning. Some might say that the aim is to arrive at some terminal state when everything is known about the form and content of the world, or at least the form. There are serious problems about whether this suggestion makes sense: for example how could one tell that this goal had been reached? But I do not wish to pursue the matter. For the present, it is sufficient to note that it makes sense to talk of extending knowledge, that is removing errors and filling gaps, whether or not any final state of complete knowledge is possible. Some of the criteria for deciding what is an extension or improvement will be mentioned later. Many philosophers of science have found it hard to explain the sense in which science makes progress, or is cumulative. (E.g. Kuhn (1962), last chapter.) This is because they tend to think of science as being mainly concerned with laws; and supposed laws are constantly being refuted or replaced by others. Very little seems to survive. But if we see science as being also concerned with knowledge of what is possible, then it is obviously cumulative. For a single instance demonstrates a new possibility and, unlike a law, this cannot be refuted by new occurrences, even if the possibility is re-described from time to time as the language of scientists evolves. Hypotheses about the limits of possibilities (laws) lack this security, for they are constantly subject to revision as the boundaries are pushed further out, by newly discovered (or created) possibilities. Explanations of possibilities and their limits frequently need to be refined or replaced, for the same reason. But this is all a necessary part of the process of learning and understanding more about what is possible in the world. (This is true of child development too.) It is an organic, principled growth. 
Let us now look more closely at aim (1.a), the aim of extending knowledge of the form of the world.

2.2. Part Two: Interpreting the world

2.2.1. The interpretative aims of science subdivided

The aim (1.a) of interpreting the world, or learning about its form, can be subdivided into several subgoals listed below. They are all closely related. To call some of them 'scientific' and others 'metaphysical' or 'philosophical', as empiricists and Popperians tend to do, is to ignore their inter-dependence. Rather, they are all aspects of the attempt to discover what is and what is not possible in the world and to understand why. All the following types of learning will ultimately have to be catered for in intelligent machines.

Development of new concepts and symbolisms making it possible to conceive of, represent, think about and ask questions about new kinds or ranges of possibilities (e.g. new kinds of physical sub