This post is by Keith O’Rourke and, as with all posts and comments on this blog, is just a deliberation on dealing with uncertainties in scientific inquiry and should not be attributed to any entity other than the author. As with any critically-thinking inquirer, the views behind these deliberations are always subject to rethinking and revision at any time.

I thought I would revisit a post of Andrew’s on artificial intelligence (AI) and statistics. The main point seemed to be that “AI can be improved using long-established statistical principles. Or, to put it another way, that long-established statistical principles can be made more useful through AI techniques.” The point(s) I will try to make here are that AI, like statistics, can be (and has been) improved by focusing on representations themselves. That is, focusing on what makes them good in a purposeful sense and how we and machines (perhaps only jointly) can build more purposeful ones.

To start, I’ll suggest the authors of the paper Andrew reviewed might wish to consider Robert Kass’ arguments that it is time to move past the standard conception of sampling from a population to one more focused on the hypothetical link between variation in data and its description using statistical models. Even more elaborately – on the hypothetical link between data and statistical models where here the data are connected more specifically to their representation as random variables. Google “rob kass pragmatic statistics” for his papers on this as well as reactions to them (including some from Andrew).

Given my armchair knowledge of AI, the focus on representations themselves and how algorithms can learn better ones first came to my attention in a talk by Yoshua Bengio at the 2013 Montreal JSM. (The only reason I went was that David Dunson had suggested to me that the talk would be informative.) Now, I had attended many of Geoff Hinton’s talks when I was in Toronto, but never picked up the idea of learning ways to represent rather than just predict. Even in the seminar he gave to a group very interested in the general theory of representation – the Toronto Semiotic Circle in 1991 (no, I am not good at remembering details – it’s on his CV). Of course this was well before deep neural nets.

So what is my motivation and sense of purpose for this post? The so what? Perhaps, as Dennett put it, to “produce new ways of looking at things, ways of thinking about things, ways of framing the questions, ways of seeing what is important and why” [The intentional stance. MIT Press, Cambridge (1987)]. For instance, at 32:32 in Yann LeCun – How Does The Brain Learn So Quickly? there is a diagram depicting reasoning with World Simulator -> Actor -> Critic. Perhaps re-ordering as Critic -> Actor -> World Simulator, re-expressed as Aesthetics -> Ethics -> Logic, and then expanded as: what you value should set out how you act, and that in turn how you represent what to possibly act upon – would be insightful. Perhaps not. But even Dennett’s comment does seem to be about representing differently to get better, more purposeful representations.

Of course, there will be the expected if not obligatory “what insight can be garnered from CS Peirce’s work” in this regard.

p.s. A variety of comments suggest some further clarification. By claiming artificial intelligence has always been part of human reasoning, I was trying to deflate the hype rather than add to it. When I started this post, I was very wary of claims of AI learning ways to represent, or even representing at all. However, I came to think it was more reasonable not to speculate or argue for any limits to what machines could do, nor to what things could or could not actually represent. Peirce did argue that signs stand for something to someone or something. For instance, I do think it is accepted that bees represent, to other bees, where they obtained the nectar. However, the current differences between human and machine representation that were pointed to below, and more fully in some of the links, are huge. Given this, my primary interest is in “how we and machines (perhaps only jointly) can build more purposeful ones”. For instance, I believe autonomous driving vehicles are a wasteful distraction that is likely delaying the development and deployment of life-saving computer-assisted driving aids.

It turns out Peirce thought AI was an important topic: “Precisely how much of the business of thinking a machine could possibly be made to perform, and what part of it must be left for the living mind, is a question not without conceivable practical importance; the study of it can at any rate not fail to throw needed light on the nature of the reasoning process.” [Logical Machines, 1887]

Now, he argued that much of representing is done outside one’s mind, using artefacts, paper and machines. As Steiner put it, “for Peirce, artefacts and machines can be parts of cognitive processes: a logical argument concerning the semiotic [representing] character of mental phenomena, and (especially) a functional argument on the constitutive role of these machines and artefacts for human intelligence.” That is, representing, along with critical thinking, has for a very long time been made more useful through AI-like techniques. Today, I believe some are even speculating that representing was first done outside the (pre)human’s mind, using artefacts that may have then led to the development of language and internal representations in the mind.

The main take-home point perhaps being that externalizing the representations allows us to stare at them, if not have them stare back at us. And publicly, so others can also stare and perhaps point out how the representations could be made less wrong. That is, we (or some of us) treat representations as thinking devices to be tried out and reworked – controlled – to better address purposes. The most important purpose being to represent reality so as to be able to act without frustration, because the representation somehow connects us with that reality we have no direct access to.

Perhaps also the main point of this blog?

OK, if really interested – read Steiner’s paper. Here are a couple of quotes.

“The idea that the cognitive processes of individuals may extend beyond their skin and skull, as they are notably composed of, constituted by, or spatially distributed over the manipulation, use or transformation of artefacts and machines was already suggested by Peirce. But not only: I now want to show how Peirce’s philosophy is very relevant if we want to inquire about the differences between the reasoning abilities of machines and human intelligence, in a framework in which human intelligence is notably made of machines, symbols, and their use.”

“The difference between human reasoning and machine reasoning is basically related neither to consciousness nor to originality, but to the degrees of control, purpose, and reflexivity human reasoning can exhibit … Human intelligence – including how we acquire and exercise self-control, purpose, and reflexivity – is basically made up of exo-somatic artefacts (including representational systems) and their use”.