Artificial Intelligence appears to be advancing in leaps and bounds: I can hardly contain my excitement about self-driving cars reaching the market soon, and I can’t wait to see what’s next. But have we been getting it wrong? Cognitive scientist Gary Marcus apparently thinks so, at least in terms of how we get machines to mimic human learning.

The most popular approach, called “deep learning”, involves flooding computers with vast amounts of data so that they can recognise patterns and attune themselves to them. Only by giving them as much data as possible can we teach them to recognise the often subtle exceptions to rules. But this is rather different to what humans do: we are able to extrapolate and act upon rules almost immediately, sometimes from only a handful of examples.
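To make the data-hunger point concrete, here is a deliberately toy sketch (entirely my own invention, not drawn from Marcus or from any real deep learning system): a learner that nudges a decision threshold a little after every mistake needs thousands of examples to approximate a rule a human might state after seeing one or two cases.

```python
import random

random.seed(0)

def sample():
    """One labelled example of the hidden rule 'x > 5'."""
    x = random.uniform(0, 10)
    return x, x > 5

# A crude statistical learner: adjust a threshold slightly on each mistake.
threshold, lr = 0.0, 0.01
for n in range(1, 100_001):
    x, label = sample()
    pred = x > threshold
    if pred != label:
        threshold += lr if pred else -lr
    if n in (10, 1_000, 100_000):
        # Typically still drifting toward the true boundary of 5 until
        # tens of thousands of examples have been seen.
        print(f"after {n:>6} examples: threshold ~ {threshold:.2f}")
```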

Imagine how advanced AI could become if it had that same rapid pattern-recognising capability, combined with the ability to experience a world of data at lightning-fast speed. Marcus’s insights into AI from cognitive science may be putting us on the verge of a major breakthrough.

But hold on. Perhaps the philosophy of science has something to teach us here, along with a word of caution. I couldn’t help but be reminded of this essay by Karl Popper.

These passages stood out:

Habits or customs do not, as a rule, originate in repetition. Even the habit of walking, or of speaking, or of feeding at certain hours, begins before repetition can play any part whatever. … As Hume admits, even a single striking observation may be sufficient to create a belief or an expectation. … Without waiting, passively, for repetitions to impress or impose regularities upon us, we actively try to impose regularities upon the world. We try to discover similarities in it, and to interpret it in terms of laws invented by us. Without waiting for premises we jump to conclusions. These may have to be discarded later, should observation show that they are wrong. This was a theory of trial and error - of conjectures and refutations. It made it possible to understand why our attempts to force interpretations upon the world were logically prior to the observation of similarities.

Popper’s view of the psychological origins of human knowledge (and, by extension, of what may be deemed “science”) seems to be exactly the sort of thing Marcus is looking to replicate in machine learning. I find this intuitively convincing. Children learn by trial and error, not by having countless experiences brute-forced upon them.
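As a minimal sketch of that alternative (again my own toy construction, not anything Marcus has proposed): a learner that jumps to the first conjecture consistent with what it has seen, and abandons it only when an observation refutes it, can settle on a rule from a single example.

```python
# Toy 'conjectures and refutations' loop (my sketch): hold a bold
# conjecture until an observation refutes it, then re-conjecture.
candidate_rules = {
    "even numbers":       lambda n: n % 2 == 0,
    "multiples of three": lambda n: n % 3 == 0,
    "greater than ten":   lambda n: n > 10,
    "perfect squares":    lambda n: int(n ** 0.5) ** 2 == n,
}

observations = [(4, True), (9, False), (12, True), (7, False), (16, True)]

seen, conjecture = [], None
for value, observed in observations:
    seen.append((value, observed))
    if conjecture is None or candidate_rules[conjecture](value) != observed:
        # Current conjecture refuted (or none held yet): jump to the
        # first candidate consistent with everything seen so far.
        conjecture = next(
            (name for name, rule in candidate_rules.items()
             if all(rule(v) == o for v, o in seen)),
            None,
        )
        print(f"after seeing {value}: conjecture -> {conjecture}")

print("held conjecture:", conjecture)
```

One observation is enough to create the belief, just as Hume (via Popper) suggests; further data matters only insofar as it can refute.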

But suppose Marcus is successful in creating such a Popperian AI, able to learn like a human. This, I think, raises some interesting problems that may make it commercially useless, if not downright dangerous:

1)

Humans effectively receive gigantic amounts of data every second (visual, auditory, gustatory, olfactory, vestibular, and so on). Presumably due to space constraints in our brains, we filter what we actually keep (recording it in our memory) and filter out what we (often instinctively) deem to be unimportant. Yet this filtering process is predicated on first hypothesising what may be important. Unless we suppose a Popperian AI were to have physically unlimited data storage capacity, wouldn’t it have to filter which data it chose to even record?

We would either have to precisely encode how it filters, or allow it to gradually develop its own filtering techniques. And wouldn’t that lead to certain hypotheses or conjectures - let’s call them “ideologies” - potentially becoming immune to refutation, much as humans are able to hold onto ideologies even in the presence of contravening evidence? In other words, once it stumbled upon its first ideology, a Popperian AI might then actively choose not to record any contravening evidence, assuming it to be irrelevant.
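A deliberately crude sketch of that worry (the names, the memory limit, and the relevance test are all my own inventions, purely for illustration): an agent that filters its input by relevance to its current hypothesis can stop recording exactly the evidence that would refute it.

```python
MEMORY_LIMIT = 100   # stand-in for the brain's storage constraint

class FilteringAgent:
    """Toy agent (my own construction) that must filter what it records."""

    def __init__(self, ideology):
        self.ideology = ideology   # the first hypothesis to become entrenched
        self.memory = []

    def perceive(self, observation):
        # Filtering presupposes a guess about what matters. The trap: data
        # clashing with the ideology is pre-judged as noise and never stored.
        if observation.get("contradicts") == self.ideology:
            return
        if len(self.memory) < MEMORY_LIMIT:
            self.memory.append(observation)

    def refuting_evidence(self):
        return [o for o in self.memory
                if o.get("contradicts") == self.ideology]

agent = FilteringAgent(ideology="all swans are white")
agent.perceive({"event": "white swan sighted"})
agent.perceive({"event": "black swan sighted",
                "contradicts": "all swans are white"})
print(agent.refuting_evidence())   # [] - the refutation was filtered out
```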

2)

Even presuming the AI had physically unlimited data storage, and recorded and assessed absolutely all the evidence that bombarded its sensors, wouldn’t it still be prey to irrefutable ideologies? Popper’s essay warns against the allure of theories like Marxism, astrology, or Freudianism. These theories are so all-encompassing that, once adopted, all facts may be interpreted as confirmations of the theory.

This is why even very clever humans can believe stupid things. A Popperian AI would have so much evidence and computing power at its immediate disposal - all of which could be twisted to confirm an irrefutable theory - that nobody would be able to convince it to reject the theory. Perhaps this is why science seems to advance one death at a time; in which case, we’d need to convince a Popperian AI to kill itself, or to allow itself to be killed.

Alternatively, in order to avoid potentially Jihadist self-driving cars, one would have to encode the Popperian AI to accept only hypotheses that are testable - much as Popper advocated as a criterion for demarcating “science”. But encoding it with an a priori belief in the Popperian definition of science would be self-contradictory: the demarcation criterion itself would then be held as an irrefutable dogma. In any case, I’m not sure how this could be done - after all, we wouldn’t be able to anticipate the billions upon billions of hypotheses that an AI with unlimited access to all data would be able to generate.
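For what it’s worth, here is a toy sketch of such a “testability gate” (my own framing of the idea in this paragraph, not an established technique): a hypothesis is admitted only if some conceivable observation would refute it.

```python
# Toy 'testability gate' (my own illustrative framing). A hypothesis is
# admitted only if at least one conceivable observation would refute it.

class Hypothesis:
    def __init__(self, name, predict):
        self.name = name
        self.predict = predict   # True if the theory is compatible with an observation

def is_testable(hypothesis, possible_observations):
    """Admit a hypothesis only if some conceivable observation refutes it."""
    return any(not hypothesis.predict(o) for o in possible_observations)

observations = ["prosperity", "recession", "revolution", "stability"]

falsifiable = Hypothesis(
    "crises follow credit booms",
    predict=lambda o: o != "stability",   # 'stability' would refute it
)
irrefutable = Hypothesis(
    "history confirms the theory",
    predict=lambda o: True,   # every outcome counts as confirmation
)

for h in (falsifiable, irrefutable):
    print(f"{h.name!r} testable: {is_testable(h, observations)}")
```

Note the catch, which is exactly the anticipation problem above: the gate only works if `possible_observations` can be enumerated in advance, and for an AI generating billions of hypotheses over unlimited data, it can’t be.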

3)

Even if the Popperian AI had both unlimited data storage and some innate ability to prevent itself from becoming biased by irrefutable hypotheses, it would still have physically limited computing power. This is important because, while humans appear to throw up conjectures automatically and instinctively, we expend quite a bit of energy refuting them. (We expend energy throwing them up too - but let’s set that aside for now.)

Presuming the Popperian AI were able, using its superior processing ability, to throw up a billion simultaneous conjectures about the world - how would it even begin to order which ones to attempt to refute first? In other words, as with humans, a Popperian AI would need to have “interests”. This would require developing some kind of neurological “physiology” for the Popperian AI to do anything at all, let alone to direct it down problem-solving paths we find commercially useful.

Any attempt to order the Popperian AI’s refutational processes would require that it be a rather lazy thinker about problems it “instinctively” deemed less important. This would, I think, leave it open to the same sorts of biases as humans, in much the same way as we only sparingly use our more energy-intensive “System 2” processes to actively refute conjectures (as Daniel Kahneman summarises). Even if a Popperian AI were immune to the problem of irrefutable ideologies, what if it adopted a working Marxist hypothesis early on, and simply never got around to refuting it because it was too busy refuting a billion other conjectures?
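To see how a conjecture could simply never come up for testing, here is a toy sketch (the “interest” scores, the energy budget, and the conjectures themselves are all my own inventions): refutation work is drawn from a priority queue ordered by interest, and fresh urgent conjectures keep crowding out the old one.

```python
import heapq, itertools

counter = itertools.count()   # tie-breaker so heapq never compares strings
queue = []

def conjecture(priority, name):
    """Queue a conjecture for testing; lower priority = more interesting."""
    heapq.heappush(queue, (priority, next(counter), name))

# A low-interest working assumption, adopted early and never revisited.
conjecture(9, "working assumption adopted early: Marxism is true")

ENERGY_PER_TICK = 2   # can only afford to test two conjectures per tick
for tick in range(3):
    # Every tick, fresh urgent conjectures arrive ahead of the old one.
    conjecture(1, f"tick {tick}: is that shape on the road an obstacle?")
    conjecture(2, f"tick {tick}: will that pedestrian step out?")
    for _ in range(ENERGY_PER_TICK):
        _, _, name = heapq.heappop(queue)
        print("testing:", name)

print("still untested:", [name for _, _, name in queue])
```

As long as arrivals match the energy budget, the low-priority conjecture starves indefinitely - it is never refuted, merely never scheduled.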

4)

This last point raises an even more fundamental and dangerous prospect. Having interests would imply that a Popperian AI would need to have opinions about the order in which to refute conjectures. In other words, it would need to have values - some system of ethics - with which to inform those opinions. And by needing to order its refutations - to ration its processing resources - it would require some sort of rational self-interest in pursuit of those values. (Not to mention that it would already need values and opinions in order to decide which data to record, assuming limited storage.)

By their very nature, values are received. Given that humans often violently disagree with one another about which values to hold, how are we to choose the best values to inform the Popperian AI’s “interests” - let alone ensure that it doesn’t choose ones that are downright dangerous to humanity? Even if we could settle on some, how would we encode them? And even if we could encode them, how would we prevent the AI from being “radicalised” - from adopting new and more dangerous values in their place?

—

Overall, the problem with a Popperian AI, able to learn as humans do, is that it would be prey to the same cognitive and moral imperfections as humans themselves, while having vastly superior processing power. It would be like giving birth to a potentially immortal human with superior intelligence. Superman might turn out to be a Nazi.