Ages ago (gosh, it was nearly five years ago) I had a piece where Blandula remarked that any robot clever enough to understand Isaac Asimov’s Three Laws of Robotics would surely be clever enough to circumvent them. At the time I think all I had in mind was the ease with which a clever robot would be able to devise some rationalisation of the harm or disobedience it was contemplating. Asimov himself was of course well aware of the possibility of this kind of thing in a general way. Somewhere (working from memory) I think he explains that it was necessary to specify that robots may not, through inaction, allow a human to come to harm, or they would be able to work round the ban on outright harming by, for example, dropping a heavy weight on a human’s head. Dropping the weight would not amount to harming the human because the robot was more than capable of catching it again before the moment of contact. But once the weight was falling, a robot without the additional specification would be under no obligation to do the actual catching.

That does not actually wrap up the problem altogether. Even in the case of robots with the additional specification, we can imagine that ways to drop the fatal weight might be found. Suppose, for example, that three robots, who in this case are incapable of catching the weight once dropped, all hold on to it and agree to let go at the same moment. Each individual can feel guiltless because if the other two had held on, the weight would not have dropped. Reasoning of this kind is not at all alien to the human mind; compare the planned dispersal of responsibility embodied in a firing squad.
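The logic of that excuse can be made concrete in a toy sketch. In the scenario as described, any one robot holding on keeps the weight up, so each robot's release is necessary but not sufficient for the drop — and it is the insufficiency that each robot leans on. The function names and the encoding of the scenario are my own invention, not anything from Asimov:

```python
# Toy model of the three-robot weight drop: one robot holding on is
# enough to keep the weight up, so it falls only if all three release.
# The encoding is an invented illustration of the scenario in the text.

def weight_falls(releases):
    """The weight drops only when every holder lets go."""
    return all(releases)

def my_release_sufficient(robot, n=3):
    """Would this robot's release, by itself, have dropped the weight?"""
    only_me = [i == robot for i in range(n)]
    return weight_falls(only_me)

releases = [True, True, True]          # all let go at the same moment
print(weight_falls(releases))          # True: the weight falls
print([my_release_sufficient(i) for i in range(3)])
# [False, False, False]: each robot can say "my letting go alone
# would not have dropped it -- the other two could have held on"
```

Each robot's action is insufficient on its own, which is exactly the room the rationalisation needs.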

Anyway, that’s all very well, but I think there may well be a deeper argument here: perhaps the cognitive capacity required to understand and apply the Three Laws is actually incompatible with a cognitive set-up that guarantees obedience.

There are two problems for our Asimovian robot: first it has to understand the Laws; second, it has to be able to work out what actions will deliver results compatible with them. Understanding, to begin with, is an intractable problem. We know from Quine that every sentence has an endless number of possible interpretations; humans effortlessly pick out the one that makes sense, or at least a small set of alternatives; but there doesn’t seem to be any viable algorithm for picking through the list of interpretations. We can build in hard-wired input-output responses, but when we’re talking about concepts as general and debatable as ‘harm’, that’s really not enough. If we have a robot in a factory, we can ensure that if it detects an unexpected source of heat and movement of the kind a human would generate, it should stop thrashing its painting arm around – but that’s nothing like intelligent obedience of a general law against harm.
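The gap between a hard-wired reflex and intelligent obedience is easy to exhibit. A minimal sketch of the factory-robot rule described above — the thresholds and the sensor interface are invented for illustration:

```python
# A hard-wired input-output response of the kind described in the text:
# stop the painting arm when heat plus movement of roughly human
# character is detected. Thresholds and interface are invented.

HUMAN_TEMP_RANGE = (30.0, 40.0)   # degrees Celsius, roughly body heat
MOVEMENT_THRESHOLD = 0.2          # arbitrary units of detected motion

def looks_human(heat_celsius, movement):
    """Crude pattern match: warm and moving => treat as a person."""
    low, high = HUMAN_TEMP_RANGE
    return low <= heat_celsius <= high and movement > MOVEMENT_THRESHOLD

def control_step(heat_celsius, movement):
    """Return the arm command for one sensor reading."""
    if looks_human(heat_celsius, movement):
        return "STOP_ARM"          # the reflex: freeze the painting arm
    return "CONTINUE_PAINTING"

print(control_step(36.5, 0.8))     # warm and moving: STOP_ARM
print(control_step(36.5, 0.0))     # warm but motionless: keeps painting
```

The second case shows the limitation: a motionless human is simply missed, because the rule matches a pattern rather than understanding 'harm'.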

But even if we can get the robot to understand the Laws, there’s an equally grave problem involved in making it choose what to do. We run into the frame problem (in its wider, Dennettian form). This is, very briefly, the problem that arises from tracking changes in the real world. For a robot to keep track of everything that changes (and everything that stays the same, which is also necessary) involves an unmanageable explosion of data. Humans somehow pick out just the relevant changes; but again a robot can only pick out what’s relevant by sorting through everything that might be relevant, which leads straight back to the same kind of problem with indefinitely large amounts of data.
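The explosion is straightforward to show: with n independent yes/no facts about the world there are 2^n possible states, and after each action a naive tracker must decide, for every fact, whether it changed or stayed the same. A toy sketch, with fact names invented purely for illustration:

```python
from itertools import product

# A toy world of n independent yes/no facts has 2**n states that a
# naive tracker would have to consider; the fact names are invented.

facts = ["door_open", "light_on", "weight_falling", "human_present"]

# Exhaustive state space: grows exponentially with the number of facts.
states = list(product([False, True], repeat=len(facts)))
print(len(states))                 # 16 states for just 4 facts

# After an action, the naive robot checks every fact for change -- and
# must also record every fact that did NOT change (the frame axioms).
def naive_update(before, after):
    changed   = [f for f, b, a in zip(facts, before, after) if b != a]
    unchanged = [f for f, b, a in zip(facts, before, after) if b == a]
    return changed, unchanged

changed, unchanged = naive_update((False, True, False, True),
                                  (False, True, True, True))
print(changed)                     # ['weight_falling']
print(unchanged)                   # the three facts it still had to verify
```

Four facts give sixteen states; forty facts give over a trillion, which is why relevance, not raw bookkeeping, has to do the work.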

I don’t think it’s a huge leap to see something in common between the two problems; I think we could say that they both arise from an underlying difficulty in dealing with relevance in the face of the buzzing complexity of reality. My own view is that humans get round this problem through recognition; roughly speaking, instead of looking at every object individually to determine whether it’s square, we throw everything into a sort of sieve with holes that only let square things drop through. But whether or not that’s right, and putting aside the question of how you would go about building such a faculty into a robot, I suggest that both understanding and obedience involve the ability to pick out a cogent, non-random option from an infinite range of possibilities. We could call this free will if we were so inclined, but let’s just call it a faculty of choice.
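The sieve picture can be sketched as a filter: rather than conducting an open-ended deliberation about each object in turn, a fixed recogniser is applied wholesale and only the matching items drop through. The shapes and the squareness test are my own invented example:

```python
# The 'sieve' picture of recognition: throw everything at a cheap,
# uniform test and keep what drops through -- no per-object reasoning
# about relevance. The shapes below are invented examples.

shapes = [
    {"name": "tile",    "width": 2, "height": 2},
    {"name": "plank",   "width": 8, "height": 1},
    {"name": "coaster", "width": 3, "height": 3},
]

def is_square(shape):
    """The 'holes' in the sieve: one cheap test applied uniformly."""
    return shape["width"] == shape["height"]

# One pass of the sieve: the square things simply fall through.
squares = [s["name"] for s in shapes if is_square(s)]
print(squares)                     # ['tile', 'coaster']
```

The point of the metaphor is that the test is fixed in advance; the hard question the essay raises is where such a faculty of picking the right sieve could come from.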

Now I think that faculty, which the robot is going to have to exercise in order to obey the Laws, would also unavoidably give it the ability to choose whether to obey them or not. To have the faculty of choice, it has to be able to range over an unlimited set of options, whereas constraining it to any given set of outcomes involves setting limits. I suppose we could put this in a more old-fashioned mentalistic kind of way by observing that obedience, properly understood, does not eliminate the individual will but on the contrary requires it to be exercised in the right way.

If that’s true (and I do realise that the above is hardly a tight knock-down argument) it would give Christians a neat explanation of why God could not have made us all good in the first place – though it would not help with the related problem of why we are exposed to widely varying levels of temptation and opportunity. To the rest of us it offers, if we want it, another possible compatibilist formulation of the nature of free will.
