Consider a rock outside somewhere. It sits there, starting off in the morning in a certain state. The sun comes out and proceeds to warm it up. Its temperature climbs through the day until the sun sets, whereupon it cools through the night. The cycle starts again the next morning. The rock is going through a series of states throughout the day.

We can model the changing states of the rock with a computational model, which we’ll call R. However, if we can model the rock with that computation, then we can regard the rock as implementing that computation. In other words, the rock can be seen as a computational system implementing the algorithm R.

What if we want to consider the rock to be implementing something other than R? In truth, there are probably numerous computational models that would describe what is happening in the rock, depending on the level of detail we want to work at and the perspective we want to take. But suppose we want to interpret the rock to be doing something non-rockish.

Well, we can create a new model, which we’ll call R+M (rock + mapping). Let’s implement a clock algorithm with R+M. Naively, this might seem straightforward. The rock’s temperature will vary throughout the day, so all we need to do is map each temperature to a specific time. That’s what M adds to the model. Voilà, we’ve interpreted a rock to be implementing a clock.
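The naive R+M idea can be sketched in a few lines of code. Everything here is invented for illustration: a toy temperature curve stands in for the rock’s physics (R), and the mapping M is just an inversion of that curve back into an hour of the day.

```python
def rock_temperature(hour):
    """Toy rock state R: cool overnight, warming toward mid-afternoon.
    Peaks at hour 15 (3 PM); purely a stand-in for real physics."""
    return 10 + 15 * max(0.0, 1 - abs(hour - 15) / 9)

def mapped_time(temp):
    """Naive mapping M: find which morning hour produced this temperature.
    The curve rises monotonically from 6 AM to 3 PM, so the inversion
    is unambiguous on that interval."""
    return min(range(6, 16), key=lambda h: abs(rock_temperature(h) - temp))

# "Tell time" with the rock: read its temperature at 10 AM,
# then let M translate the reading back into an hour.
reading = rock_temperature(10)
print(mapped_time(reading))  # prints 10
```

Note that M itself does no real work here; it is a fixed lookup, which is what makes the interpretation feel cheap.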

But is the rock really implementing a clock algorithm? Before you answer, consider that if you ran a clock algorithm on your computer, the actual sequence of states inside your computer could be modeled at a much more primal physical level involving transistor voltage states, which might bear limited resemblance to a high level clock model. We’ll call this primal model C. Your computer has an I/O (input/output) system, which maps C into presenting all the things a clock would present. We’ll call the overall model of this C+M (computer+mapping).

It’s not C by itself which provides the clock, but C+M. What’s the functional difference between R+M and C+M? Certainly R and C are radically different, but aren’t we compensating for those differences by their respective versions of M?

If we can do this to consider the rock to be implementing a clock, can’t we do it with more sophisticated algorithms? Suppose we want to consider the rock to be running Microsoft Word. So we build a new R+M model, but this time M adds everything needed to map the rock’s temperature states to the computational states of Word.

But if we can do that, is there then any computation, any algorithm we can’t consider the rock to be implementing?

If the mind is computation, couldn’t we then extend R+M to be a conscious mind? In other words, with the right perspective, isn’t every rock implementing a conscious mind? Not just one mind, but every conceivable mind? In other words, if the computational theory of mind is correct, and we can map the sequence of states for any complex object, isn’t the universe teeming with consciousness?

Before you start having any concern about the way you might have treated the last rock you encountered, let’s back up a bit. In the first scenario of R+M above, we mapped the states of the rock to a clock. But this is problematic because the rock has variances in its inputs that the clock model doesn’t. For example, cloud cover and other weather conditions may affect exactly how warm the rock becomes, and there may be other environmental factors. When these come into play, we may find our R+M clock being a bit unreliable.

No problem. We’ll just build some adjustments into the mapping M. When the weather is overcast, or if it’s raining, or windy, we’ll adjust which temperature maps to which time to take the weather into account.

Except now, is our mapping still just a mapping? It’s taking in its own inputs, performing its own logic, and essentially dynamically adjusting as needed to ensure that the states of the rock map to the states of a clock. Of course, to have implemented Word or a mind, we would have to take similar albeit much more aggressive steps in the mappings for those algorithms. Does it still make sense to say the rock is implementing a clock, or Word, or a mind?
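The adjusted mapping can be sketched the same way. Again all the numbers and weather offsets are invented: the point is that M now consumes its own input (the weather) and runs its own correction logic before it ever consults the rock, which is exactly the shift from a bare lookup to a piece of implementation.

```python
# Toy weather offsets: how much each condition depresses the rock's
# temperature relative to a clear day. Invented for illustration.
OFFSETS = {"clear": 0.0, "overcast": -4.0, "rain": -6.0}

def rock_temperature(hour, weather):
    """Toy rock state: a morning warming curve, shifted by the weather."""
    base = 10 + 15 * max(0.0, 1 - abs(hour - 15) / 9)
    return base + OFFSETS[weather]

def mapped_time(temp, weather):
    """Adjusted M: no longer a fixed lookup. It first undoes the
    weather's effect (its own computation), then inverts the curve."""
    corrected = temp - OFFSETS[weather]
    clear_curve = lambda h: 10 + 15 * max(0.0, 1 - abs(h - 15) / 9)
    return min(range(6, 16), key=lambda h: abs(clear_curve(h) - corrected))

# On a rainy day a naive, fixed mapping would misread the hour,
# but the adjusted M still recovers it.
print(mapped_time(rock_temperature(11, "rain"), "rain"))  # prints 11
```

The more conditions M has to compensate for, the more of the clock’s actual logic lives inside M rather than in the rock.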

If two implementations of a piece of functionality are not physically identical, then isn’t it a judgment call whether they are functionally the same? Algorithm X executing on a desktop PC is not physically identical to algorithm X executing on an iPhone. Both implementations may have originated from the same source code, the same action plan in essence, but we would still need a mapping between the physical processes to consider them to be the same. Run algorithm X on a quantum processor, and the mapping might become extremely complex.

So our rock is implementing every algorithm? There have been a number of proposals to explain why this isn’t true.

First, it’s been pointed out that, in the case of something like a rock, the mappings are created after the fact, observing what happens in the rock and creating a framework to map it to a meaningful algorithm. This certainly makes the rock unusable as a computing tool. But does that mean it isn’t actually implementing the algorithm?

Another criticism, related to the first, is that the mapped algorithms are fragile, falling into incoherence if the rock’s state transitions don’t flow in just the right way. In other words, the algorithm isn’t just one of many the rock could be implementing, but the only one it could be implementing with the given interpretation.

That leads to challenges involving causality and dispositional states. The reasons why the rock moves from one state to another have no relation to why the (reliable) clock, or Word, or a mind moves from one state to another. Each state has a dispositional nature that, together with inputs into the system, causally leads to the next state. If the physical state’s dispositions don’t have some logical resemblance to the dispositions of the computational model’s state, then the mappings are arbitrary.

There are other criticisms. Is the rock really a computational system? We spend a lot of money to purchase computational devices, which can only be created with a lot of engineering. If the mind is a computational system, evolution invested a lot of resources developing its computational substrate in the brain. We don’t run Word on just any matter. Minds can’t exist in just any piece of biological matter. In both cases, it takes a very specialized structure. Regarding the rock as equivalent to a brain or silicon chip seems to ignore these important facts.

So, how complex can the mapping become before it is invalid? What specifically makes it invalid? Do the objections laid out above resolve the issue?

My own answer is that I think the mapping becomes invalid when it crosses into becoming part of the implementation. Due to the absurd amount of work required of the mappings, I’m not inclined to view the rock as implementing Word or conscious minds. It seems to me that the mapping is implementing these things and blaming it on the rock.

But as my friend and fellow blogger Disagreeable Me pointed out in our discussions, attempting to objectively nail down exactly where it crosses this line may be a lost cause. Every mapping, including the ones we use for our computational devices, has some degree of implementation. Ultimately, it may be that whether a particular physical system is implementing a particular algorithm is subjective, which implies that whether a system is conscious is also subjective.

Now, I think the idea of a rock being conscious requires a perverse interpretation, a ridiculous mapping, so I’m not going to be worried about the next rock I break apart or throw. But this becomes a more difficult matter for simulated beings, whose internals might be just close enough to those of a conventional conscious being to give us moral quandaries.

Many people see this as a problem with the computational theory of mind. While the consequence is profound, I don’t see it as a problem, but simply a stark fact of reality. To understand why, let’s back up and consider exactly what the computational theory of mind is. It’s the belief that the mind is what the brain does, as opposed to some separate substance. In other words, the mind is the function of the brain, and that function can be mathematically modeled.

But isn’t the purpose of any functional system always open to interpretation? To say otherwise is to veer into natural teleology, the belief that natural things have intrinsic purpose. Pursuit of teleology was abandoned by scientists centuries ago, because it could never be objectively established. I fear we might be discovering that the existence of a mind might lie on the far side of that divide.

There’s a strong sentiment that consciousness must be a fundamental aspect of reality. While it certainly is a fundamental aspect of the subjective reality of any conscious being, reality doesn’t appear to be telling us that consciousness is objectively fundamental, at least not in the way of a fundamental force like gravitation, electromagnetism, etc.

Unless, of course, I’m missing something?

Further reading

This post was inspired by a lengthy conversation with Disagreeable Me (and others). He has posted a much more rigorous, philosophically framed entry on it. If you’re feeling particularly energetic, there is a Stanford Encyclopedia of Philosophy article that covers this issue in pretty good depth. If you’re feeling truly masochistic, check out the papers cited by Disagreeable Me or the Stanford article.