For starters, keep in mind that research in Conway's Game of Life is still ongoing and future developments may present a far less complicated solution.

Now then. Interestingly enough, this topic is as much in line with biology and quantum physics as with traditional computer science. The question at the root of the matter is whether any device can effectively resist random alterations to its state. The plain and simple answer is that it is impossible to make such a machine perfectly resistant to such random changes. This is true in much the same way that quantum mechanics permits seemingly impossible events: what prevents those events from occurring (and leads most people to declare them strictly impossible) is their stupendously small probability, a probability made so small by the vast difference in scale between the quantum level and the human level. It is similarly possible to make a state machine resistant to small amounts of random change simply by making it so large and redundant that any "change" noticed is effectively zero, but I assume that is not the goal. Assuming that, this can be accomplished in the same way that animals and plants are resistant to radiation or physical damage.
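To make the noise model concrete, here is a minimal sketch (names like `noisy_step` are mine, purely illustrative) of a Life step in which each cell of a bounded window may be toggled with some small probability `p` per generation, the kind of low-level disturbance discussed above:

```python
import random

def life_step(grid):
    """One generation of Conway's Game of Life; grid is a set of live (x, y) cells."""
    counts = {}
    for (x, y) in grid:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    # A cell lives next generation with 3 neighbours, or 2 if it is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in grid)}

def noisy_step(grid, p, extent=10):
    """A Life step followed by random damage: each cell in the window
    [-extent, extent) x [-extent, extent) is toggled with probability p."""
    new = set(life_step(grid))
    for x in range(-extent, extent):
        for y in range(-extent, extent):
            if random.random() < p:
                new.symmetric_difference_update({(x, y)})  # kill a live cell or spawn a stray one
    return new
```

With `p = 0` this is ordinary Life; with any `p > 0`, every finite pattern will eventually be disrupted, which is exactly the point: no fixed pattern survives unbounded random toggling forever.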

The question then may not be how to prevent low-level disturbances from doing too much damage, but rather how to recover from as much damage as possible. This is where biology becomes relevant: animals and plants have this very ability at the cellular level. (Note: I am speaking of cells in the biological sense throughout this answer.) Now, in Conway's Game of Life the notion of building a computing device at the scale of single cells is appealing (it does, after all, make such creations much smaller and more efficient), and we can already build self-replicating patterns (see Gemini), but this ignores the fact that the constructor object itself may be damaged by disturbances.

Another, more resilient, way I can see to solve this is to build computers out of self-reproducing redundant parts (think biological cells) that perform their operations, reproduce, and are replaced.
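The redundancy idea can be sketched outside of Life itself (the names `majority_vote` and `redundant_compute` are hypothetical, mine for illustration): run several copies of a computing unit and take a majority vote, so that a single damaged copy cannot corrupt the overall result.

```python
from collections import Counter

def majority_vote(outputs):
    """Return the value produced by the majority of the redundant units."""
    return Counter(outputs).most_common(1)[0][0]

def redundant_compute(f, x, copies=3, damaged=()):
    """Evaluate f(x) on `copies` redundant units; any unit whose index is in
    `damaged` returns a bit-flipped result, modelling a low-level disturbance."""
    results = []
    for i in range(copies):
        r = f(x)
        if i in damaged:
            r = not r  # this unit was hit and produces the wrong bit
        results.append(r)
    return majority_vote(results)
```

With three copies, any single damaged unit is outvoted; with five, any two; and so on. The probability of an uncorrected fault shrinks with each added copy, but it never reaches zero, which is the limitation the paragraph above describes.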

At this point we can see another interesting real-world parallel. These low-level disturbances are akin to the effects of radiation. This is most appreciable when you consider the type of damage that can be done to a cellular automaton. It is easy to trigger the cascade failure or "death" of a pattern in Conway's Game of Life, much as happens to many biological cells exposed to radiation. But there also exists the worst-case possibility of mutation: a "cancerous" component that keeps reproducing faulty copies of itself, copies that do not aid in the computation or that produce incorrect results.
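As a concrete illustration of that fragility (a small sketch using the usual set-of-live-cells representation), adding a single stray live cell next to a stable block destroys its stability, a one-cell "radiation hit" that cascades:

```python
def life_step(grid):
    """One generation of Conway's Game of Life; grid is a set of live (x, y) cells."""
    counts = {}
    for (x, y) in grid:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in grid)}

block = {(0, 0), (0, 1), (1, 0), (1, 1)}   # a still life: unchanged every generation
assert life_step(block) == block

damaged = block | {(2, 0)}                  # one random extra cell beside it
assert life_step(damaged) != damaged        # the still life is destroyed
```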

As I've said, it's impossible to build a system that is entirely foolproof; you can only make it less and less likely for a fault to compromise the entire system. Of course, the fundamental question here is really "are probabilistic simulations themselves Turing complete?", which has already been answered in the affirmative. I would have answered that fundamental question directly, except that it wasn't what you asked.