A. Classical Chaitin Model.

1. The model. The fundamental notion in Chaitin's model is to consider life as evolving software. This will be specified below. To this end, let us recall some basic notions from AIT that are needed to define the model. Let {0,1}* be the set of finite strings of binary bits, with Λ denoting the blank space symbol. The size of a string x, i.e., its number of bits, is denoted |x|. The set of infinite bit-strings is denoted {0,1}∞. A classical computer C is a map that takes input data q and a program p and acts on the input to produce an output string x = C(q, p), which is the result of the computation, assuming it halts. The concrete structure and functioning of C is given by the classical Turing Machine5,6. When the input data is empty, we simply write C(p) = x, and when the output is simply stopping the computer with no output, we write C(p): halts. A universal Turing Machine (UTM) U is one that can simulate the functioning of any other TM C.

The notion of complexity is basic in computability theory. It tells us whether a program p or input/output data q have a simple structure or not. Throughout this paper, we shall be using the notion of algorithmic complexity H(x) of a generic string of bits x ∈ {0,1}*. It was studied independently by Solomonoff13, Kolmogorov14 and Chaitin15 and is sometimes referred to as Kolmogorov complexity. It is defined as the size of the shortest program that can reproduce a given string x on a universal TM:

H(x) := min { |p| : U(p) = x }.    (1)

This notion of complexity grasps the idea that the information content of a string is related to its intrinsic computational structure rather than to its mere size. For example, a string like x = 0101010101010101… may be very large, but its structure is very simple: x = (01)^n, for a certain integer n. The same goes for other periodic or otherwise structured strings: their complexity is essentially bounded by a constant, H(x) < c, beyond the few bits needed to specify n. On the other side of the complexity scale are the random strings x_r, which are those without internal structure. This is captured by a complexity H(x_r) ≥ |x_r|, for the best a TM can do is to output the input string x_r itself.

A remark is in order. The algorithmic complexity H(x) is not computable, because of the Halting problem, and it is defined through an optimization (minimization) process. Nevertheless, this is no obstacle to producing good and rigorous upper bounds, which are enough to quantify the complexity of programs, data, etc.
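This can be made concrete with an illustrative sketch (not part of the model itself): any general-purpose compressor yields a computable upper bound on H(x), since the compressed string plus a fixed-size decompressor suffices to reproduce x.

```python
import random
import zlib

def complexity_upper_bound(bits: str) -> int:
    """Upper bound (in bits) on H(x): the size of a zlib-compressed
    description of x.  The true H(x) is noncomputable, but any compressor
    gives a rigorous upper bound up to an additive constant."""
    return 8 * len(zlib.compress(bits.encode()))

periodic = "01" * 500                       # structured string x = (01)^n
random.seed(0)
rnd = "".join(random.choice("01") for _ in range(1000))  # typical random string

# The structured string compresses far better than the random one.
print(complexity_upper_bound(periodic), complexity_upper_bound(rnd))
```

The absolute numbers depend on the compressor's overhead; only the gap between structured and random inputs matters here.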

The classical Chaitin model is characterized by a triplet of elements (organism, mutation, fitness function), whose definitions are:

i. Living Organism: it is a classical program, i.e., a piece of software that can be fed into a universal Turing Machine and produce a certain output, or just halt, or even not halt. If the program halts, then the output is a string of classical bits x. In the theory of classical computation, a program can also be characterized by a certain bit-string, so an organism is itself a finite bit-string with a well-defined size in bits. The rationale behind this choice is an abstraction process that reduces an organism to pure information encoded in its DNA. The rest of the organism, such as its body, functionalities etc., is disregarded as far as being essential to evolution is concerned. This is an oversimplification that is inherent to this toy model, and so far it is necessary in order to be able to apply tools from algorithmic information theory (AIT).

ii. Mutation: it is a classical algorithm M that transforms a given organism o into a mutated organism o' = M(o). Thus, it represents a transformation of the DNA by the action of agents external to the classical code. This notion of mutation is an algorithmic mutation, as opposed to the more typical point-wise mutations that are common in population genetics studies. What is remarkable is that an algorithmic mutation is far richer than the other notions of mutation considered thus far, and in this context it appears as the most general change that we can consider on a given living organism (classical code). Consider the following two very different mutations acting on an n-string x = x_1 x_2 … x_n in bitwise notation. One is a point-wise mutation M_j, which flips the single bit at position j, and the other is a bit-wise mutation M_g, which flips every bit of the string. While M_j represents a local change in the classical code (DNA), M_g affects the whole code globally. M_j is a typical mutation in population genetics, since it is more likely to change one single base of the genetic code than to make multiple changes, which are exponentially unlikely. On the contrary, the bit-wise mutation M_g produces a drastic change in the genetic code. It turns out to be useful since it may lead, for example, to a change of species. Both mutations are necessary, and they find a common framework in the algorithmic treatment of evolution. They have similar, small amounts of algorithmic complexity, and the same holds for their conditional complexities. Therefore, a big mutation is not penalized during the whole history of evolution. Evolution is a process that starts with the simplest organism and evolves towards more complex organisms under the action of a series of mutations M_k, k = 1, 2, …, N. The algorithmic complexity measures how the new successful organisms are becoming more advanced. It is the action of a mutation that defines the notion of time in this model, and it is given by the time-step k. The total evolution time would be N.

iii. Fitness Function: this is a cost function F that evaluates whether a mutated organism has improved with respect to the original.
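The two mutations above can be sketched in a few lines of code (the names point_mutation and global_mutation are ours, for the local flip M_j and the global flip M_g):

```python
def point_mutation(x: str, j: int) -> str:
    """M_j: flip only the bit at position j -- the local change typical
    of population genetics."""
    return x[:j] + ("1" if x[j] == "0" else "0") + x[j + 1:]

def global_mutation(x: str) -> str:
    """M_g: flip every bit -- a drastic, global change of the whole code."""
    return "".join("1" if b == "0" else "0" for b in x)

x = "00000000"
print(point_mutation(x, 3))   # -> 00010000
print(global_mutation(x))     # -> 11111111
```

Note that both are tiny programs whose size does not grow with n, reflecting the claim that the global mutation is not penalized in algorithmic complexity.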

Let o_k be the organism at a given time-step k. Then, in the next step the organism is mutated to o_{k+1} = M(o_k). The fitness function selects whether the new organism survives or fails: the mutated organism survives if it improves the fitness, F(o_{k+1}) > F(o_k); otherwise the mutation fails and the original organism o_k is kept.

Chaitin's deep insight into the problem of biological evolution is the choice of a fitness function from AIT. The idea is to see life as evolving software, such that a living organism is tested after a mutation has occurred, using a testing function that is an endless resource. This way, evolution will never be exhausted and will go on forever. In AIT there are several functions with this remarkable property, which makes them specially well-suited for this task: quantities that are definable but not computable. One example is the Busy Beaver function16. Another example is Chaitin's Ω number6,17,18, which represents the halting probability of self-delimiting TMs.

For the Busy Beaver function there are several variants which are equally good for the purpose of a fitness function that measures the rate of evolution. For instance, it can be defined as the maximum number of 1's output by a TM U after it halts, starting from a blank input q = Λ. To work with it, it is convenient to specify a maximum size N for the programs run by U and define the output as the largest integer, in binary form, that is computed after U halts. Thus, an N-th Busy Beaver function is denoted BB(N) and defined as

BB(N) := max { k : H(k) ≤ N },    (4)

where the algorithmic complexity (1) is defined for programs p that compute k = U(p) without input and halt. This is a well-defined function, but it is noncomputable: it grows faster than any computable function for sufficiently large N. Therefore, BB(N) cannot be bounded by any computable function of N. This is the property that makes BB(N) a good candidate for a fitness function: it is an endless source of creativity that enables us to test a new organism, a program, and see whether it is smarter by checking whether it can name a bigger number. Thus, we can use (4) as the fitness function and ask how the total mutation time T_N behaves as N grows. Let us mention in passing that naming increasingly bigger numbers requires lots of creativity, in the form of new functions and new ways of naming ever bigger numbers.

A more manageable and systematic choice of fitness function is Chaitin's Ω number. To define it, it is convenient to introduce the notion of the universal probability P_U(x) of a given string x ∈ {0,1}*:

P_U(x) := Σ_{p : U(p) = x} 2^{−|p|},

which is the probability that a program randomly drawn as a sequence of fair coin flips p = p_1 p_2 … will compute the string x. That this is a well-defined probability distribution is a central result in AIT. It relies on some technical details: a) the programs p are not arbitrary, but self-delimiting; b) convergence of the series is guaranteed by the Kraft inequality19. A self-delimiting program is a program that knows when to stop by itself, without additional stopping symbols. It is constructed from a set of prefix-free bit-strings: strings that are not a prefix of any other string in the set (see Methods section III). In AIT, the algorithmic complexity and the universal probability of strings are related by a Shannon-type equation:

H(x) = −log_2 P_U(x) + O(1).
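Both ingredients can be checked in a minimal sketch, using the simple prefix-free scheme 1^n 0 x that reappears below: no codeword is a prefix of another, and the Kraft sum Σ 2^{−|p|} stays below 1, so the series defining P_U converges.

```python
from fractions import Fraction
from itertools import product

def self_delimiting(x: str) -> str:
    """Encode x as 1^|x| 0 x: the unary header announces how many payload
    bits follow, so the reader knows when to stop without an end marker."""
    return "1" * len(x) + "0" + x

codewords = [self_delimiting("".join(bits))
             for n in range(1, 5) for bits in product("01", repeat=n)]

# Prefix-free: no codeword is a prefix of a different codeword...
assert not any(a != b and b.startswith(a) for a in codewords for b in codewords)

# ...hence the Kraft inequality holds for this set.
kraft_sum = sum(Fraction(1, 2 ** len(c)) for c in codewords)
print(kraft_sum)   # 15/32, safely below 1
```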

The Ω number can be defined from the universal probability once we drop any reference to a particular output string:

Ω := Σ_{p : U(p) halts} 2^{−|p|}.

It is considered as the halting probability in the theory of TMs: it measures the probability that a randomly chosen program p will halt when run on a UTM. Thus, it is defined on the set of prefix-free halting programs, not on arbitrary programs. Interestingly enough, Chaitin proved that universal TMs exist for self-delimiting programs. This technical condition guarantees that 0 < Ω < 1: there are always programs that halt, but not all of them will halt, due to the halting problem. Again, Ω is well-defined and noncomputable; it hosts an inexhaustible amount of knowledge and is thus suited to be a fitness function. In short, if Ω were computable it would imply that there is no halting problem, which is false. Like BB(N), it is convenient to truncate Chaitin's number to programs of size at most N computed in time less than N:

Ω_N := Σ_{|p| ≤ N, U(p) halts in time ≤ N} 2^{−|p|}.

These Ω_N are lower bounds to the actual Ω. They form a computable, monotonically increasing sequence converging to Ω, but the rate of convergence is not computable, which reflects the non-computability of Ω itself.
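The flavour of these truncations can be mimicked with a toy model. Here the undecidable halting predicate is replaced by an arbitrary decidable rule of our own invention (a program "halts" iff it ends in 0); this is purely illustrative, but it shows how the truncations form a monotone sequence of lower bounds converging to a limit below 1.

```python
from fractions import Fraction
from itertools import product

def toy_halts(p: str) -> bool:
    # Decidable stand-in for the (undecidable) halting predicate.
    return p.endswith("0")

def omega_N(N: int) -> Fraction:
    """Toy truncation: sum 2^-|p| over 'halting' programs drawn from the
    prefix-free set {1^n 0 payload} with |p| = 2n + 1 <= N."""
    total = Fraction(0)
    for n in range((N + 1) // 2):
        for bits in product("01", repeat=n):
            p = "1" * n + "0" + "".join(bits)
            if toy_halts(p):
                total += Fraction(1, 2 ** len(p))
    return total

vals = [omega_N(N) for N in (1, 3, 5, 7, 9)]
print([str(v) for v in vals])   # monotonically increasing, always < 1
```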

Chaitin uses Ω_k to define an organism and a mutation at time-step k, as well as the fitness function. Namely, a proto-organism is defined by means of the first N(k) binary digits ω_i of Ω_k:

ω_1 ω_2 … ω_{N(k)}.    (11)

To complete the construction of the organism from the proto-organism, we need two more ingredients. One is to make it a self-delimiting program by including a prefix string 1^{N(k)} 0 (see Methods section III), and the other is to prepend a program p_Ω that reads off the fitness of the resulting organism. Altogether, the organism looks like:

p_Ω 1^{N(k)} 0 ω_1 ω_2 … ω_{N(k)}.
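The assembled organism is easy to parse mechanically. In the sketch below, the reader program p_Ω is represented by an opaque placeholder token (its actual content is not specified at this level of detail):

```python
P_OMEGA = "PROG"   # hypothetical placeholder for the reader program p_Omega

def make_organism(omega_bits: str) -> str:
    """Assemble p_Omega, the self-delimiting header 1^N 0, and the first
    N digits of the Omega lower bound."""
    return P_OMEGA + "1" * len(omega_bits) + "0" + omega_bits

def read_omega_bits(organism: str) -> str:
    """Parse the organism back: skip p_Omega, count the leading 1s to
    learn N, then read exactly N payload bits."""
    body = organism[len(P_OMEGA):]
    n = body.index("0")
    return body[n + 1 : n + 1 + n]

o = make_organism("10110")
print(o)                          # -> PROG11111010110
assert read_omega_bits(o) == "10110"
```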

The mutation acts on the organism by trying to improve the lower bound on Ω. According to AIT, a natural move at step k is to increase the current truncation by the scale of its k-th digit:

Ω_{k+1} = Ω_k + 2^{−k}.

Notice that this mutation induces, in turn, a mutation in the organism through the rules specified in its construction above. These mutations represent challenging an organism to find a better lower bound of Ω, which amounts to an ever increasing source of knowledge. To this end, the fitness function is introduced as follows: the mutated organism succeeds if its truncation is still a lower bound, Ω_{k+1} < Ω, and fails otherwise.

To understand this selection, notice that no valid truncation Ω_k can be greater than the real Ω; thus, exceeding Ω represents a failure. On the contrary, if the new truncation is still less than Ω, we have increased our knowledge of how many programs halt when run on a UTM. As Chaitin notices, this implies the use of an oracle2,3,4.
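The selection dynamics can be caricatured as follows. Since the real Ω is unknowable, we hide an ordinary real number behind the oracle; everything else, random mutations M_k that try to add 2^{−k} and acceptance only of genuine lower bounds, mirrors the scheme just described.

```python
import random

OMEGA = 0.7236528          # stand-in for the unknowable halting probability

def oracle(candidate: float) -> bool:
    """The only question the oracle answers: is this still a lower bound?"""
    return candidate < OMEGA

def evolve(steps: int, seed: int = 1) -> float:
    random.seed(seed)
    omega_k = 0.0                           # initial, trivial lower bound
    for _ in range(steps):
        k = random.randint(1, 30)           # random mutation M_k
        candidate = omega_k + 2.0 ** -k     # try to improve by 2^-k
        if oracle(candidate):               # fitness test: survive or fail
            omega_k = candidate
    return omega_k

print(evolve(20000))   # creeps up towards OMEGA from below
```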

It is possible to define a variant of the Busy Beaver function in terms of Ω_N, as the least N for which the first k bits of the binary expansion of Ω_N are correct; call it BB'(k). In AIT it can be proved that both Busy Beavers are approximately equal, BB'(k) ≈ BB(k).

B. Chaitin's Evolution Scenarios.

Let us denote by T_N the total mutation time, i.e., the number of mutations tried in order to evolve an initial organism into a certain fitter organism. Depending on the strategy followed by Nature, Chaitin considers three scenarios and computes the scaling of T_N with N. In this way, one can assess which is the best evolutionary scenario. The results are the following:

Scenario I: Exhaustive Search.

In this scenario there is no strategy in Nature: every possible organism is tested regardless of the previous organism that originated it. Thus, there is no effective application of a fitness function; Nature simply explores all possible codes available in the phase space. Since, from AIT, in a given set of binary strings of length up to N there are of the order of 2^N strings, the evolution time is of order

T_N ∼ 2^N.    (15)

It takes an exponential time to reach a given organism.

Scenario II: Intelligent Design.

This scenario is the opposite of the previous one. Now Nature is not dumb, but is assumed to be intelligent enough to know about AIT and this model of evolution. The initial proto-organism is the trivial one, corresponding to the empty lower bound. The best strategy is to apply a process of interval halving to track down better lower bounds to Ω, by applying the mutations M_k, k = 1, 2, …, N, in this increasing order. Thus, the mutation time is of the order of N trials:

T_N ∼ N.    (16)

Thus, by selecting the order of the mutations intelligently, since we assume that Nature knows the structure of Ω, the total evolution time for an organism grows linearly in N.
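Interval halving here is just binary search for the digits of an unknown real in (0,1): one oracle query pins down one digit, so N digits cost exactly N trials. A sketch, with a stand-in number hidden behind the oracle:

```python
def interval_halving(oracle, n_bits: int) -> float:
    """Recover the first n_bits binary digits of an unknown number in (0,1),
    applying the mutations M_k (try to add 2^-k) in increasing order of k."""
    lower = 0.0
    for k in range(1, n_bits + 1):
        candidate = lower + 2.0 ** -k
        if oracle(candidate):        # one query per digit: N trials in total
            lower = candidate
    return lower

OMEGA = 0.6180339887                 # stand-in for the unknowable Omega
approx = interval_halving(lambda x: x < OMEGA, 30)
assert OMEGA - approx < 2.0 ** -30   # 30 queries fix 30 binary digits
```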

Scenario III: Cumulative Evolution at Random.

A more natural assumption is that Nature chooses the mutations randomly among the set of possible mutations. It is a random walk in the space of mutations. Remarkably enough, the evolution time grows between quadratically and cubically in N:

N^2 ≲ T_N ≲ N^3.    (17)

Although this is worse than scenario II, it is still polynomial growth, far from the exponential growth of scenario I.

C. Quantum Chaitin Model.

The following definitions are well-motivated when trying to bring concepts from Quantum Information Theory (QIT) into Chaitin's classical model. They can be made even more general, as discussed in Sect. II.

i. Quantum Organism: it is a pure quantum state |ψ⟩ in a Hilbert space of countably many qubits. In practice, we shall be dealing with a finite truncation to a number N of qubits, with Hilbert space H_N. The meaning of this choice is motivated by the notion of a classical organism as a program for a TM. Now, the quantum version is a pure state that encodes the information of a quantum program. This is meaningful since we have adhered to an abstraction process in which a living organism is divested of everything except its genetic code, which is represented by a classical program. Thus, a quantum organism is not a form of quantum life, but represents quantum effects in the classical code of DNA.

ii. Quantum Mutation: it is a quantum algorithm M_q that transforms the original quantum organism |ψ⟩ into a mutated quantum organism |ψ'⟩ = M_q(|ψ⟩).

iii. Quantum Fitness: it is a cost function that selects a mutated organism when it is fitter than the original.

The traditional characters of Quantum Information20,21, Alice (A) and Bob (B), can be adapted to the quantum evolution scenario: Alice is the organism before the mutation and Bob is the mutated organism. Then, the mutation will succeed or fail depending on the fitness of the pair (A, B).

In order to complete the above quantum definitions, we need to specify how to choose a triplet (organism, mutation, fitness) in the quantum case. We shall follow the classical model and try to find a quantum version of organisms as lower bounds to some Ω number to be specified. Once this is done, the quantum notions of mutation and fitness function will also follow. All this can be done by defining a notion of quantum algorithmic complexity.

D. Quantum Algorithmic Complexity.

The quantumness of the Ω number that we are seeking for our definition of a quantum organism will depend on the notion of quantum algorithmic complexity H_q that we decide to use. In fact, there are several versions of H_q22,23,24,25 and not all of them are equivalent. We shall choose the definition of Mora and Briegel25, called network complexity H_net, because of the following properties25,26,27:

a. H_net is a classical algorithmic complexity associated to a quantum state. It describes how many classical bits of information are required to describe a quantum state of N qubits. Being classical, it will allow us to compare with previous evolution rates on an equal footing.

b. H_net has the special property that it requires an exponential number of classical bits for the description of generic quantum states. In particular, it detects a sharp difference between multipartite entangled states and separable states.

The network complexity is a description that Alice makes of a quantum state she has when she wants to send this information to Bob through a classical channel, so that Bob can eventually reproduce that state on his side. It quantifies the classical effort Bob would have to make. In order to define the network complexity, we need several operational elements: a) a universal set of quantum gates G; b) an alphabet to code circuit operations; and c) a fidelity or degree of precision ε ∈ (0,1). With the aid of these elements, we can construct a mapping from quantum states |ψ⟩ ∈ H_N to finite strings,

|ψ⟩ ↦ x_ψ ∈ {0,1}*,    (19)

and then,

H_q(|ψ⟩) := H_net(|ψ⟩) = H(x_ψ).    (20)

The first equality represents our choice of quantum algorithmic complexity, while the second is the definition of network complexity in terms of the classical algorithmic complexity (1).

The mapping (19) is constructed from the elements a)-c) as follows. Let us select a universal finite set of gates G, for example the one generated by the gates {H, T, CNOT}28, i.e., the Hadamard gate, the π/8-phase gate and the CNOT gate, respectively. Then, Alice sets up a quantum circuit U by concatenating gates from G and constructs a state |ψ_U⟩ = U|0⟩^⊗N from an initialization state |0⟩^⊗N. This prepared state can approximate the desired state |ψ⟩ with precision given by

‖ |ψ⟩ − |ψ_U⟩ ‖ ≤ ε.    (21)

In all that follows, ε is a parameter fixed once and for all from the beginning.

Next, Alice needs to use the alphabet in order to code all the operations in the circuit U and the preparation of the state with ε-precision (21). This is represented by a certain string of bits x_ψ ∈ {0,1}^M, where M is the length of the resulting bit-string and is a certain function of the number of qubits N. The mapping (19) is then given by this assignment |ψ⟩ ↦ x_ψ.

With this, the network complexity (20) is well-defined. An additional minimization process is assumed in (1), since the circuit U is not unique and it is natural to use the minimal circuit that prepares the state with the desired precision.
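A toy version of Alice's encoding makes the construction concrete. The 2-bit gate alphabet and the unary index encoding below are our own arbitrary choices, not those of Ref. 25; any fixed, decodable convention produces a bit-string x_ψ whose length M plays the role used in the text.

```python
GATE_CODE = {"H": "00", "T": "01", "CNOT": "10"}   # hypothetical 2-bit alphabet

def encode_index(i: int) -> str:
    """Prefix-free encoding 1^|b| 0 b of a qubit index i (b = binary of i)."""
    b = format(i, "b")
    return "1" * len(b) + "0" + b

def encode_circuit(circuit) -> str:
    """Map a gate list [(name, qubits...)] to the classical bit-string x_psi
    that Alice sends to Bob; its length M feeds into H_net = H(x_psi)."""
    return "".join(GATE_CODE[name] + "".join(encode_index(q) for q in qubits)
                   for name, *qubits in circuit)

# Circuit preparing a Bell state: H on qubit 0, then CNOT with control 0, target 1.
bell_circuit = [("H", 0), ("CNOT", 0, 1)]
x_psi = encode_circuit(bell_circuit)
print(x_psi, len(x_psi))   # -> 0010010100101 13
```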

Our choice of quantum algorithmic complexity has very important consequences for studying quantum effects in biological evolution:

1. According to this definition of quantum algorithmic complexity in terms of a classical network complexity, we realize that the set of quantum states is mapped onto the set of bit-strings. Thus, while the former is uncountable, the latter is countably infinite.

2. By virtue of this mapping we are complying with the Turing barrier.

3. The fact that the network complexity is classical means that our quantum Ω_q numbers will also be real numbers, and not quantum states or operators. However, we can make classical definitions of Ω numbers that represent different types of quantum states (see later).

4. In a traditional quantum information scenario, Bob needs to agree with Alice on which alphabet to use in order to communicate. In a quantum evolution scenario, there is no need to agree on a common language for the description, since there are not two observers, but a single organism that evolves.

We shall use the following fundamental results on network complexity and quantum states25. As a consequence of the Solovay-Kitaev theorem20,29,30, the number of gates, and hence the bit-string length M, of the circuit needed to construct a given multipartite state grows exponentially with N for a fixed accuracy ε.

Furthermore, the network complexity quantifies very differently the complexity of separable and entangled states25:

Separable States: H_net grows only polynomially in the number of qubits N.

Maximally Entangled States: H_net grows exponentially in N, of the order of 2^N.

Generic States: H_net is controlled by the Schmidt measure E_S, which quantifies the degree of entanglement in a multipartite pure state31, and interpolates between the two previous behaviours.

The fact that separable states are less complex than entangled states means that separable states are more likely: if we type a random bit-string into a computer, it will most likely correspond to a separable state. This raises a fundamental question: can we use the higher complexity of entangled states to accelerate the rate of biological evolution? To answer this question we need to introduce the corresponding quantum Ω numbers and the different scenarios in which evolution may develop.
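For the bipartite case the distinction is easy to compute: the Schmidt coefficients of a two-qubit pure state are the singular values of its 2×2 coefficient matrix, and the Schmidt rank separates product states (rank 1) from maximally entangled ones (rank 2). A small sketch with NumPy; the multipartite Schmidt measure used above generalizes this quantity.

```python
import numpy as np

def schmidt_rank(state: np.ndarray, eps: float = 1e-12) -> int:
    """Number of nonzero Schmidt coefficients of a two-qubit pure state,
    read off from the SVD of its 2x2 coefficient matrix."""
    sv = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
    return int(np.sum(sv > eps))

product = np.kron([1.0, 0.0], [1.0, 1.0]) / np.sqrt(2)  # |0>(|0>+|1>)/sqrt(2)
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)      # (|00>+|11>)/sqrt(2)

print(schmidt_rank(product))   # -> 1  (separable)
print(schmidt_rank(bell))      # -> 2  (maximally entangled)
```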

E. Quantum Omega Numbers.

In order to describe different types of quantum organisms, we need to define different types of Ω numbers associated to quantum states, using the basic results on the network complexity H_net. In particular, we can define Ω numbers associated to selected classes of states. Since we know from the geometry of the Hilbert space of states that the set of separable states does not intersect the set of maximally entangled states, we can define Ω numbers by restricting the sum over the programs originated by the mapping (19) to those yielding either separable or entangled states. By construction, these sets are discrete, since we are using a discrete set of universal quantum gates G.

Separable number:

Ω_S := Σ_{p_S} 2^{−|p_S|},

where p_S is a program that describes the network complexity of a separable state |S⟩. To do this sum, we construct all possible separable states and apply the mapping (19) to perform the sum. As the method is constructive, the separable states are obtained on demand.

Entangled (maximally) number:

Ω_E := Σ_{p_E} 2^{−|p_E|},

where p_E is a program that describes the network complexity of a maximally entangled state |E⟩. To do this sum, we fix the accuracy ε, which behaves as an overhead factor; then we construct all possible maximally entangled states and apply the mapping (19) to perform the sum. The decision problem of whether a given constructed state is maximally entangled is solved by computing its Schmidt measure and testing that it is maximal. We take this as an operational definition of a maximally entangled state in this context.

In both sums, the programs p_S and p_E are assumed to be prefix-free in order to guarantee their convergence. The typical behaviours of their general terms are 2^{−N} and 2^{−N 2^N}, respectively. We drop the overhead factor from now on. From the viewpoint of AIT, we may use other, equivalent definitions explicitly in terms of the network complexity:

Ω_S = Σ_{|S⟩} 2^{−H_net(|S⟩)},   Ω_E = Σ_{|E⟩} 2^{−H_net(|E⟩)}.    (29)

The above quantum Ω numbers are introduced relying on the choice of quantum algorithmic complexity in terms of network complexity. Other choices of quantum complexity may lead to different definitions of quantum Ω numbers that may become quantum states32,33 or even quantum operators.

F. Quantum Evolution Scenarios.

We want to compare quantum evolution in a world of maximally entangled quantum organisms with evolution in a classical world, both in the intelligent design and in the cumulative evolution scenarios.

In order to study quantum effects in the evolution scenarios of Sect. I A, (15), (16), (17), we need to define a triplet (organism, mutation, fitness). This is achieved by introducing truncated versions of the quantum Ω numbers in (29), as follows. For separable states, we have

Ω_{S,N} := Σ_{|p_S| ≤ N} 2^{−|p_S|},    (31)

where the sum runs over truncations up to N qubits, corresponding to the construction process described in (27), (29). The quantum separable organism is a lower bound to (31). The key distinctive feature is that the typical size of one element in this truncated sum decreases as 2^{−k}. Thus, the corresponding mutation is defined so as to produce a significant change in the organism,

Ω_{S,k+1} = Ω_{S,k} + 2^{−k}.

Therefore, the analysis of the evolution rates for the quantum evolution scenarios dealing with separable states is similar to that for classical organisms in (15), (16), (17). A separable state can be prepared classically, and thus it can evolve classically: the same treatment as in the classical scenarios of Sect. I A reproduces the same evolution rates.

A different result is obtained with maximally entangled organisms. Now, let us introduce the truncated entangled Ω number as

Ω_{E,N} := Σ_{|p_E| ≤ N} 2^{−|p_E|}.    (33)

This allows us to obtain a quantum version of the triplet (organism, mutation, fitness). In particular, the quantum entangled organism at time-step k is defined by the same process as in Sect. I A, (11), of producing lower bounds, but now with the truncated quantum Ω number (33): the organism at step k yields a lower bound to Ω_{E,N} as defined in (33).

Next, we introduce a mutation that tries to make this quantum organism progress. Significant progress will occur if we try to increase the quantum Ω number (33) according to the typical behaviour of the terms in its sum, which is of the order of 2^{−N 2^N} for spaces of up to N qubits. Thus, we now define an entangled version of the mutation as

Ω_{E,k+1} = Ω_{E,k} + 2^{−k 2^k}.    (34)

Notice that this choice of move in the space of quantum organisms is motivated by the typical behaviour of quantum circuits representing quantum algorithms acting on quantum states. This is the natural scale at which quantum mutations occur at the level of quantum organisms.

The fitness function is determined by the oracle of Ω_E, which decides whether the organism mutated with (34) succeeds or fails, according to the criterion (13).

Now, we have all the ingredients to analyze the rates for different quantum evolution scenarios, mainly with entangled organisms.

1. Quantum Exhaustive Search

As indicated by its name, this strategy searches over all possible classical programs that generate the quantum states available in the Hilbert space of N qubits by means of the mapping (19). For strings of length M, this number grows as 2^M. In turn, the length of these strings is related to the number of qubits as M ≈ 2^N. Thus, as in this evolution scenario each mutation is exhaustive, i.e., it tries every possible quantum organism regardless of the original organism, the evolution rate behaves as

T_N ∼ 2^M ≈ 2^{2^N}.

2. Quantum Intelligent Design

This strategy is like climbing a hill via the optimal path, knowing that path beforehand, in such a way that each step is always better than the previous one: there is no backtracking. A more proper name would be Quantum Optimal Evolution.

Now we have to use the quantum mutations (34). If we produce an optimally ordered sequence of these mutations, k = 1, 2, …, N, we shall reach the first N valid digits of Ω_{E,N} by construction, and then the evolution rate is

T_N ∼ N.

Thus, quantum intelligent design behaves linearly in the number of trials N in a maximally entangled world. This behaviour is the same as that of intelligent design in a classical world (16). Notice that the quantum mutations have a different growth rate than classical mutations, but the evolution time is nevertheless the same: they are optimal.

3. Quantum Cumulative Evolution

This strategy is like climbing a hill when we do not have a priori knowledge of the best strategy to improve the lower bounds of the quantum Ω number. Thus, a natural strategy is to mutate by means of a random walk in the space of quantum mutations given by (34). In this case, the quantum mutations must be drawn at random, and often enough so as to produce the same final quantum organism.

The quantum mutation is characterized by the exponent growth k 2^k; for simplicity, we shall take its leading behaviour 2^k. As we have chosen the network complexity as our measure of quantum algorithmic complexity, we can now use the classical formulas of Methods section III, (45), to estimate the complexity of a quantum mutation associated to a maximally entangled state:

H(M_k) ≈ k + 2 log_2 k.

Its probability is then of the order of 2^{−k}/k^2, and its waiting frequency is k^2 2^k. The total evolution time T_N is of the order of

T_N ∼ Σ_{k=1}^{N} k^2 2^k ∼ N^2 2^N,

which grows exponentially, up to polynomial factors,

T_N ∼ 2^N poly(N).
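This scaling is easy to check numerically. Under the stated waiting frequency, mutation M_k must be drawn of the order of k^2 2^k times before it is accepted, and the total over the N scales is dominated by the last term:

```python
def cumulative_time(N: int) -> int:
    """Total expected number of trials: mutation M_k must be drawn about
    k^2 * 2^k times before the oracle accepts an improvement at scale k."""
    return sum(k * k * 2 ** k for k in range(1, N + 1))

for N in (10, 20, 30):
    ratio = cumulative_time(N) / (N * N * 2 ** N)
    print(N, round(ratio, 3))   # the ratio stays bounded, below 2
```

The bounded ratio confirms that the sum is of the order of N^2 2^N, i.e., exponential up to a polynomial prefactor.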

Thus, quantum cumulative evolution in a maximally entangled world behaves exponentially worse than cumulative evolution in a classical world. Quite likely, it is more favorable to evolve in a classical world than in a quantum world. This may explain why we live in a classical world at the macroscopic level. We should remark that this conclusion does not contradict the fact that quantum algorithms can be more efficient than classical algorithms, since our conclusions refer to algorithmic complexity, while quantum algorithms deal with computational complexity (time and space resources for computation).