Disillusioned, I cannot say that I understand how complexity emerges from simplicity, especially in the context of the origin and evolution of the universe. To do that I would need a deeper understanding of time, space and entropy than I currently possess. But I would like to posit a serious idea.

It seems to me that the concept of interaction is vital to complexity. Where there is the possibility and potential for interaction, change occurs. Where there is no possibility of interaction between anything, nothing new can arise; there is no change. So the complexity of a given system cannot increase without the possibility of interactions within itself or with another system. The theorized Heat Death of the Universe is the best example I can think of where reality will be reduced to a static state of non-interaction. No further complexity can emerge from within the universe because there is no further potential for interactions within anything that makes it up.

So far, so (tentatively) good. But how does this help explain the incredible complexity of the universe today, and how this complexity arose from an apparent beginning 13.72 billion years ago? I have another idea. Confession! This is just pure speculation on my part. Woo!

What if the true potential for interaction is infinite and eternal, somehow pre-dating the beginning of our universe and somehow outlasting it too? Via Heat Death our universe will one day become ‘worn out’ and unable to interact with anything. But in an infinite and eternal multiverse, while other universes are born in the same way and die in the same way, the infinite and eternal whole carries on.

So, how does such an eternal multiverse explain the emergence of complexity in our universe? To tackle that question we will need to look at an aspect of Quantum Mechanics. We agree that all theories and models are mathematical constructs used to describe reality. They are not reality itself, whatever that is.
But problems arise when two theories or models appear to be saying radically different things about reality. The tension between QM and GR is a classic example. Both appear to be highly accurate at their own scales, but they stubbornly refuse to ‘talk to each other’. I submit that within QM itself there are similar tensions, as I will now describe.

Mathematical Model A: The Planck Scale. The quantum realm itself is not a smooth and continuous domain but is instead made up of discrete and separated parts, with ‘nothing’ between them.

Mathematical Model B: The Cosmic Scale. The wave function of a quantum system (e.g., a photon) is not confined to a discrete location but is spread across the entire universe.

On the face of it, A and B seem to be describing things that are simultaneously incredibly small and infinitely large. The scale of A is so small that we can only investigate it indirectly. Conversely, the scale of B is so large that we can never investigate it. For the sake of clarity, it’s worth mentioning that the wave function of B is considered to be spread out not just across the observable universe, but across the entire universe – however large that is.

So, A and B appear to be saying radically different things, at least in terms of scale. One solution to this quandary is to redefine what we mean by the words Location, Size, Scale, Time and Space. Steps have already been taken in this direction. Please see the link to Local Realistic Theory on this page: https://en.wikipedia.org/wiki/Quantum_nonlocality

But there is another (purely speculative) way of proceeding that is hinted at by the Many Worlds interpretation of QM. In the multiverse I posited earlier, the number of interactions taking place is always infinite. This means that at any given moment an infinite number of identical copies of the same physicist are performing double-slit experiments. They all appear to see a single photon divide itself and pass through both slits.
To explain this behaviour, they all posit that the wave function of that individual photon is spread across the entire universe, allowing it to behave in this way. But in theorizing this they all run up against the quandary I described in A and B. For all of them this single photon is a discrete quantum entity, yet it is also spread across the entire universe. Model A and Model B appear to be saying radically different things!

However, this quandary can disappear if we posit that such quantum phenomena as tunnelling, superposition and entanglement are not caused by quanta in this universe interacting with themselves or only with other quanta within this universe. What if the realm of the Planck scale is ‘leaky’ and quanta from one universe can hop over into another? Virtual particles are posited to ‘appear’ from nowhere and to go back there again. Could it be that there is no such thing as ‘nowhere’? That there is a constant exchange of quanta between universes?

If so, this is where our infinitely multiplied physicist comes into play. They are not witnessing photons from only their own universe behaving strangely. Instead they are seeing photons from other universes ‘hopping’ over and causing their photons to appear to be in two places at once, to pass through solid barriers, or to be instantaneously connected with other photons many kilometres away. Even if they ‘tag’ individual photons, giving them opposite spins, because there are an infinite number of identical physicists doing the same thing in other universes, their instruments will always appear to detect the correctly tagged photons.

Now to bring this back to complexity and interactions. If complexity emerges from interactions at the Planck scale, and the level of interaction is always infinite in a multiverse, then the question of how complexity emerged in our universe is answered by referring to the multiversal whole.
Our universe gained its complexity as the quanta in the Big Bang fireball interacted with quanta in other universes. The complexity came from ‘outside’, not from ‘within’. It is not a ‘local’ phenomenon but a ‘global’ one.

Thank you. Walter.