More on STEM Compression as the apparent driver of universal accelerating change (geochemical, biological, cultural, and technological) can be found in the following paper:

Evo Devo Universe? A Framework for Speculations on Cosmic Culture (PDF), 2008-10.

Feedback, edits, and critiques are always appreciated.

STEM Compression: A Brief Introduction

The developmental history of human civilization, life on Earth, and the universe itself may be elegantly summarized as doing more (computation, or matter-energy transformation), better (more intelligence, innovation, interdependence, immunity, and informational inertia (meaning) in leading complex systems), with less (universal resources per standard computation or transformation). Here on Earth, this process has become so advanced that it seems just a few centuries hence humanity's descendants will be capable of doing "almost everything" with "virtually nothing" in terms of physical resources. All that we care about will be done on nano- and femtoscales in physical space, and in incredibly complex and sublime simulations (consciousness is one such simulation) in virtual space.

I call this process STEM (Space, Time, Energy, and Matter/Mass) compression, a term that combines both the increasing STEM efficiency (of standardized computation or physical transformation) and the increasing STEM density of the most complex adaptive systems over the history of universal development. STEM compression appears to be an unrealized attractor for the leading edge of complexity development, of emergent hierarchical intelligence, in the universe. A recent book that independently describes and provides good examples of this process, but without exploring its longer-run or cosmic implications, is energy expert Robert Bryce's Smaller Faster Lighter Denser Cheaper, 2014.

The earliest scholarly writing I have been able to find on this concept comes from architect and futurist Buckminster Fuller, who in 1938 (Nine Chains to the Moon) described the process of "ephemeralization," a move of nature away from physicality and toward informational abstraction, and specifically, the use of less energy, volume, time, and mass "per each given level of functional performance."
Thus Fuller saw STEM efficiency (per standard computation or physical transformation, however we define it), yet he missed the concept of STEM density. With respect to STEM density, the engineer Adrian Bejan (Shape and Structure, 2000; Constructal Theory of Social Dynamics, 2007) has documented the relentless optimization of thermodynamic efficiencies (spatiotemporal, energy, and matter flow densities; entropy minimization) in natural and social systems. Astrophysicist Eric Chaisson (Cosmic Evolution, 2001) has discovered that late-emerging complex dissipative structures have exponentially greater energy densities than earlier-emerging structures. Quantum physicist Seth Lloyd (Ultimate Physical Limits to Computation, 1999) has even extrapolated this density trend to its physical limit in our universe, a black hole.

STEM compression is the term I suggest we use to combine the observations of STEM efficiency and density in leading-edge complex adaptive systems in the universe. Such systems are always undergoing exponential or greater growth in their efficiency or density of space, time, energy, and matter utilization, for reasons that are as yet quite poorly understood. As a consequence of STEM compression, I believe we can say the following. Inner space, not outer space, is the apparent constrained developmental destiny of increasingly complex systems in the universe. Here we mean inner space both in terms of 1) computational complexity (e.g., the human and computer "minds" are where complexity and processes of change increasingly "go"), and 2) increasingly more localized zones of space and time being the ideal ecological niches for Earth's future intelligence. A black-hole-equivalent transcension, not lightspeed expansion, seems likely to be the developmental destiny for the future of intelligence on all Earth-like planets. For more on this quite speculative concept, see the developmental singularity hypothesis.
To assemble incremental evidence for this mechanism, let us look at STEM compression from each of the four partially separable STEM perspectives: space, time, energy, and matter.

1. Space Compression: Locality

Perhaps the most obvious universal developmental trend of these four is space compression, or locality: the increasingly local (smaller, restricted) spatial zones within which the leading edge of computational change has historically emerged in the hierarchical development of universal complexity. Consider how the leading edge of structural complexity in our universe has transitioned from universally distributed early matter, to galaxies, to replicating stars within galaxies, to solar systems in the galactic habitable zone, to life on the surface of special planets in that zone, to higher life within the surface biomass, to cities, and soon, to intelligent technology. Each transition to date has involved a sharply increasing spatial locality of the system environment (Smart 2000).

Consider biogenesis, the emergence of life on Earth. It once looked like life emerged in a warm pond and expanded outside its original computational environment into a larger spatial envelope. But more recent evidence (read Paul Davies, The Fifth Miracle, 2000 for an accessible account) strongly suggests that the cooling Earth, in toto, is best thought of as a catalyst for the emergence of archaebacteria, presumably in geothermal vents. Sulfide-using life sprang forth as the Earth's crust itself was cooling, implying the entire planetary system was a geological catalyst primed for this emergence. Exactly where did life emerge in this complex adaptive geophysical system? In a local subset of Earthspace, specifically on the "sliver of surface," between magma and vacuum, that we call home. Consider next the emergence of plant life. In another popular misconception, plants (and then tetrapods) "pioneered" the Earth's crust.
But in reality, aerobic, anaerobic, and archaebacteria were there long before them, running perhaps miles deep all across the planet, as well as miles into the atmosphere. So where did these computationally accelerated new forms arise? Within a further restricted subset of the original developmental space.

Now consider the emergence of human civilization. At the planetary-cultural level, scholars have noted space compression due to digital networks, sensors, effectors, memory, and computation (Broderick 1997; Kurzweil 1999), described as the 'end of geography' (O'Brien 1992) or the 'death of distance' (Cairncross 1998). This is a real developmental trend, and it impacts future choices for human cultural evolution in ways we are just beginning to extrapolate. In perhaps the most obvious misconception, we sometimes think of humans as spatial "pioneers" in comparison to the biota that spawned us. But intelligent humans have not, and if the STEM compression trend continues, will never venture beyond the biosphere in an autonomous fashion. In each case, we see the next emergent substrate occupying a tiny spatial subset of the previous one. So it will soon be with tomorrow's artificially intelligent technology, which will model the birth and death of the universe using highly miniaturized, energy-efficient, and local technology.

2. Time Compression: Sagan's Cosmic Calendar

Carl Sagan observed in his groundbreaking Cosmic Calendar metaphor (Dragons of Eden, 1977) that when we look back over our own evolutionary development in informational terms, we are struck by the clearly accelerating succession of information-processing emergences (e.g., galactic, stellar, planetary-molecular/chemetic, cellular/genetic, neurologic, cultural/memetic, and technologic/technetic "intelligence" eras) in universal time. Experts may disagree on boundary definitions, or specifically, what physical-computational structures represent the next important emergence at any point in the chain.
More recently, technology scholar and systems theorist Ray Kurzweil (2005) has compiled more than fifteen (at least partially) independent accounts of emergence frequency for 'key events' in Earth and human history, in an attempt to demonstrate that though the event selection process in each case must be subjective, the acceleration pattern seen by independent observers is apparently not. These are all examples of what we might call the "time compression" trajectory of universal development. Explaining this accelerating succession may be the most important challenge of our era. We live on the threshold of a coming singularity in these successions, as observed from our unmodified biological perspective. As evidence of this, technological change has already become near-instantaneous at the circuit-electron level in a variety of our silicon systems, and in coming years is sure to become effectively (never actually, of course) instantaneous at progressively higher levels of machine intelligence.

Plants, Modern Human Society, and Tomorrow's AIs Appear to Have Roughly Equivalent Scalar 'Distance' Between their Intrinsic Learning Rates

How time compressed is the emergent substrate of postbiological intelligence likely to be, relative to human culture? Consider the ten millionfold difference between the speed of biological thought (roughly '150 km/hr' chemical diffusion in and between neurons) and electronic thought (near-speed-of-light electron flow). The scalar distance between Phi-measured learning rates (a topic we will explain shortly) of modern technological society (perhaps 10^7 ergs/s/g) and tomorrow's autonomous computers (perhaps 10^12 ergs/s/g) is roughly the same as the difference between modern society and plants. In other words, to self-aware postbiological systems, the dynamics of human thought and culture may be so slow and static by comparison that they will appear as immobilized in space and time as the plant world appears to the human psyche. All of our learning, yearning, thinking, feeling, all our biological desires to merge with our electronic extensions, or to pull their plugs, must move forever at plantlike pace relative to postbiological intelligences. Furthermore, such intelligences are far less computationally restricted, with their near-perfect memories and their ability to create variants of themselves, reintegrate at will, and think, learn, share, and experiment in virtual space at the universal speed limit, the speed of light.

To be sure, as evo devo systems they must also be bound by developmental cycling and death, but for such systems death comes as archiving or erasure of poorly adapted intelligence architectures and redundant or harmful information, or the death-by-transformation seen in any continually growing system. On first analysis, such processes seem far less informationally destructive and subjectively violent than the death we face. We may be dismayed by such comparisons, yet such prodigious leaps in the critical rates of change for new computational substrates are apparently built into the special physics of our universe.
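As a rough arithmetic check on the ten-millionfold figure above (an illustrative sketch; the 150 km/hr neural signalling speed is this essay's own round number, not a measured constant):

```python
# Rough check on the 'ten millionfold' speed gap between chemical
# (neural) and electronic (near-lightspeed) signalling.
SPEED_OF_LIGHT_KM_HR = 299_792.458 * 3600  # km/s converted to km/hr, ~1.08e9
NEURAL_SIGNAL_KM_HR = 150                  # the essay's round number

ratio = SPEED_OF_LIGHT_KM_HR / NEURAL_SIGNAL_KM_HR
print(f"{ratio:.1e}")  # ~7.2e+06, i.e. roughly ten millionfold
```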
More than anything else, these leaps define the one-way, accelerating, and developmental nature of the universe's leading evolutionary computational processes over the long term. Discovering such preexistent paths for computational acceleration and efficiency seems the developmental destiny of universal intelligence, though the creative evolutionary paths taken to such destiny are never predictable, and each path adds its own unique value.

3. Energy Compression: Chaisson's Phi

Eric Chaisson, in Cosmic Evolution (2001), has described universal development in terms of hierarchical levels of emergent complexity, each of which employs orders of magnitude greater free energy rate density (Phi) than the previous system from which it emerged. Chaisson's work provides a very helpful quantitative measure of energy flow density acceleration over time in dissipative structures. This "energy compression" trajectory appears statistically directional, or developmental. Chaisson has shown that energy-dissipative CAS can be placed on a universal emergence hierarchy, from galaxies to human societies and beyond, with the most accelerated new systems, our electronic computers, having roughly seven orders of magnitude (ten millionfold) greater energy rate density than human culture.

'Free energy' is energy available to build structural complexity (von Bertalanffy 1932; Schrödinger 1944). This measure can be related to both marginal entropy production (Kleidon and Lorenz 2005) and dynamic complexity (Chaisson 2003), or, marginal learning capacity of the dissipative structure. Note that Chaisson's list is a mix of both autonomous and nonautonomous CAS (planets are dependent on stars for replication; computers are, presently, dependent on human society for replication). Note also that replication (life cycle) always seems necessary for learning-by-dissipation, assuming galaxies replicate as dependents on their parent universes, in the multiverse.
Below are Chaisson’s estimates for Phi (free energy rate density) for a set of semi-discrete complex adaptive systems in our universe (units are ergs/sec/g). Note that this is not an exponential, but a superexponential function, implying some universal limit will be reached relatively soon in astronomical time. This limit, for energy density trends, is of course a black hole. For further consideration of the implications of intelligent civilizations engaging in this apparently universal developmental process, see the Developmental Singularity Hypothesis, a proposal that considers the future of intelligent civilization under STEM compression constriants. Free energy rate density values in emergent hierarchical CAS.

When the accelerating curve of dissipation rate begins is not yet clear.

We draw it beginning at matter condensation (10^5 yrs) to the present.

(Adapted from Chaisson 2001).

Complex Adaptive System      Phi (energetic 'learning rate')
Galaxies (Milky Way)         0.5
Stars (Sun)                  2
Planets (Cooling Earth)      75
Ecosystems (Biosphere)       900
Animals (Human body)         2x10^4
Brains (Human cranium)       1.5x10^5
Society (Modern culture)     5x10^5
Modern engines               10^5 to 10^8
Intel 8080 (1970's)          10^10
Pentium II (1990's)          10^11

Extrapolating now to the nearer future, we can expect fully autonomous computers to have Phi values of at least 10^12, seven orders of magnitude greater than human society (~10^5). Even today, our global set of electronic computing systems is presently learning information about the universe, encoding knowledge from human-aided, quasi-evolutionary searches, as much as ten millionfold faster than human society, albeit still in narrow ways and only for intermittent periods. However, if tomorrow's best commercial computers increasingly improve themselves (self-provision, self-repair, self-evolve), as many designers expect they must, they will be able to exploit their greatly superior learning rate on a general and continuous basis, escaping the present need for human manufacturers and consumers in each upgrade cycle. This assumes that quasi-organic, self-improving computers can be selected for stability, productivity, and deep symbiosis with humanity, just as our domestic animals have been over the last 10,000 years (5,000 breeding cycles), organisms whose brain structures are also a complete mystery to us. This assumption will certainly be carefully and empirically tested in coming generations. If in turn evolutionary experimentation by computers in ultrafast digitally simulated environments is an increasingly useful proxy for experimentation in slow physical space (a topic we consider in the longer version of this paper), we can begin to understand how ten-millionfold-accelerated computers might recapitulate our 500 million years of metazoan evolutionary developmental learning in as short a period as 50 years.

Turning briefly to computational structure, a universal energy efficiency trend can be observed in the progressively decreasing 'binding energy' levels employed at the leading edge of evo devo computation.
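The order-of-magnitude arithmetic behind the Phi comparisons and the 50-year recapitulation claim above can be sketched directly (a minimal illustration; the 10^12 figure for fully autonomous computers is this essay's extrapolation, not a Chaisson estimate):

```python
import math

# Selected Phi estimates (erg/s/g), per Chaisson's table, plus the
# extrapolated value for fully autonomous computers (not Chaisson's).
phi_society = 1e5                # round-number baseline used in the text
phi_autonomous_computers = 1e12  # extrapolation

# Seven orders of magnitude between society and autonomous computers:
print(math.log10(phi_autonomous_computers / phi_society))  # 7.0

# At a ten-millionfold (10^7) learning-rate advantage, 500 million
# years of metazoan evo-devo learning compresses to:
print(500e6 / 1e7, "years")  # 50.0 years
```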
As some examples show (adapted from Laszlo 1987), each newly emergent substrate in the quintet hierarchy has greatly decreased the binding energies it uses to store and process information in its physical structure, allowing far greater energy (and space, time, and matter) efficiency of computation:

Hierarchy   Computing Substrate   Binding System / Computation 'Mechanics'
Physics     Matter                Nuclear exchange (strong forces)
Chem        Molecules             Ionic and covalent bonds (electromagnetic forces)
Bio         Cells                 Cell adhesion molecules, weak peptide bonds
Socio       Brains                Synaptic weighting, neural arborization
Tech        Computers             Gated electron flow, single electron transistors
Post-Tech   Black holes           Gravitons? (Note: Gravity is the weakest of the known forces. Dark energy is weaker, but it is repulsive, not binding.)

Finally, energy (and space, time, and matter) density and efficiency may be considered through the framework of Adrian Bejan (2000) and his constructal law, which proposes that for any finite-size system to persist in time (to live), "it must evolve [and develop] in such a way that it provides ever-easier access to the imposed currents that flow through it." Constructal theory, a type of operations research, seeks to describe developmental limits on evolutionary action in nature, describing 'imperfectly optimal' conditions for animate and inanimate flow systems, and championing both the emergence of and boundaries to all fractal (self-similar) hierarchies in physical systems.

4. Matter Compression: Life's DNA and Drexler's Nanotechnology

In one sense, we can understand the human organism, and the DNA-guided protein synthesis and other molecular machinery on which we are based, as the most effective product yet of billennia of encoding of evolutionary intelligence in highly miniaturized molecular systems. Early life and pre-life forms must have been far less genomically and cellularly efficient and dense, and the DNA folding and unfolding regimes in every eukaryotic (vs. prokaryotic) cell are a marvel of material compression (efficiency and density of genetic computation) which we are only now beginning to unravel. Consider also the density and efficiency of social computation (increasing 'human biological and material flow density') in a modern city, vs. early nomadic and pretechnologic humans.
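DNA's material compression can be given rough numbers with a back-of-envelope sketch (an illustration, not from the original text; it assumes the ~3.2 billion base pair haploid human genome, ~2 bits per base pair, and an average mass of ~650 daltons per base pair):

```python
# Back-of-envelope estimate of DNA's information storage density.
AVOGADRO = 6.022e23      # molecules per mole
BASE_PAIRS = 3.2e9       # approximate haploid human genome (assumed)
BITS_PER_BP = 2          # four bases -> 2 bits each
DALTONS_PER_BP = 650     # average base-pair mass, g/mol (assumed)

genome_mass_g = BASE_PAIRS * DALTONS_PER_BP / AVOGADRO
bits_per_gram = (BASE_PAIRS * BITS_PER_BP) / genome_mass_g

print(f"genome mass ~ {genome_mass_g:.1e} g")   # a few picograms
print(f"density ~ {bits_per_gram:.1e} bits/g")  # ~10^21 bits per gram
```

On these assumptions a single genome copy weighs only a few trillionths of a gram, yet the substrate stores on the order of 10^21 bits per gram, orders of magnitude beyond today's electronic storage media.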
Note the matter compression (increasing efficiency and, to a lesser degree, growing physical density) in the technological substrate itself, in Moore's and a large family of related 'laws' in electronic computing, in emerging nanotechnology, optical, quantum, and now single-electron transistor devices, and in the most plentiful and powerful universal energy source known, nuclear fusion. As Eric Drexler first explored in Engines of Creation (1986), technological systems (his example was the "rod logic computer") have tremendously greater capacity to compute and to perform physical processes (sensing, storage, fabrication, disassembly) than biological systems. As with artificial intelligence, which has greatly exceeded human capacity in only very narrow ways today, our technologies have also greatly exceeded human physical capacities in many narrow ways. But the most powerful and intelligent of all technological capacities consistently come from a program of miniaturizing and matter-compressing them as effectively as possible. Finally, consider the extreme matter compression (efficiency and density) in the black hole forming processes that led to our initial cosmic singularity, if Lee Smolin's Cosmological Natural Selection hypothesis is correct, and if black hole formation is in our local future, as the speculative but interesting Developmental Singularity hypothesis suggests.

5. STEM Density and Efficiency - Anti-Kardashev Measures of Complexity Development

Integrating space, time, matter, and energy processes, let us briefly consider a brain, a social organization, and a planet, to see if we can identify STEM density and STEM efficiency growth in each as they progress through their life cycles. Human brains, as they learn any algorithm, must increase synaptic connectivity, causing greater material, spatial, and temporal density at the circuit and protein-complex level, and this allows them much greater energy efficiency per learned algorithm.
As social organizations, we use languages and artifacts to communicate, compete, and cooperate. Our languages grow increasingly information-dense at the social level (social vocabulary grows in complexity, total corpus, and technical subsets; level of abstraction rises; and speed of communication increases), and our artifacts and social networks grow greatly in complexity and density (we move from villages with simple tools to modern, highly STEM-dense cities with advanced automation). STEM efficiency also accelerates (energy use per instruction in electronic computers declines exponentially with time, see Richards and Shaw, 2004; technical productivity per worker grows exponentially, at 2-9%/year in most countries today; cities are much more STEM-efficient than villages at providing almost any type of social good; etc.).

Considering the long-term, postbiological future of our planet, we can envision megacities of "living" computational machinery, carpeting Earth like a technological neocortex, with robotic sensors and effectors ranging throughout the solar system. This would be a global brain of vastly greater STEM density and efficiency of computation than anything that presently exists, and a community of entities that fully absorbs and exceeds our biological humanity. As I discuss in the developmental singularity hypothesis, such an entity, as its density grows, may seem increasingly like a black hole to external observers.

Futurists, engineers, and physicists frequently champion the Kardashev scale, which proposes that growth in the amount and spatial scale of energy use (planet, sun, then galaxy) is an appropriate metric for future levels of civilization development. But if STEM compression exists, this "expansion hypothesis" is 180 degrees out of phase with the vector of universal complexity development, which is transcension, not expansion.
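For contrast with the density-and-efficiency view, the conventional Kardashev scale can be sketched using Sagan's standard interpolation formula, K = (log10 P - 6) / 10, with P in watts (the formula and the Type I-III benchmarks are standard, not from this essay; the ~2x10^13 W figure for present human power use is an approximate, assumed value):

```python
import math

def kardashev_k(power_watts):
    """Sagan's continuous Kardashev rating: K = (log10 P - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

# Canonical benchmarks: Type I ~10^16 W, Type II ~10^26 W, Type III ~10^36 W.
print(kardashev_k(1e16))  # 1.0
print(kardashev_k(1e26))  # 2.0
print(kardashev_k(2e13))  # ~0.73 -- humanity today, on this expansionist metric
```

The point of the contrast: this metric rewards only total power captured, while an anti-Kardashev metric would instead track how densely and efficiently that power is used.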
Cosmologist John Barrow, in Impossibility (1998), has usefully proposed an anti-Kardashev scale, where the appropriate metric for civilization complexity is not total energy use but the miniaturization of a civilization's engineering. The developmental singularity hypothesis is a variant of Barrow's perspective which proposes that the STEM density and STEM efficiency of our physical and computational engineering are the best metrics for an anti-Kardashev scale. Miniaturization is a good proxy for this, as the closer we approach engineering on the Planck scale, the greater the densities and efficiencies of our engineered objects. It is our increasing approach to black hole level densities, and the black hole's unique computational efficiencies (Seth Lloyd, 1999), that truly measures civilization development.

Our historical human era of planetary exploration may appear, on untutored examination, like a journey "outward," but actually, no new zones of space have ever been colonized, in an autopoietic fashion, by the efforts of later, more complex organisms arriving on the scene. In other words, the trajectory of hierarchically developing universal complexity has never actually involved a true journey out, in the cosmological sense. Even the cyclic birth and death of suns in supernovas is best seen as an initially galactic-scale event that rapidly creates locally interesting, high-metallicity solar systems within which further development occurs. And once biological intelligence emerges, all the really interesting computation occurs on one special planet per habitable solar system, on a sliver of surface between magma and vacuum that we call home. All of Earth's human explorers have been part of a largely unconscious effort to wire up an already verdant Earth into one global technological intelligence, making our world smaller, not larger.
Today's intelligent bipeds colonize only a small fraction of the space inhabited by our bacterial ancestors, who dwell at least six miles deep in our crust and two miles up in the clouds, and who have even left Earth entirely, transported to neighboring planets as spores on impacting meteorites billennia ago. The superexponential 'developmental' trajectory is always, on average, relentlessly inward, even as 'evolutionary' individuals regularly do exactly the reverse, using their own lives as experiments. This fundamental constraint, this overwhelming developmental vector toward inner space, has been overlooked for many years. It is my hope that this will change in coming decades.