As noted, I’ve now turned my attention to thermodynamics, but rather than simply describing the physics alone, I’ve decided to supplement my work with software that actually implements the physical models I’m describing. This software is in turn rooted in my work in A.I., and together they allow sophisticated scientific computing to be executed on cheap consumer devices. In this note, I’ll discuss an observation I made in my original paper on physics about the nature of apparently random behavior.

Sequences of States

In A Computational Model of Time-Dilation [1], I presented a model of physics rooted in information theory that implies the correct equations of physics while nonetheless making use of objective time. The net result is a simple and ultimately correct model of physics that is, even for scientific purposes, indistinguishable from relativity. It has the added benefit of being entirely quantized, which makes it ideal for scientific computing. In the final section of [1], I introduced some tangential ideas that I didn’t want to leave out but didn’t fully unpack, since they were not necessary to the main result of the paper, which is a mechanical model of time-dilation without space-time. Specifically, I noted that what seems to be random behavior could nonetheless be generated by deterministic rules, politely suggesting that perhaps the notion of physical randomness is not really well-defined (see Section 6.4 of [1]).

The intuition is as follows: imagine a simple Newtonian projectile that we’ve observed n times along its path from launch to landing. Thanks to Newton, we can write down an equation that produces a curve with what is, for all practical purposes, a one-to-one correspondence to the measurements of its positions as a function of time, provided that we know its mass and initial velocity. Therefore, we can define a function that takes the initial conditions of the projectile as its inputs and generates each point along the path as its output. Expressed symbolically, the function f(m, v, t) would generate the position of the projectile at time t, given its mass m, initial velocity v, and the time in question. This is how physics is normally done: take a set of initial conditions, and solve for some future state of the system by applying a rule to those initial conditions.
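As a concrete sketch of this "rule of physics" approach, here is a minimal drag-free projectile model in Python. The function names and parameters are mine, chosen for illustration; note that in this idealized, drag-free case the mass actually cancels out of the equations, so only the initial speed and launch angle appear.

```python
import math

def projectile_position(v0, angle_deg, t, g=9.81):
    """Position (x, y) of an ideal drag-free projectile at time t,
    given initial speed v0 (m/s) and launch angle (degrees).
    Mass cancels out once drag is ignored."""
    theta = math.radians(angle_deg)
    x = v0 * math.cos(theta) * t
    y = v0 * math.sin(theta) * t - 0.5 * g * t * t
    return x, y

def sample_path(v0, angle_deg, n, g=9.81):
    """Sample the path at n evenly spaced times from launch to landing."""
    theta = math.radians(angle_deg)
    t_flight = 2 * v0 * math.sin(theta) / g
    return [projectile_position(v0, angle_deg, i * t_flight / (n - 1), g)
            for i in range(n)]
```

Given the initial conditions, any point along the path can be generated on demand, which is exactly the "apply a rule to the initial conditions" picture described above.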

Now imagine instead that we simply created a dictionary of the observations of the projectile over time, where the first entry of the dictionary contains the first observed position of the projectile, the second entry contains the second observed position, and so on. Note that if we take this approach, then we don’t need a rule of physics, because we can instead simply describe the path of the projectile using the indices of the dictionary in order. That is, the path of the projectile can be described by the sequence of integers 1, 2, …, n, since these integers correspond to the entries of our dictionary, which in turn contain the positions of the projectile over the course of its path. Expressed symbolically, D(i) = p_i, which, expressed in plain English, means that the i-th position of the projectile, p_i, is in entry i of our dictionary.
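The dictionary construction can be sketched in a few lines of Python. The observation values below are made up purely for illustration; the point is only that the index sequence 1, 2, …, n replays the path exactly:

```python
# Observed (x, y) positions of the projectile, in observation order.
# (Illustrative values only.)
observations = [(0.0, 0.0), (1.0, 0.8), (2.0, 1.2),
                (3.0, 1.2), (4.0, 0.8), (5.0, 0.0)]

# Dictionary D, with entry i holding the i-th observed position: D(i) = p_i.
D = {i: p for i, p in enumerate(observations, start=1)}

# The path is then fully described by the index sequence 1, 2, ..., n.
path = [D[i] for i in range(1, len(D) + 1)]
assert path == observations  # indexing simply replays the observations
```

The final assertion makes the point in the text explicit: for a simple system, this is just repetition of the observations, with no rule of physics involved.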

If we’re modeling a simple Newtonian projectile, then this is probably a pointless exercise, because we are by definition simply repeating our observations. That is, this approach simply takes a set of observations over time, and indexes them in that same order. But, if we’re modeling a complex system that either doesn’t have a closed-form equation, or is too complex to allow for one to be discovered, then this approach could be useful.

Specifically, we can use the software that I introduced in a previous article to compress the observed states of a system into some small number of what are really macro-states. And then, going forward, we can describe the current state of the system using some small number of macro-states that we can index in a dictionary. This will allow us to say, as a practical matter, that “the system is in some category of states that looks like this”. Moreover, it could allow for patterns to be discovered in the behavior of the system that might be obfuscated at the micro-state level. That is, a system might have some enormous number of micro-states, and so any sequence indexing those states might appear chaotic, whereas sequencing the macro-states might yield periodic, or otherwise regular, behavior that will be easier to identify, simply because we’re now considering a sequence over a much smaller set of integers than would be required to index the micro-states of the system.

So in summary, by compressing the micro-states of a system into some manageable number of macro-states, we can take an intractable set of observations and reduce it to a tractable set of representations. This, in turn, allows us to test hypotheses over a manageable set of macro-state representations. Even if this is still too complex for a human being to make sense of, it might nonetheless be possible to apply machine learning techniques to observed sequences of macro-states, which could allow for the discovery of predictable macroscopic behavior in an otherwise superficially chaotic system.

Expressed in simple terms, we use A.I. to first compress initial observations of a system into some tractable set of macroscopic representations of the system. We then make a subsequent set of observations, and again use A.I. to search those observations for useful hypotheses regarding the macroscopic behavior of the system, which can then be tested experimentally. We can automate the hypothesis testing by using the model of error I introduced in a previous article, which allows a machine to dismiss a hypothesis as simply incorrect, and to focus instead on refining other hypotheses that are merely imprecise.
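One simple instance of this automated hypothesis testing, sketched under my own assumptions: take the hypothesis "the macro-state sequence repeats with period p", measure its error as the fraction of positions where the repetition fails, dismiss any hypothesis whose error exceeds a threshold, and keep the best survivor. The fixed threshold below is a crude stand-in for the model of error from the earlier article, which is not reproduced here.

```python
def period_error(seq, p):
    """Fraction of positions where the macro-state sequence fails to
    repeat with period p; 0.0 means perfectly periodic. Assumes p is
    smaller than len(seq)."""
    mismatches = sum(1 for i in range(len(seq) - p) if seq[i] != seq[i + p])
    return mismatches / max(1, len(seq) - p)

def best_period(seq, max_p, reject_above=0.1):
    """Test the hypothesis 'seq has period p' for each p up to max_p,
    dismiss hypotheses whose error exceeds the threshold, and return the
    best surviving period (or None if every hypothesis is dismissed)."""
    candidates = [(period_error(seq, p), p) for p in range(1, max_p + 1)]
    surviving = [(e, p) for e, p in candidates if e <= reject_above]
    return min(surviving)[1] if surviving else None
```

A perfectly repeating macro-state sequence like 0, 1, 2, 0, 1, 2, … yields a surviving hypothesis with period 3, while an aperiodic sequence leaves no survivors, so the machine can discard that line of inquiry entirely and spend its effort refining hypotheses that are merely imprecise.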