Large information processing objects have some serious limitations due to signal delays and heat production.

Latency

Consider a spherical “Jupiter-brain” of radius $R$. It will take maximally $2R/c$ seconds to signal across it, and the average time between two random points (selected uniformly) will be $36R/35c \approx R/c$.

Whether this is too much depends on the requirements of the system. Typically the relevant question is whether the transmission latency is long compared to the local processing time. In the case of the human brain, delays range from a few milliseconds up to 100 milliseconds, and neurons have typical frequencies of at most about 100 Hz. The ratio between transmission time and a “processing cycle” will hence be between 0.1 and 10, i.e. not far from unity. In a microprocessor the processing time is on the order of $10^{-9}$ s and delays across the chip (assuming 10% c signals) $\approx 3\times 10^{-10}$ s, a ratio of $\approx 0.3$.
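As a sanity check, the two ratios can be computed directly (a quick sketch; the delay and cycle figures are the rough order-of-magnitude values used above):

```python
# Transmission-delay vs. processing-cycle ratios (order-of-magnitude values).
C = 3.0e8  # speed of light, m/s

def latency_ratio(delay_s, cycle_s):
    """Ratio of signal transmission time to one local processing cycle."""
    return delay_s / cycle_s

# Human brain: delays ~1-100 ms; a ~100 Hz neuron has a ~10 ms cycle.
brain_low = latency_ratio(1e-3, 1e-2)     # ~0.1
brain_high = latency_ratio(100e-3, 1e-2)  # ~10
# Microprocessor: ~1 cm chip, signals at 0.1 c, ~1 ns cycle.
cpu_ratio = latency_ratio(0.01 / (0.1 * C), 1e-9)  # ~0.3
print(brain_low, brain_high, cpu_ratio)
```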

If signals move at lightspeed and the system needs to maintain a ratio close to unity, then for a cycle time $\tau$ the maximal size will be $R < c\tau$ (or $R < c\tau/2$ if information must also be sent back after a request). For nanosecond cycles this is on the order of centimeters, for femtosecond cycles 0.1 microns; conversely, for a planet-sized system ($R = 6000$ km) $\tau = 2R/c \approx 0.04$ s, a frequency of 25 Hz.
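These size limits follow directly from $R < c\tau$; a small sketch (the 6000 km planetary radius is the figure used above):

```python
C = 3.0e8  # speed of light, m/s

def max_radius(cycle_s, round_trip=False):
    """Largest R keeping one-way (or round-trip) latency within one cycle."""
    r = C * cycle_s
    return r / 2 if round_trip else r

print(max_radius(1e-9))   # nanosecond cycle: 0.3 m scale
print(max_radius(1e-15))  # femtosecond cycle: 3e-7 m, ~0.1-0.3 micron
R = 6.0e6                 # planet-sized system, m
tau = 2 * R / C           # round-trip time: 0.04 s
print(1 / tau)            # 25 Hz
```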

The cycle size is itself bounded by lightspeed: a computational element such as a transistor needs to have a radius smaller than the distance light travels during one cycle, otherwise it would not function as a unitary element. Hence it must be of size $r < c\tau$ or, conversely, the cycle time must be slower than $r/c$ seconds. If a unit volume performs $C$ computations per second close to this limit, $C \approx (1/r^3)(c/r) = c/r^4$, or $r \approx (c/C)^{1/4}$. (More elaborate analysis can deal with quantum limitations to processing, but this post will be classical.)
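The element size then follows from the computation density; a hedged sketch (the $10^{30}$ computations per m$^3$ per second is an arbitrary illustrative value, not a figure from the text):

```python
C_LIGHT = 3.0e8  # speed of light, m/s

def element_size(comp_density):
    """Element radius from comp_density ~ (1/r^3)(c/r) = c/r^4,
    i.e. r ~ (c / comp_density)**(1/4).

    comp_density: computations per cubic meter per second.
    """
    return (C_LIGHT / comp_density) ** 0.25

r = element_size(1e30)   # illustrative density
print(r, r / C_LIGHT)    # element radius (m) and its minimal cycle time (s)
```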

This does not mean larger systems are impossible, merely that the latency will be long compared to local processing (compare the Web). It is possible to split the larger system into a hierarchy of subsystems that are internally synchronized and communicate on slower timescales to form a unified larger system. It is sometimes claimed that very fast solid state civilizations will be uninterested in the outside world since it both moves immeasurably slowly and any interaction will take a long time as measured inside the fast civilization. However, such hierarchical arrangements may be both very large and arbitrarily slow: the civilization as a whole may find the universe moving at a convenient speed, despite individual members finding it frozen.

Waste heat dissipation

Information processing leads to waste heat production at some rate $P$ Watts per cubic meter.

Passive cooling

If the system just cools by blackbody radiation, the maximal radius for a given maximal temperature $T$ is

$R = 3\sigma T^4/P$,

where $\sigma$ is the Stefan–Boltzmann constant. This assumes heat is efficiently distributed in the interior.

If it does $C$ computations per volume per second, the total computations per second are $(4/3)\pi R^3 C = 36\pi\sigma^3 T^{12} C/P^3$ – it really pays off being able to run it hot!

Still, molecular matter will melt above 3600 K, giving a max radius of around $3\times 10^4/P$ km (with $P$ in W/m$^3$). Current CPUs have power densities somewhat below 100 Watts per cm$^2$; if we assume 100 W per cubic centimetre, then $R<29$ cm! If we assume a power dissipation similar to that of human brains ($\approx 1.4\times 10^4$ W/m$^3$), the max size becomes 2 km. Clearly the average power density needs to be very low to motivate a large system.
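The numbers above come from the blackbody balance $R = 3\sigma T^4/P$; a quick check (the brain-like power density is the assumed round figure of $1.4\times 10^4$ W/m$^3$):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def max_radius_blackbody(power_density, T):
    """Balance (4/3) pi R^3 P = 4 pi R^2 sigma T^4  =>  R = 3 sigma T^4 / P."""
    return 3 * SIGMA * T**4 / power_density

print(max_radius_blackbody(1e8, 3600))    # 100 W/cm^3: ~0.29 m
print(max_radius_blackbody(1.4e4, 3600))  # brain-like density: ~2 km
```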

Using quantum dot logic gives a power dissipation of 61,787 W/m^3 and a radius of 470 meters. However, by slowing down operations by a factor $f$ the energy needs decrease by the factor $f^2$. A reduction of speed to 3% gives a reduction of dissipation by a factor $\approx 1000$, enabling a 470 kilometre system. Since the total computations per second for the whole system scales with the size as $R^3 f \propto f^{-5}$, slow reversible computing produces more computations per second in total than hotter computing. The slower clockspeed also makes it easier to maintain unitary subsystems. The maximal size of each such system scales as $1/f$, and the total amount of computation inside them scales as $1/f^2$. In the total system the number of subsystems changes as $(R/r)^3 \propto 1/f^3$: although they get larger, the whole system grows even faster and becomes less unified.
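The scaling can be sketched as follows (a rough model assuming power density $\propto f^2$ for reversible computing and passively cooled radius $\propto 1/P$, with the quantum-dot baseline figures from the text):

```python
def scaled_system(f, base_power=61787.0, base_radius=470.0):
    """(power density in W/m^3, radius in m) when the clock runs at fraction f.

    Assumes reversible-computing power scaling P ~ f^2, and R ~ 1/P.
    """
    return base_power * f**2, base_radius / f**2

power, radius = scaled_system(0.03)  # slow to 3% speed
print(power, radius)  # ~55.6 W/m^3 and ~5.2e5 m: the ~470 km scale above
```

(The text rounds the dissipation factor $1/0.03^2 \approx 1111$ to 1000, hence the slight difference from 470 km.)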

The limit of heat emissions is set by the Landauer principle: we need to pay at least $kT\ln(2)$ Joules for each erased bit. So the number of bit erasures per second and cubic meter will be less than $P/kT\ln(2)$. To get a planet-sized system $P$ will be around 1–10 W/m$^3$, implying $\approx 10^{20}$ erasures per second and cubic meter for a hot 3600 K system, and $\approx 10^{23}$ for a cold 3 K system.
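The Landauer bound can be evaluated directly (taking $P = 5$ W/m$^3$ as a representative value in the 1–10 W/m$^3$ planetary range):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(power_density, T):
    """Maximum bit erasures per second per m^3: P / (k T ln 2)."""
    return power_density / (K_B * T * math.log(2))

print(f"{landauer_limit(5.0, 3600.0):.1e}")  # hot 3600 K system: ~1.5e20
print(f"{landauer_limit(5.0, 3.0):.1e}")     # cold 3 K system:   ~1.7e23
```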

Active cooling

Passive cooling just uses the surface area of the system to radiate away heat to space. But we can pump coolants from the interior to the surface, and we can use heat radiators much larger than the surface area. This is especially effective at low temperatures, where radiative cooling is very weak and heat flows are normally gentle (remember, they are driven by temperature differences: there is not much room for big differences when everything is close to 0 K).

If we have a sphere of radius $R$ with internal volume $V(r)$ of heat-emitting computronium within radius $r$, the surface at $r$ must have area $PV(r)/p$ devoted to cooling pipes to get rid of the heat, where $p$ is the amount of Watts of heat that can be carried away by a square meter of piping. The rest of the area is available for computronium. This can be formulated as the differential equation:

$V'(r) = 4\pi r^2 - \frac{P}{p}V(r), \quad V(0)=0$.

The solution is

$V(r) = \frac{4\pi p}{P}\left[r^2 - \frac{2p}{P}r + \frac{2p^2}{P^2}\left(1 - e^{-Pr/p}\right)\right]$.
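As a check, the closed-form solution can be verified against the differential equation numerically (a sketch with illustrative values $P = 100$ W/m$^3$ and $p = 10^4$ W/m$^2$, which are assumptions, not figures from the text):

```python
import math

def computronium_volume(r, P, p):
    """Closed-form V(r) solving V'(r) = 4 pi r^2 - (P/p) V(r), V(0) = 0."""
    a = P / p
    return 4 * math.pi * (r**2 / a - 2 * r / a**2
                          + (2 / a**3) * (1 - math.exp(-a * r)))

# Verify the ODE at an arbitrary radius via a central finite difference.
P, p, r, h = 100.0, 1.0e4, 500.0, 1e-3
dV = (computronium_volume(r + h, P, p)
      - computronium_volume(r - h, P, p)) / (2 * h)
rhs = 4 * math.pi * r**2 - (P / p) * computronium_volume(r, P, p)
print(abs(dV - rhs) / rhs)  # tiny relative residual
```

For large $r$ the exponential term vanishes and $V(r)$ grows only like $4\pi p r^2/P$: the computronium fraction is limited by pipe area, not volume.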