The Simulation Hypothesis

I was toying with the simulation hypothesis, and it raised questions I wanted to explore. The following is entirely for fun and by no means conclusive.

A few days ago, I was thinking of Cantor’s Diagonal Argument (a proof that there are different sizes of infinity) and connected it to Shannon’s Information Theory (in particular, encoding information and efficient storage) and the Simulation Hypothesis (the idea that our world is a simulation).

The question that came to mind is: What are the limits for simulations?

I find the question interesting having worked on games, trying to create real-time experiences for players, and having seen the scope and challenges involved. There are constraints on building the assets for games, rendering the environment and audio, streaming data (since it can’t all be in memory at once), computing the next frame from the previous, and so on.

Simulations are bounded by what the substrate (whatever is doing the simulating) is capable of processing. For example, if we are in a simulation and our substrate is finite, we are necessarily finite.

This is where real numbers come to mind. If space is continuous instead of discrete, then encoding even a single position could require infinite storage. If time is continuous, computing change is similarly non-trivial. An alternative is to rely on a proxy representation drawn from the simulating substrate itself (in which case that substrate could be reflected in the simulation).
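For a concrete sense of the storage problem, consider how computers already cope with it: a 64-bit float keeps only 53 bits of significand, so even a position as simple as 0.1 is stored as a nearby approximation. A minimal Python sketch:

```python
from decimal import Decimal

# A 64-bit float holds 53 significand bits, so most real-valued
# positions are rounded to the nearest representable number.
x = 0.1
print(Decimal(x))        # 0.1000000000000000055511151231257827021181583404541015625
print(0.1 + 0.2 == 0.3)  # False: the rounding errors accumulate
```

Storing the *exact* continuous position would take infinitely many bits; the finite encoding is always an approximation.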

To demonstrate the difference, the first is like computer games, where we represent the world as information stored in the computer. The second is like a projection of a real thing, like a shadow of a tree, where there is a tree, but we only get to observe the shadow as part of the simulation.

A continuum would make a simulation extremely unlikely due to storage and computation issues.

An alternative that came to mind to avoid real numbers is to use algebraic numbers instead (numbers that can be expressed as roots of polynomials with rational coefficients). Algebraic numbers only work if all the underlying mathematics of the simulation is algebraic. The algebraic scenario is interesting because any phenomenon that involves non-algebraic behaviour may indicate that a simulation isn’t practical.
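What makes algebraic numbers attractive for storage is that each one can be recorded exactly in finite space as a polynomial plus an isolating interval, and refined to any precision only when needed. A sketch of that idea (the `refine_root` helper and its representation are my own illustration, not a standard scheme), using exact rational arithmetic so no precision is lost:

```python
from fractions import Fraction

# An algebraic number can be stored exactly as (polynomial, isolating interval)
# and refined on demand. Here sqrt(2) is "the root of x^2 - 2 between 1 and 2".

def refine_root(poly, lo, hi, steps):
    """Bisect an interval known to contain exactly one sign-changing root.
    `poly` lists coefficients from the constant term up."""
    f = lambda x: sum(c * x ** i for i, c in enumerate(poly))
    lo, hi = Fraction(lo), Fraction(hi)
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid  # root lies in the lower half
        else:
            lo = mid  # root lies in the upper half
    return lo, hi

lo, hi = refine_root([-2, 0, 1], 1, 2, 30)  # x^2 - 2 = 0
print(float(lo), float(hi))                 # brackets 1.41421356...
```

The stored description is a handful of integers, yet it pins down the number exactly; only the *query* for more digits costs computation.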

For example, we have phenomena that act as waves (which, in pure form, cannot be expressed algebraically) and particles. The complexity could be reduced to something algebraic and computable, for example by using the Taylor series (https://en.wikipedia.org/wiki/Taylor_series) expansion for sine waves and computing only a finite number of terms.
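The truncated-Taylor idea is easy to make concrete: each partial sum uses only additions, multiplications, and divisions, so a finite number of terms stands in for the transcendental sine. A minimal sketch:

```python
import math

def sin_taylor(x: float, terms: int = 10) -> float:
    """Approximate sin(x) with a truncated Taylor series:
    sin(x) = x - x^3/3! + x^5/5! - ..."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
    return total

print(sin_taylor(1.0))  # close to math.sin(1.0) after only 10 terms
```

Near zero a handful of terms already matches `math.sin` to many digits; a simulation could pick the term count to suit whatever precision observers can actually detect.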

Forces such as gravity are a more concerning example. As the distance over which a force acts increases, the precision required (and the corresponding storage) to model it grows. To constrain forces, the simulation would need to be finite in scope.
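Games hit a toy version of this precision-at-distance problem with fixed-width floats: absolute precision degrades with magnitude, so the tiny nudge a distant body receives can vanish entirely. A sketch, assuming 64-bit float coordinates:

```python
# Fixed-width floats spend their bits on significant digits, so the
# same small displacement that registers near the origin is silently
# absorbed at a large coordinate.
near = 1.0
far = 1.0e16    # a coordinate 16 orders of magnitude larger
delta = 1.0e-3  # a tiny push from a distant force

print(near + delta == near)  # False: the nudge is recorded
print(far + delta == far)    # True: the same nudge is lost entirely
```

To keep every long-range interaction faithful, the bit budget per coordinate has to grow with the span of the simulated space, which is the storage blow-up described above.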

Even with algebraic computation, the scope of simulating the universe (our galaxy, or even our solar system) is a phenomenal feat. The most practical simulation would be individual thinking beings and manipulating perception, like Descartes’ evil demon.

Besides representation, there is the problem of computation. How is each ‘step’ of the universe computed from the previous one? If the substrate is as vast as ours, the computing resources would need to be even more immense to process it efficiently, particularly for interactions at great distances. To remedy the distance problem, with an arbitrary number of dimensions at your disposal, you could arrange the data within a fixed distance of the origin, using as many dimensions as needed. However, that shifts the computational challenge to navigating the volume.
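One way to see the extra-dimensions trick (this framing is my own illustration of the idea above): each new axis adds two unit vectors, so in d dimensions you can place 2d points, all at distance 1 from the origin and none farther than 2 from each other, with no limit as d grows.

```python
import itertools
import math

def axis_points(d):
    """The 2*d unit vectors (+/- each axis) in d dimensions --
    every one exactly distance 1 from the origin."""
    pts = []
    for i in range(d):
        for sign in (1.0, -1.0):
            p = [0.0] * d
            p[i] = sign
            pts.append(p)
    return pts

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

pts = axis_points(100)
print(len(pts))  # 200 points packed at distance 1 from the origin
print(max(dist(a, b) for a, b in itertools.combinations(pts, 2)))  # 2.0
```

The catch is exactly the one noted above: everything is geometrically close, but finding the neighbours you need in a 100-dimensional volume is itself an expensive search.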

Those were my musings. Some of these challenges might have interesting solutions, but that’s a project for another day.
