A Simulated Hypothesis: Thoughts on Nick Bostrom's Simulation Argument

This piece is actually quite a long time coming. Back in August of 2017, having recently finished Nick Bostrom's Superintelligence, I became interested in seeking out his Simulation Argument for myself. I did so, and found it rather less sound than I would have expected, given the attention it has garnered over the years and the eager proclamations of non-scientists that "The Matrix is real". I sketched out my thoughts then, but considered them incomplete and shelved them, and am only now getting around to pulling them together.

Bostrom lays out his position in a short paper (Bostrom, 2003), which argues that at least one of the following propositions is true: (1) humans will go extinct before reaching a "post-human" stage (alternatively, no such "stage" is likely to ever occur, with or without extinction, although this view is not expressed by Bostrom), (2) any post-human civilization is extremely unlikely to run any significant number of simulations of their evolutionary history, and (3) we are living in a computer simulation.
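The trilemma rests on a simple piece of observer-counting, which can be sketched roughly as follows (the function, variable names, and numbers here are my own toy illustration of the style of argument, not Bostrom's exact notation or figures):

```python
# Illustrative sketch (my toy numbers, not Bostrom's): the fraction of
# human-like observers who live inside a simulation.
#   f_p    : fraction of civilisations at our stage that reach a post-human stage
#   n_sims : average number of ancestor-simulations such a civilisation runs
# Each ancestor-simulation is assumed to contain roughly as many observers
# as one real, unsimulated history.

def simulated_fraction(f_p: float, n_sims: float) -> float:
    """Fraction of all observers who are simulated."""
    simulated = f_p * n_sims   # simulated histories per real history
    real = 1.0                 # the one unsimulated history
    return simulated / (simulated + real)

# If even 0.1% of civilisations go post-human and each runs a million
# ancestor-simulations, almost every observer is simulated:
print(simulated_fraction(0.001, 1_000_000))   # ~0.999

# Propositions (1) and (2) correspond to driving f_p or n_sims to zero:
print(simulated_fraction(0.0, 1_000_000))     # 0.0
```

The whole force of the argument lies in how quickly the simulated term swamps the single real history, which is exactly why the plausibility of cheap, numerous simulations matters so much to what follows.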


As part of his argument, Bostrom lays out his reasoning for why a simulated reality can even be considered a possibility. Essentially, he asserts that the reality we collectively experience could be entirely simulated, and it follows that the minds within the simulation are themselves simulated. Bostrom relies on the idea of substrate-independence (that a suitably implemented "mind" program running on a computer would "do the trick" of being a mind) and the assumption that such a program need only contain detail down to the level of synapses. I take issue with a number of speculations Bostrom makes in order to increase the persuasiveness of his argument.

Bostrom speculates (or asserts) that creating a believable reality that all observers agree upon would require only the simulation of macroscopic phenomena, with exceptions made for special cases like computers, which rely on microscopic phenomena for their functioning. This is a very crude view of reality. All macroscopic phenomena rely on microscopic phenomena for their functioning. Plants cannot exist as we know them without photosynthesis, a process that involves quantum-level phenomena; simulating a single photosynthesizing leaf is akin to simulating a quantum computer. If high-level approximations of leaves and other such things are possible, then why the exception for computers? Our shared reality is remarkably robust and shows every evidence of being built from the bottom up, with no corners cut. To assert that what we perceive is simply a macroscopic approximation is akin to saying that things aren't there when we aren't looking.
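To put a rough number on why simulating quantum phenomena classically is so daunting, consider the standard textbook observation that the state of n two-level quantum systems requires 2^n complex amplitudes to store. A minimal sketch (the helper function is my own, and the figures are back-of-envelope):

```python
# Back-of-envelope sketch (my own helper): brute-force classical simulation
# of a quantum system scales exponentially, because the state vector of n
# two-level systems (qubits) holds 2**n complex amplitudes.

def state_vector_bytes(n: int, bytes_per_amplitude: int = 16) -> int:
    """Memory needed to store the state vector of n qubits
    (16 bytes per complex double-precision amplitude)."""
    return (2 ** n) * bytes_per_amplitude

print(state_vector_bytes(30) / 1e9)   # ~17 GB: already a workstation's limit
print(state_vector_bytes(300))        # dwarfs the ~10**80 atoms in the
                                      # observable universe
```

A leaf engages vastly more than a few hundred quantum degrees of freedom, which is the sense in which a faithful simulation of photosynthesis is no cheaper in kind than simulating a quantum computer.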

Which leads to the second problem: the occlusion-culling assumption. Bostrom further speculates (or asserts) that the simulation need not keep track of things that are not being observed; microscopic phenomena are 'called' into existence by the act of attempting to observe them. This assumes that a sufficiently accurate simulation of our shared reality can be achieved by a coarse-grained approach that ignores microscopic phenomena. Such an assumption is not at all a given. While many systems (for example, equilibrium systems) are ably described by simple descriptions that ignore their microscopic constituents, many others (non-equilibrium and chaotic systems) do not yield to this approach, and it may be that there are some (indeed many) phenomena for which microscopic modelling is required to reproduce the macroscopic properties we observe. This also raises the question of who the observers are. Are humans the observers? This is implicit in Bostrom's argument, but what of non-human animals? Does the simulation accommodate chimpanzees? Gorillas? Dolphins? Octopuses? If not, where would the 'cut-off' be drawn to establish whether something deserved full simulation? And if non-human animals can be accurately simulated without treating them as real observers, that is, without modelling their mental states, then why can humans not also be modelled so crudely? What is it about the behaviour of a dog that makes it amenable to prediction without modelling its mental states, yet which requires such modelling for humans?

This leads into the third problem: the argument is grossly anthropocentric. Bostrom puts humans at the centre of the universe. Me! Me! Me! Me! Me! We're sooo important. Everything is simulated for our benefit! *Barf* The trouble arrives when one asks why the optimisation should stop with humanity as a whole. Why run an ancestor-simulation and not an individual simulation? Why doesn't Bostrom simply assume that he is the only one whose mental states are simulated and that all other humans are coarse-grained approximations? While full ancestor-simulations might be possible, individual simulations would be vastly less computationally complex, so we could use Bostrom's own reasoning to say that there should be vastly more individual simulations than ancestor-simulations. By all probability, then, Bostrom lives inside a Bostrom-simulation, and he is well and truly alone. Bostrom counters this by saying that unless me-simulations vastly outnumber ancestor-simulations, we are more likely to find ourselves in an ancestor-simulation; he estimates that me-simulations would need to outnumber ancestor-simulations by about 100 billion to one. But me-simulations would be vastly computationally cheaper than ancestor-simulations, so they should vastly outnumber them. Ancestor-simulations might be run to develop interesting individuals, who would then be split off into a huge variety of me-simulations. Bostrom notes that it is not clear that zombie-humans could be simulated, that is, simulations of humans that behave like humans but lack simulated mental states. Fair enough. But it is also not clear that zombie-animals could be simulated, or zombie-plants, or any of the other optimisation tricks implied by the occlusion-culling assumption. Why do we stop to cast doubt on the ability to simulate humans while everything else is, of course, readily simulated? Simple bare-faced anthropocentrism and nothing more.
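The arithmetic behind the me-simulation objection can be made explicit with toy numbers (the costs below are my illustrative guesses, not Bostrom's figures; only the 100-billion-to-one threshold comes from his counter-argument):

```python
# Toy arithmetic (my guesses): an ancestor-simulation models all ~10**11
# human minds in full, while a me-simulation models one mind in full and
# coarse-grains the rest. Integer units keep the arithmetic exact.

ANCESTOR_COST = 10 ** 11   # cost units: one fully modelled mind per human
ME_COST = 1                # one fully modelled mind, the rest approximated

budget = 1000 * ANCESTOR_COST   # compute enough for 1000 ancestor-simulations
me_sims = budget // ME_COST     # ...buys 10**14 me-simulations instead

# Bostrom's stated threshold: me-simulations must outnumber
# ancestor-simulations by ~100 billion (10**11) to one.
ratio = me_sims // 1000
print(ratio, ratio >= 10 ** 11)   # threshold met even with these crude numbers
```

On these (admittedly crude) assumptions the same compute budget buys a hundred billion me-simulations for every ancestor-simulation, which is precisely the ratio Bostrom says would be needed for his conclusion to flip.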

And of course there is a fourth and very troubling problem: the inconsistency paradox. Bostrom argues that the ancestor-simulation need not get everything right the first time; it can use coarse-grained approximations and then correct any mistakes ex post facto by winding back the clock and re-running the simulation with corrections. We can imagine, then, that this simulated reality is filled with much cutting and rewinding of tape. This amounts to a get-out-of-jail-free card with regard to epistemology. We cannot make any predictions about the simulated reality that would contradict the real reality, because were we to test such predictions and discover that we were indeed simulated, the simulation would simply spin back the clock and re-run things, this time making sure to thwart our attempts to discover the machinations behind the curtain. It is a nonsense view that prevents us from knowing anything. We cannot proceed under such an assumption, so by convention it cannot be adopted.

The above position may be unconvincing to some. Might it still be possible that we are in fact inside a constantly revised simulation, even though thinking so does us no good and cannot advance our knowledge? After all, if we or our descendants are able to create simulations in which time can be stopped and rewound at will, might this not undermine our confidence that we do not live in such a simulation? Perhaps not. There may be two equally valid perspectives that seem irreconcilable but nevertheless describe the same situation. The inhabitants of the constant-flux simulation experience a world free of flux, so that from within the simulation the valid perspective is one of no flux, which may well lead to a conclusion of not being in a simulation. Such a perspective might be completely valid, while at the same time the view from outside the simulation would be one of constant flux. Inside the simulation we model reality from the perspective of an inhabitant; outside of it, from the perspective of the observer. The notion of single individuals is lost to the observer, who sees only constantly changing states, with billions of lives continuously being deleted and instantiated. In order to recover the perspective of an inhabitant, the observer must step inside the simulation (metaphorically) and in so doing loses the perspective of many realities being created and destroyed, finding instead only one reality. The simulants may be correct in concluding that they are not in a simulation, as from their perspective such a hypothesis would be nonsensical. It may seem that I am undermining my earlier argument, but we cannot assert a position such as "the universe is a simulation" without assuming the perspective of an observer. We observe the simulation as simulants inside of it, and so our theory must follow.

Bostrom considers the simulation hypothesis verifiable in the sense that those simulating us could make it known, for example by causing text to appear in space reading "You are in a simulation!" This is rather unconvincing. After all, we have already determined that those doing the simulating will take every option, including rewriting our history, to conceal the simulation from us. This is about the same as saying that any god that might exist could reveal itself to us at any time; it simply chooses not to. We need not put any credence in such a theory until such evidence actually appears. Note that this verifiability is different from falsifiability: the hypothesis cannot be disproved, yet there is always the possibility that it might be proved. This behaviour is the opposite of that of a scientific theory; we might call it anti-theoretic.

In fact, the ways in which the simulation theory might be verified would be through "flaws" in its implementation. For example, perfect time-rewind can be neither proven nor falsified, but evidence of imperfect time-rewind might be discoverable in the form of remnant chunks of inconsistent data that gradually resolve to consistency. Determining what form this would practically take is the great difficulty, with a large risk that all sorts of hard-to-detect experimental results would be submitted as evidence of imperfect time-rewind.

Bostrom identifies the fifth problem, though he doesn't recognize it as such: it's turtles all the way down! If we are in a simulation, then there is the possibility that we are in a simulation of a simulation (of a simulation of a simulation (...)). An infinite regress of null, diverting us from any path to knowledge.

Bostrom further argues that because we do not yet have a physical theory of everything, we cannot discount the possibility of breakthroughs that will subvert current physical limitations, such as the speed of light. While technically true, this line of argumentation is at best diversionary. I might say that it is possible that I will walk through that brick wall on my next attempt, and you, lacking a complete theory describing every aspect of the wall and of me at a fundamental level, are incapable of proving that I cannot. You might be unable to provide a complete proof that I will not walk through the wall (indeed, such a proof would be impossible even with a complete theory of the fundamental forces and particles, since you would still be unable to prove that its laws will continue to hold into the future as I walked through), yet you would be a fool not to take my idiotic wager and bet against me. This argumentation is inessential to Bostrom's reasoning but is sprinkled around as diversionary pixie dust, lending the appearance of increased probability to his claims that simulations would be computationally inexpensive for a post-human civilization.

To proceed with any idea of a simulation, one must eliminate the optimisation assumptions: that of occlusion-culling and that of inconsistency. Then one is on firmer footing with regard to an actual speculative idea, rather than a meaningless collection of words masquerading as one. Our careful observations of our reality lead us to conclude that it is everywhere constituted from at least the fundamental particles known to the standard model and that it extends to the observable universe, which extends back in time some 13.8 billion years. A simulation, then, should include all the fundamental particles and forces in the universe from the beginning of time until the present day. That such a simulation could not be represented by anything less than all the particles in the observable universe is evident. The simplest simulation is self-contained - it simply is the observable universe. Our observations and inferences lead us to conclude that we do not live in a simulated reality.

Might our inferences go too far? For example, present knowledge gives us good grounding for supposing that the entire Earth would need to be simulated down to particle scale, but extending this out to the cosmos is perhaps a bit premature. Just as we get ahead of ourselves in supposing that the other terrestrial planets contain all of the geological complexities we see on Earth - mountains, valleys, plate tectonics, etc. - prior to obtaining observational evidence, so too do we get ahead of ourselves when we assume that the far reaches of our observation are subject to the same fine-grained detailing as our everyday experience. As we gather more evidence we may grow in confidence in our application of the Copernican principle, but the possibility remains of at least a significant amount of coarse-graining at cosmic distances. Crucially, this is subject to empirical observation. We can speculate about which distant structures of the cosmos might be coarse-grained as part of a simulation and eventually test such hypotheses with evidence. Each specific simulation optimisation can thus be falsified in turn, until we have built up enough confidence to discount the theory completely.


Therefore, it is possible that we live in a simulation, but the possibility is far less likely than Bostrom apportions it, because such a simulation is not computationally simple. A simulation consistent with all of our present observations of reality is a very robust simulation and would be quite computationally complex. As we continue to probe the largest and smallest scales of the universe, we may lower the probability of the simulation hypothesis through detailed observation. We might also increase the probability that we are living in a simulation through such observations, by discovering limits consistent with the kinds of optimisations we would expect to find in such a universe. Further, even if we are not living in a simulation, our descendants may yet run simulations, though none as deep or rich as the universe we inhabit. Such simulations would include optimisations discoverable by their inhabitants. Bostrom's argument has nothing to tell us about future civilisations. We may say that we are unlikely to be simulations in a universe very similar to our own, given the enormous resources required to achieve such a simulation.

All of Bostrom's various optimisations are intended to ease the computational burden and thereby increase the plausibility of a simulated reality. The more corners that are cut, the more likely it is that a powerful enough computer could be built, or that many such computers could be built, running many simulations. However, even with Bostrom's optimisations (and certainly without them), the computational costs are non-trivial. Bostrom conveniently ignores cooling requirements in his back-of-the-envelope calculations, but at scale these easily become the most limiting demands. A Jupiter-sized computer, a so-called Jupiter-brain, with terrestrial-planet density would have far greater gravity than Jupiter. Intense heat and pressure would prevail below the surface, making even the most advanced computer technology infeasible. Bostrom cannot simply hand-wave away concerns about heat, gravity, and pressure with vague allusions to advanced technology. At sufficient scale these issues become insurmountable and impose severe limits on the scale of computation feasible in our universe.
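The gravity claim is easy to check on the back of an envelope (the figures below are rounded approximations of well-known values, and the calculation is my own sketch, not Bostrom's):

```python
# Back-of-envelope check (my own rounded figures): surface gravity of a
# Jupiter-radius body built at terrestrial (rocky) density.
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
R_JUPITER = 7.0e7   # Jupiter's mean radius, m (approx.)
RHO_EARTH = 5500.0  # Earth's mean density, kg/m^3 (approx.)

mass = (4.0 / 3.0) * math.pi * R_JUPITER ** 3 * RHO_EARTH
g_surface = G * mass / R_JUPITER ** 2

print(mass)        # ~8e27 kg: roughly four times Jupiter's actual mass
print(g_surface)   # ~100 m/s^2, versus roughly 25 m/s^2 at Jupiter's
                   # cloud tops and ~10 m/s^2 at Earth's surface
```

Packing computing hardware at rocky-planet density into a Jupiter-sized volume thus yields several times Jupiter's mass and surface gravity, and correspondingly greater interior pressures, before a single cooling pipe is laid.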


References


Bostrom, N. (2003) 'Are You Living in a Computer Simulation?', Philosophical Quarterly, 53(211), pp. 243-255. Available at: https://www.simulation-argument.com/simulation.pdf (Accessed: 26 Aug 2017).