Why There Can Be No Reliable Relativistic Hypercomputers


Figure 1: Simple model of computation in an M-H space-time.

A hypercomputer is able to finish infinitely many computations in finite time. Any user could hence quickly obtain the answer to questions no Turing machine can settle in finitely many steps, for instance whether Goldbach's conjecture holds. Given our laws of physics, such devices cannot be built. However, there are theories of hypercomputation that make use of relativistic space-time curvature. These machines would nonetheless have to operate for an infinite amount of time in their little pocket of the universe, which requires a rather reliable machine. Here I argue against such reliability. I start by examining the physical Church-Turing (PhCT) thesis and its interplay with supertasks and hypercomputation. I then introduce Piccinini's usability constraint for testing the viability of possible counterexamples to PhCT and examine relativistic hypercomputation against it. I propose ontic pancomputationalism as a possible route to an infinitely persisting computation, but note that now it is the observer, rather than the machine, who perishes, and the computed function becomes unusable. I conclude that on the reliability constraint alone, all hypercomputation will fail in nomologically accessible worlds.

1 – The physical Church-Turing Thesis and Hypercomputation

One of the most significant discoveries in early computer science was the identification of the class of problems that can be solved by an effective method with those that are computable by a Turing machine (TM) (Copeland, 2002: 2). An effective method can be carried out by mathematical procedures expressible algorithmically (Ord, 2006: 143). A TM is a simple apparatus that transcribes inputs into mathematical representations that can be processed until an output is produced (ibid). TMs hence do things doable by "rule of thumb" or that are "purely mechanical" (Turing, 1948: 7). The claim that all effectively computable functions are Turing computable is the mathematical Church-Turing thesis (Piccinini, 2011: 734; see original Kleene, 1967: 232). Recently, this proposal has been expanded to physical systems by asking whether the limits of physical computation are also captured by the limits of TMs (Hogarth, 1994: 133; Piccinini, 2011: 734). This equation of physical and TM computability is the PhCT thesis (for an analysis of different modal strengths, see Piccinini, 2011: 753). To the best of our knowledge, every computing device constructed so far respects this inherent limitation. The question I would like to tackle in this post is whether there can be physical systems that challenge PhCT by having computing power that surpasses that of TMs, that is, hypercomputers. For my purposes I shall concentrate on a specific class of these: machines capable of processing infinitely many steps in finite time (Ord, 2006: 144), whose computations thereby cannot be completed with paper and pencil in finitely many steps (Copeland, 2003: 251).
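To make the notion concrete, a TM can be sketched in a few lines of code: a transition table mapping (state, symbol) pairs to an action, a movable head, and a tape. The toy program below is my own illustration, not drawn from any of the cited sources; it increments a binary number.

```python
# Minimal Turing machine: a transition table mapping (state, symbol) to
# (new symbol, head move, new state). Purely illustrative; the post's TMs
# are the standard abstract devices, not this toy.

def run_tm(rules, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape indexed by integer position
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Toy program: increment a binary number. Scan right to the end of the
# input, then carry leftwards.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}

print(run_tm(rules, "1011"))  # 1011 (11) + 1 = 1100 (12)
```

Anything such a device cannot do in finitely many steps is, by PhCT, physically uncomputable; that is the claim the hypercomputers below attempt to overturn.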

Let us start by discussing which physical processes should count as computing, so as to run as candidates for hypercomputation. Piccinini brings forward a usability constraint, which restricts the notion of computation to activities that agents can utilise for their purposes: to attain the value of some preconceived function by plugging its argument into the computing system and then waiting for an output (2011: 735). Such a process, according to Piccinini, ought to be executable in that it 1) has recognisable inputs and outputs; 2) solves a specific problem p that is purposefully implemented in a computing system P, such that p is knowable before implementation and we do not just compute the future state of P; 3) gives an answer in finite time, so as to be useful and repeatable for a finite agent; and 4) is constructible, since no useful computing device is unconstructible (ibid: 740-746). In order to make this constructibility more immediately interesting, allow me to restrict my attention to nomologically accessible worlds. Our set of worlds should not be all logically possible ones, since then constructible counterexamples are surely conceivable and PhCT is trivially false. Instead, let us concentrate on those worlds that are identical to ours in nomic properties and give PhCT a "fighting chance" (Hogarth, 1994: 133). Lastly, 5) any useful computing device ought to reach an output reliably (Piccinini, 2011: 734).

Many models of hypercomputation are immediately disqualified under these restrictions. Hypercomputers that require Thomson's-lamp-style supertasks, such as accelerating TMs that perform the first computation in time t, the second in t/2, the third in t/4, and so forth, such that by 2t an infinite number of computations is completed, violate special relativity and hence 4) constructibility in nomologically accessible worlds (see Clark and Read, 1984: 389; Shagrir, 2004: 105; see Thomson, 1954: 5; Russell, 1936: 143-144; in Ord, 2006: 148). Davies' infinity machines, which continuously replicate smaller and faster copies of themselves and so require neither supertasks nor infinite mass-energy (Davies, 2001: 676; also Button, 2009: 769), still rest on the controversial premise that matter is infinitely divisible beyond the Planck scale, and also violate 4) (Button, 2009: 769; also Zizzi, 2004). (For another attempt at hypercomputation, through genuinely random processes, see Turing, 1948: 9; Bowie 1973: 74; Ord, 2006: 147; Church, 1940: 134-135; in Copeland, 2003: 253 and Piccinini, 2011: 750, 754.) Instead, let me explore an alternative promising approach that invokes general relativity (GR).
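The acceleration schedule of such a machine sums as a geometric series, which is why the whole infinite run would fit into finite external time:

```latex
t + \frac{t}{2} + \frac{t}{4} + \dots \;=\; t \sum_{n=0}^{\infty} \frac{1}{2^{n}} \;=\; 2t
```

The trouble, of course, lies not in the arithmetic but in the physics of halving every processing interval forever.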

2 – Computation in Malament-Hogarth (M-H) Spacetimes

GR allows for spacetimes in which there are speed-up effects that reach infinity (Németi and Dávid, 2006: 120). Consider the simple setup in figure 1: at q, an observer launches a TM along 𝛾, while traveling along ϱ herself. The TM is programmed to check for counterexamples to Goldbach's conjecture; if one is found, it shall send a signal (Hogarth, 1994: 126). As the TM enters the spatially finite, yet temporally infinite region C, its proper time continues ordinarily, while from the frame of reference of the observer along ϱ, the TM appears to speed up exponentially. By the time it reaches l, the TM will have persisted for infinity, such that the observer travelling along ϱ can in finite proper time "position herself to the entire future" of the worldline of the TM at r along the "circumventable edge" of C (Hogarth, 1994: 126; 2004: 682). Since an infinite number of operations is "squeezed" into finite time for ϱ, there is prima facie a supertask involved (Button, 2009: 768). However, no component of the M-H setup performs one, since the TM on 𝛾 completes infinitely many computations, without acceleration, in infinite proper time (Hogarth, 2004: 682; Németi and Dávid, 2006: 130).
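The TM's program is a straightforward unbounded search. The sketch below is illustrative and mine, not Hogarth's: the real device halts only upon finding a counterexample, so an artificial cap stands in for the infinite run.

```python
# Sketch of the launched TM's program: search even numbers for a
# counterexample to Goldbach's conjecture (an even n > 2 that is NOT a
# sum of two primes) and "signal" if one is found. The real device runs
# unboundedly; `limit` is an artificial cap for demonstration.

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_holds(n):
    """True if even n > 2 is expressible as a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_counterexample(limit):
    for n in range(4, limit, 2):
        if not goldbach_holds(n):
            return n  # the signal the observer waits for
    return None  # no counterexample below the cap

print(search_counterexample(10_000))  # None: conjecture holds up to 10,000
```

For the observer along ϱ, the interesting case is precisely the one this sketch cannot reach: if no signal ever arrives, she learns at r that the search ran forever without success, i.e. that the conjecture is true.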


According to Piccinini, simple M-H hypercomputers fail to satisfy 3) repeatability, considering they can access "at most one space-time edge" (Piccinini, 2011: 756). Another conceptual problem is the unbounded memory store and mass-energy required (ibid). Gravitational forces might "shorten the signal" from the TM to r (see the blueshift problem in Hogarth, 1994: 127; Németi and Dávid, 2006: 132). Moreover, simple M-H spacetimes might violate the cosmic censorship hypothesis (Earman and Norton, 1993: 35; in Button, 2009: 776; also Hogarth, 1994: 127): the singularity at l is exposed by the TM's ability to send a signal out from it, i.e. it lacks an event horizon from within which no signal could escape. This arguably makes such spacetimes troubling in nomologically accessible worlds.

3 – Sophisticated M-H Spacetime Cases in Nordström and Kerr Black Holes

While the first critiques go beyond the scope of this post, the last point can be swiftly overcome by looking at a more complex M-H model which places the observer inside a black hole (BH). Stellar-mass BHs have a Schwarzschild radius inside which the anomaly's escape velocity surpasses the speed of light, called an outer event horizon. Schwarzschild BHs produce violent tidal stresses along that radius, crushing any conceivable observer. However, supermassive BHs with lower "tidal forces" (Németi and Dávid, 2006: 122) that slowly rotate (Kerr-type), are electrically charged (Reissner-Nordström-type, hereafter Nordström), or both, offer a counterforce to the gravitational pull, observable in a second, inner event horizon where "the repelling force overcomes the gravitational" (ibid: 123). Here an observer could be non-fatally suspended. Imagine a scenario as in figure 2: the observer launches a TM into a stable BH orbit, then approaches and enters the outer event horizon unharmed along 𝛾p. As the observer falls deeper into the anomaly towards the inner event horizon, clocks melt and the universe seemingly speeds up from the observer's frame of reference. The TM appears to compute exponentially faster, reaching infinite speed as the observer falls into the inner event horizon at b (Németi and Dávid, 2006: 122). If the TM finds a counterexample, it will send a recognisable photon pattern to the observer, which in this setup will reach her before she perishes at b (for possible survival see ibid: 123).


Figure 2: Worldline 𝛾p of the observer entering a Kerr black hole (Németi and Dávid, 2006: 123)

The beauty of this setup is that it is not just nomologically accessible, but can be envisioned in our own universe, since there is empirical evidence for supermassive black holes, some of which might be of the Kerr or Nordström type (Roy and Watzke, 2005). Let us test it against the previously introduced usability constraint. It has 1) readable outputs, namely photon patterns sent from the TM into the black hole; solves 2) a preconceived p, namely Goldbach's conjecture; finishes 3) in the observer's finite proper time; is arguably 4) constructible in nomologically accessible worlds; and is perhaps 5) reliable. While 1) and 4) in particular are disputable, allow me to concentrate on the shakiest premise in this test, namely that BH hypercomputation is 5) reliable.

Setting aside that Németi failed to consider that GR does not allow for infinitely stable orbits, since they decay by emitting gravitational waves (Baker et al., 2006), perhaps the most threatening counterexamples target the increasing probability of failure of the TM as it computes towards infinity. The reasons why our TM might malfunction are diverse and range from unexpected hardware events to interference from background radiation. No matter how unlikely any one of these is, each additional computation raises the cumulative probability of failure. As we approach the infinity bound, Button forcefully argues, the probability of failure converges to 1 (for the detailed argument see 2009: 778). This applies to both BH and simple M-H setups. Therefore the observer at r or b respectively will either receive a false positive or, if the TM broke down, no signal at all.
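Button's point can be made quantitative. If each computation independently fails with some fixed probability p > 0, however tiny, the probability that all n computations succeed is (1 - p)^n, which tends to 0 as n grows without bound. A numerical illustration, with an arbitrarily chosen p:

```python
# If each step independently fails with probability p, the probability
# that ALL n steps succeed is (1 - p)**n, which tends to 0 as n grows,
# so a failure somewhere becomes certain. p here is an arbitrary choice.

p = 1e-12  # per-step failure probability (illustrative, not from the post)

for n in (10**6, 10**12, 10**14):
    survive = (1 - p) ** n
    print(f"n = {n:.0e}: P(no failure) = {survive:.3f}")
```

With this p the machine is near-certain to survive a million steps, survives 10^12 steps only about a third of the time, and is all but guaranteed to have failed long before approaching any infinite bound.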

Furthermore, there are physical limitations to infinitely long computations that our TM will suffer in the relevant set of possible worlds: the protons in the matter that makes up the TM will decay within 10^100 years (Kaku, 2005: 298-299). Németi and Dávid propose to shift the scenario into a world whose expansion rate slows down. There, they argue, it would be possible to construct a fleet of robots that indefinitely collect protons to replace the decaying ones inside the TM (2006: 136). I reject this line of argument for the same reason that we ought to reject simple M-H machines: the probability of failure in the proton-gathering robots will reach catastrophic proportions in the very long run.

Alternatively, they suggest ignoring proton decay in worlds where only GR, and not particle physics, holds. I must reject this argument too: not only does it violate nomological accessibility, but once a grand unified theory of physics is discovered, I doubt such separability between GR and particle physics will persist. Demanding a world with GR but no proton decay might turn out to be demanding squares without rectangles. Instead, allow me to propose a somewhat exotic solution to this reliability problem that is compatible with both GR and particle physics and could even hold in our universe: let us look not at an orbiting TM, but at a computing system that, in virtue of being set in an infinite universe, cannot break or decay, namely the universe itself.

4 – A Solution from Ontic Pancomputationalism?

Unlike philosophical notions of pancomputationalism, which posit in their weakest form that every physical system is involved in at least some computations (Piccinini, 2010: 16), ontic pancomputationalism, which originated in physics, regards the entire universe as a giant cellular automaton (Zuse, 1967: 336). Under the assumption that the universe is discrete beyond the Planck scale, particles are expressible as informational, point-sized entities in cells that are algorithmically manipulated to bring about all natural phenomena. Critiques of this view come in many varieties. From physics, it seems highly speculative how such computations could physically proceed for each cell without infinitely many steps (Feynman, 1965: 57). Discreteness itself has also been challenged (for a response with a quantum variety of ontic pancomputationalism see Lloyd, 2006; or Zizzi, 2004). Still, adopting this view is highly attractive for satisfying our 5) reliability constraint, since the universe cannot break or stop computing in virtue of its infinity. Hence an observer in a Nordström BH could in finite proper time observe the universe perform a hypercomputation in calculating its own state at infinity.
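Zuse's picture can be illustrated with the simplest discrete model, an elementary cellular automaton, in which every cell updates in lockstep as a fixed function of its own state and its neighbours' states. The sketch below is my illustration, not Zuse's own model; Rule 110 is used purely as a familiar example rule.

```python
# One step of an elementary cellular automaton on a ring: every cell
# updates simultaneously from its own state and its two neighbours',
# by table lookup. Rule 110 is an arbitrary, well-known example.

RULE = 110  # the lookup table, encoded in the rule number's binary digits

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 10 + [1]  # a single live cell on a ring of 11 cells
for _ in range(5):
    row = step(row)
print(row)
```

On the ontic pancomputationalist picture, something like this update, scaled to every Planck cell of the universe, simply is physics; it has no hardware to break because it has no hardware over and above the world itself.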

Piccinini might successfully respond to my line of reasoning by arguing that I merely trade one evil for another: I salvage reliability by sacrificing both 1) readable outputs and 2) a preconceived problem that our P computes, since the universe is not programmable, and the very nature of pancomputationalism makes it a counterexample to the usability constraint. I shall not discuss this objection here, though, since I believe there is a more pressing problem for my claim that ontic pancomputationalism secures reliability in the observer's ability to attain an answer, even if that answer is simply what the final state of the universe is.

Taking quantum effects into account, we find that Kerr/Nordström BHs have unstable mass due to evaporation in the very long run, and will eventually disappear, destroying any observer inside within 10^35 years (Etesi and Németi, 2002: 368), hence before hypercomputation could be achieved. Németi and Dávid again propose to shift the scenario into a world whose expansion rate slows down, where, they argue, it would be possible to construct a fleet of robots that collect matter to sustain the BH indefinitely (2006: 134). I again reject this argument, since the probability of failure in the robots that sustain the BH will eventually reach catastrophic proportions. In essence, reliability of the computing system through pancomputationalism can only be achieved at the expense of the observer, since there is no structure, BHs included, that will persist into infinity, which I would dare to argue is prima facie a salient feature of nomologically accessible worlds.

5 – Conclusion

Under the PhCT thesis, idealised TMs present the "upper bound" of physical computability, even if they are not constructible themselves (Piccinini, 2011: 744-745). I have argued that what opponents of this notion need to show is that hypercomputation is possible in nomologically accessible worlds. Given the usability constraint on what a candidate must look like, I have considered a variety of options, from accelerating TMs to simple M-H setups and BH approaches, yet for none were we able to establish reliability. Even given pancomputationalism, which keeps the computation going into infinity, no observer in saliently similar possible worlds will survive to read off the result. Therefore I conclude that already under Piccinini's reliability constraint alone, hypercomputation seems troubling. The next step in the debate must be to test my counterexamples against the other components of usability and to further examine my notion of nomological accessibility, especially the nomic necessity of proton decay and BH evaporation, upon which the bulk of my argumentation rests. That task, however, must be left for another post.


References

Baker, J. G., Centrella, J., Choi, D., Koppitz, M., van Meter, J. 2006. Gravitational Wave Extraction from an Inspiraling Configuration of Merging Black Holes. Physical Review Letters 96: 111102-111106.
Bowie, G. L. 1973. An Argument Against Church's Thesis. The Journal of Philosophy 70: 66-76.
Button, T. 2009. SAD Computers and Two Versions of the Church-Turing Thesis. British Journal for the Philosophy of Science 60: 765-792.
Church, A. 1940. On the Concept of a Random Sequence. American Mathematical Society Bulletin 46: 130-135.
Clark, P., Read, S. 1984. Hypertasks. Synthese 61: 387-390.
Copeland, B. J. 2002. The Church-Turing Thesis. Stanford Encyclopedia of Philosophy. URL=[plato.stanford.edu/entries/church-turing]. Accessed 24.03.2013.
Copeland, B. J. 2004. Hypercomputation: Philosophical Issues. Theoretical Computer Science 317: 251-267.
Davies, E. B. 2001. Building Infinity Machines. British Journal for the Philosophy of Science 52: 671-682.
Earman, J., Norton, J. D. 1993. Forever is a Day: Supertasks in Pitowsky and Malament-Hogarth Spacetimes. Philosophy of Science 60: 22-42.
Etesi, G., Németi, I. 2002. Non-Turing Computations via Malament-Hogarth Space-Times. International Journal of Theoretical Physics 41: 341-370.
Feynman, R. 1965. The Character of Physical Law. Cambridge, MA. MIT Press.
Hogarth, M. 1994. Non-Turing Computers and Non-Turing Computability. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, Vol. 1: 126-138.
Hogarth, M. 2004. Deciding Arithmetic Using SAD Computers. British Journal for the Philosophy of Science 55: 681-691.
Kaku, M. 2005. Parallel Worlds. London. Penguin.
Kleene, S. C. 1967. Mathematical Logic. New York. Wiley.
Lloyd, S. 2006. Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos. New York. Knopf.
Németi, I., Dávid, G. 2006. Relativistic Computers and the Turing Barrier. Applied Mathematics and Computation 178: 118-142.
Ord, T. 2006. The Many Forms of Hypercomputation. Applied Mathematics and Computation 178: 143-153.
Piccinini, G. 2010. Computation in Physical Systems. Stanford Encyclopedia of Philosophy. URL=[plato.stanford.edu/entries/computation-physicalsystems]. Accessed 15.02.2013.
Piccinini, G. 2011. The Physical Church-Turing Thesis: Modest or Bold? British Journal for the Philosophy of Science 62: 733-769.
Pitowsky, I. 1990. The Physical Church Thesis and Physical Computational Complexity. Iyyun: A Jerusalem Philosophical Quarterly 39: 81-99.
Roy, S., Watzke, M. 2005. Chandra Finds Evidence for Swarm of Black Holes Near Galactic Centre. NASA release 05-05, January 10.
Russell, B. A. W. 1936. The Limits of Empiricism. Proceedings of the Aristotelian Society 36: 131-150.
Shagrir, O. 2004. Super-Tasks, Accelerating Turing Machines and Uncomputability. Theoretical Computer Science 317: 105-114.
Thomson, J. F. 1954. Tasks and Super-Tasks. Analysis 15: 1-13.
Turing, A. M. 1948. Intelligent Machinery. National Physical Laboratory Report. In Meltzer, B., Michie, D. (eds.), Machine Intelligence 5. Edinburgh University Press.
Zizzi, P. 2004. Spacetime at the Planck Scale: The Quantum Computer View. Foundations of Quantum Mechanics in Cesena: 345-358.
Zuse, K. 1967. Rechnender Raum. Elektronische Datenverarbeitung 8: 336-344.