NP-complete Problems and Physical Reality


arXiv:quant-ph/0502072v2  21 Feb 2005

NP-complete Problems and Physical Reality

Scott Aaronson*

Abstract

Can NP-complete problems be solved efficiently in the physical universe? I survey proposals including soap bubbles, protein folding, quantum computing, quantum advice, quantum adiabatic algorithms, quantum-mechanical nonlinearities, hidden variables, relativistic time dilation, analog computing, Malament-Hogarth spacetimes, quantum gravity, closed timelike curves, and "anthropic computing." The section on soap bubbles even includes some "experimental" results. While I do not believe that any of the proposals will let us solve NP-complete problems efficiently, I argue that by studying them, we can learn something not only about computation but also about physics.

1 Introduction

"Let a computer smear—with the right kind of quantum randomness—and you create, in effect, a 'parallel' machine with an astronomical number of processors . . . All you have to do is be sure that when you collapse the system, you choose the version that happened to find the needle in the mathematical haystack."

—From Quarantine [30], a 1992 science-fiction novel by Greg Egan

If I had to debate the science writer John Horgan's claim that basic science is coming to an end [47], my argument would lean heavily on one fact: it has been only a decade since we learned that quantum computers could factor integers in polynomial time. In my (unbiased) opinion, the showdown that quantum computing has forced—between our deepest intuitions about computers on the one hand, and our best-confirmed theory of the physical world on the other—constitutes one of the most exciting scientific dramas of our time.

But why did this drama not occur until so recently? Arguably, the main ideas were already in place by the 1960's or even earlier. I do not know the answer to this sociological puzzle, but can suggest two possibilities. First, many computer scientists see the study of "speculative" models of computation as at best a diversion from more serious work; this might explain why the groundbreaking papers of Simon [67] and Bennett et al. [17] were initially rejected from the major theory conferences. And second, many physicists see computational complexity as about as relevant to the mysteries of Nature as dentistry or tax law.

Today, however, it seems clear that there is something to gain from resisting these attitudes. We would do well to ask: what else about physics might we have overlooked in thinking about the limits of efficient computation? The goal of this article is to encourage the serious discussion of this question. For concreteness, I will focus on a single sub-question: can NP-complete problems be solved in polynomial time using the resources of the physical universe?

I will argue that studying this question can yield new insights, not just about computer science but about physics as well. More controversially, I will also argue that a negative answer might

*Institute for Advanced Study, Princeton, NJ. Email: [email protected]. Supported by the NSF.


eventually attain the same status as (say) the Second Law of Thermodynamics, or the impossibility of superluminal signalling. In other words, while experiment will always be the last appeal, the presumed intractability of NP-complete problems might be taken as a useful constraint in the search for new physical theories. Of course, the basic concept will be old hat to computer scientists who live and die by the phrase, "Assuming P ≠ NP, . . ."

To support my arguments, I will survey a wide range of unusual computing proposals, from soap bubbles and folding proteins to time travel, black holes, and quantum nonlinearities. Some of the proposals are better known than others, but to my knowledge, even the folklore ones have never before been collected in one place. In evaluating the proposals, I will try to insist that all relevant resources be quantified, and all known physics taken into account. As we will see, these straightforward ground rules have been casually ignored in some of the literature on exotic computational models.

Throughout the article, I assume basic familiarity with complexity classes such as P and NP (although not much more than that). Sometimes I do invoke elementary physics concepts, but the difficulty of the physics is limited by my own ignorance.

After reviewing the basics of P versus NP in Section 2, I discuss soap bubbles and related proposals in Section 3, and even report some original experimental work in this field. Then Section 4 summarizes what is known about solving NP-complete problems on a garden-variety quantum computer; it includes discussions of black-box lower bounds, quantum advice, and the quantum adiabatic algorithm. Section 5 then considers variations on quantum mechanics that might lead to a more powerful model of computation; these include nonlinearities in the Schrödinger equation and certain assumptions about hidden variables. Section 6 moves on to consider analog computing, time dilation, and exotic spacetime geometries; this section is basically a plea to those who think about these matters, to take seriously such trivialities as quantum mechanics and the Planck scale. Relativity and quantum mechanics finally meet in Section 7, on the computational complexity of quantum gravity theories, but the whole point of the section is to explain why this is a premature subject. Sections 8 and 9 finally set aside the more sober ideas (like solving the halting problem using naked singularities), and give zaniness free rein. Section 8 studies the computational complexity of time travel, while Section 9 studies "anthropic computing," which means killing yourself whenever a computer fails to produce a certain output. It turns out that even about these topics, there are nontrivial things to be said! Finally, Section 10 makes the case for taking the hardness of NP-complete problems to be a basic fact about the physical world; and weighs three possible objections against doing so.

I regret that, because of both space and cognitive limitations, I was unable to discuss every paper related to the solvability of NP-complete problems in the physical world. Two examples of omissions are the gear-based computers of Vergis, Steiglitz, and Dickinson [75], and the proposed adiabatic algorithm for the halting problem due to Kieu [53]. Also, I generally ignored papers about hypercomputation that did not try to forge some link, however tenuous, with the laws of physics as we currently understand them.

2 The Basics

I will not say much about the original P versus NP question: only that the known heuristic algorithms for the 3SAT problem, such as backtrack, simulated annealing, GSAT, and survey propagation, can solve some instances quickly in practice, but are easily stumped by other instances; that the standard opinion is that P ≠ NP [40]; that proving this is correctly seen as one of the deepest problems in all of mathematics [50]; that no one has any idea where to begin [34]; and that we have


Figure 1: A Steiner tree connecting points at (.7, .96), (.88, .46), (.88, .16), (.19, .26), (.19, .06) (where (0, 0) is in the lower left corner, and (1, 1) in the upper right). There are two Steiner vertices, at roughly (.24, .19) and (.80, .26).

a pretty sophisticated idea of why we have no idea [62]. See [69] or [39] for more information.

Of course, even if there is no deterministic algorithm to solve NP-complete problems in polynomial time, there might be a probabilistic algorithm, or a nonuniform algorithm (one that is different for each input length n). But Karp and Lipton [52] showed that either of these would have a consequence, namely the collapse of the polynomial hierarchy, that seems almost as implausible as P = NP. Also, Impagliazzo and Wigderson [49] gave strong evidence that P = BPP; that is, that any probabilistic algorithm can be simulated by a deterministic one with polynomial slowdown.

It is known that P ≠ NP in a "black box" or "oracle" setting [11]. This just means that any efficient algorithm for an NP-complete problem would have to exploit the problem's structure in a nontrivial way, as opposed to just trying one candidate solution after another until it finds one that works. Interestingly, most of the physical proposals for solving NP-complete problems that we will see do not exploit structure, in the sense that they would still work relative to any oracle. Given this observation, I propose the following challenge: find a physical assumption under which NP-complete problems can provably be solved in polynomial time, but only in a non-black-box setting.

3 Soap Bubbles et al.

Given a set of points in the Euclidean plane, a Steiner tree (see Figure 1) is a collection of line segments of minimum total length connecting the points, where the segments can meet at vertices (called Steiner vertices) other than the points themselves. Garey, Graham, and Johnson [38] showed that finding such a tree is NP-hard.¹ Yet a well-known piece of computer science folklore maintains that, if two glass plates with pegs between them are dipped into soapy water, then the soap bubbles will rapidly form a Steiner tree connecting the pegs, this being the minimum-energy configuration.
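For readers who want to play with such instances numerically, here is a minimal sketch (mine, not part of the original experiment) that optimizes the two Steiner-vertex positions for the Figure 1 instance. It assumes a fixed tree topology guessed from the figure, and it assumes scipy is available; a real solver would also have to search over topologies, which is where the exponential cost hides.

```python
# Sketch: locally optimize Steiner-vertex positions for the Figure 1
# instance, with the tree TOPOLOGY held fixed. The topology below (which
# terminal attaches where) is my assumption, chosen to match the figure.
import numpy as np
from scipy.optimize import minimize

terminals = np.array([(.7, .96), (.88, .46), (.88, .16), (.19, .26), (.19, .06)])

def tree_length(steiner_flat):
    s1, s2 = steiner_flat.reshape(2, 2)           # two Steiner vertices
    d = lambda a, b: np.linalg.norm(np.asarray(a) - np.asarray(b))
    # Assumed topology: s1 serves the two left terminals, s2 the two right
    # ones, s1--s2 joins the halves, and the top terminal hangs off (.88,.46).
    return (d(s1, terminals[3]) + d(s1, terminals[4]) + d(s1, s2)
            + d(s2, terminals[1]) + d(s2, terminals[2])
            + d(terminals[0], terminals[1]))

res = minimize(tree_length, x0=np.array([.3, .2, .7, .3]), method="Nelder-Mead")
# Compare the optimized Steiner vertices with the figure's ~(.24,.19), (.80,.26).
print(res.x.reshape(2, 2), res.fun)
```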

It was only a matter of time before someone put the pieces together. Last summer Bringsjord and Taylor [24] posted a paper entitled "P=NP" to the arXiv. This paper argues that, since (1) finding a Steiner tree is NP-hard, (2) soap bubbles find a Steiner tree in polynomial time, (3) soap bubbles are classical objects, and (4) classical physics can be simulated by a Turing machine with polynomial slowdown, it follows that P = NP.

¹ Naturally, the points' coordinates must be specified to some finite precision. If we only need to decide whether there exists a tree of total length at most L, or whether all trees have length at least L + ε (for some small ε > 0), then the problem becomes NP-complete.


My immediate reaction was that the paper was a parody. However, a visit to Bringsjord's home page² suggested that it was not. Impelled, perhaps, by the same sort of curiosity that causes people to watch reality TV shows, I checked the discussion of this paper on the comp.theory newsgroup to see if anyone recognized the obvious error. And indeed, several posters pointed out that, although soap bubbles might reach a minimum-energy configuration with a small number of pegs, there is no magical reason why this should be true in general. By analogy, a rock in a mountain crevice could reach a lower-energy configuration by rolling up first and then down, but it is not observed to do so. A poster named Craig Feinstein replied to these skeptics as follows [33]:

Have experiments been done to show that it is only a local minimum that is reached by soap bubbles and not a global minimum or is this just the party line? I'd like to believe that nature was designed to be smarter than we give it credit. I'd be willing to make a gentleman's bet that no one can site [sic] a paper which describes an experiment that shows that the global minimum is not always achieved with soap bubbles.

Though I was unable to find such a paper, I was motivated by this post to conduct the experiment myself.³ I bought two 8″ × 9″ glass plates, paint to mark grid points on the plates, thin copper rods which I cut into 1″ pieces, suction cups to attach the rods to the plates, liquid oil soap, a plastic tub to hold the soapy water, and work gloves. I obtained instances of the Euclidean Steiner Tree problem from the OR-Library website [14]. I concentrated on instances with 3 to 7 vertices, for example the one shown in Figure 1.

The result was fascinating to watch: with 3 or 4 pegs, the optimum tree usually is found. However, by no means is it always found, especially with more pegs. Soap-bubble partisans might write this off as experimental error, caused (for example) by inaccuracy in placing the pegs, or by the interference of my hands. However, I also sometimes found triangular "bubbles" of three Steiner vertices—which is much harder to explain, since such a structure could never occur in a Steiner tree. In general, the results were highly nondeterministic; I could obtain entirely different trees by dunking the same configuration more than once. Sometimes I even obtained a tree that did not connect all the pegs.

Another unexpected phenomenon was that sometimes the bubbles would start in a suboptimal configuration, then slowly relax toward a better one. Even with 4 or 5 pegs, this process could take around ten seconds, and it is natural to predict that with more pegs it would take longer. In short, then, I found no reason to doubt the party line, that soap bubbles do not solve NP-complete problems in polynomial time by magic.⁴

There are other proposed methods for solving NP-complete problems that involve relaxation to a minimum-energy state, such as spin glasses and protein folding. All of these methods are subject to the same pitfalls of local optima and potentially long relaxation times. Protein folding is an interesting case, since it seems likely that proteins evolved specifically not to have local optima. A protein that folded in unpredictable ways could place whatever organism relied on it at an adaptive disadvantage (although sometimes it happens anyway, as with prions). However, this also means that if we engineered an artificial protein to represent a hard 3SAT instance, then there would be no particular reason for it to fold as quickly or reliably as do naturally occurring proteins.

² www.rpi.edu/~brings
³ S. Aaronson, "NP-complete Problems and Physical Reality," SIGACT News Complexity Theory Column, March 2005. I win.
⁴ Some people have objected that, while all of this might be true in practice, I still have not shown that soap bubbles cannot solve NP-complete problems in principle. But what exactly does "in principle" mean? If it means obeying the equations of classical physics, then the case for magical avoidance of local optima moves from empirically weak to demonstrably false, as in the case of the rock stuck in the mountain crevice.


4 Quantum Computing

Outside of theoretical computer science, parallel computers are sometimes discussed as if they were fundamentally more powerful than serial computers. But of course, anything that can be done with 10^20 processors in time T can also be done with one processor in time 10^20 T. The same is true for DNA strands. Admittedly, for some applications a constant factor of 10^20 is not irrelevant.⁵ But for solving (say) 3SAT instances with hundreds of thousands of variables, 10^20 is peanuts.

When quantum computing came along, it was hoped that finally we might have a type of parallelism commensurate with the difficulty of NP-complete problems. For in quantum mechanics, we need a vector of 2^n complex numbers called "amplitudes" just to specify the state of an n-bit computer (see [3, 35, 57] for more details). Surely we could exploit this exponentiality inherent in Nature to try out all 2^n possible solutions to an NP-complete problem in parallel? Indeed, many popular articles on quantum computing have given precisely that impression.

The trouble is that if we measure the computer's state, we see only one candidate solution x, with probability depending on its amplitude α_x.⁶ The challenge is to arrange the computation in such a way that only the x's we wish to see wind up with large values of |α_x|. For the special case of factoring, Shor [66] showed that this could be done using a polynomial number of operations—but what about for NP-complete problems?

The short answer is that we don't know. Indeed, letting BQP be the class of problems solvable in polynomial time by a quantum computer, we do not even know whether NP ⊆ BQP would imply P = NP or some other unlikely consequence in classical complexity.⁷ But in 1994, Bennett, Bernstein, Brassard, and Vazirani [17] did show that NP ⊄ BQP relative to an oracle. In particular, they showed that any quantum algorithm that searches an unordered database of N items for a single marked item must query the database Ω(√N) times. (Soon afterward, Grover [43] showed that this is tight.)

If we interpret the space of 2^n possible assignments to a Boolean formula φ as a "database," and the satisfying assignments of φ as "marked items," then Bennett et al.'s result says that any quantum algorithm needs at least ~2^{n/2} steps to find a satisfying assignment of φ with high probability, unless the algorithm exploits the structure of φ in a nontrivial way. In other words, there is no "brute-force" quantum algorithm to solve NP-complete problems in polynomial time, just as there is no brute-force classical algorithm.
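To make the quadratic gap concrete, here is a back-of-envelope comparison (my illustration, not from the paper) of classical brute force against the roughly (π/4)√N Grover iterations, which by the BBBV bound no quantum algorithm can improve on for unstructured search:

```python
# Query counts for searching N = 2^n candidates with a single marked item:
# classical brute force expects N/2 queries; Grover needs ~(pi/4)*sqrt(N).
# Quadratic savings, but still exponential in n.
import math

for n in (50, 100, 200):
    N = 2 ** n
    classical = N / 2
    quantum = (math.pi / 4) * math.sqrt(N)
    print(f"n={n}: classical ~{classical:.2e} queries, quantum ~{quantum:.2e}")
```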

In Bennett et al.'s original proof, we first run our quantum algorithm on a database with no marked items. We then mark the item that was queried with the smallest total probability, and show that the algorithm will need many queries to notice this change. By now, many other proofs have been discovered, including that of Beals et al. [13], which represents an efficient quantum algorithm's acceptance probability by a low-degree polynomial, and then shows that no such polynomial exists; and that of Ambainis [9], which upper-bounds how much the entanglement between the algorithm and database can increase via a single query. Both techniques have also led to lower bounds for many other problems besides database search.

The crucial property of quantum mechanics that all three proofs exploit is its linearity: the fact that, until a measurement is made, the vector of amplitudes can only evolve by means of linear transformations. Intuitively, if we think of the components of a superposition as "parallel universes," then linearity is what prevents the universe containing the marked item from simply telling all the other universes about it.

⁵ This is one fact I seem to remember from my computer architecture course.
⁶ Some authors recognized this difficulty even in the 1980's; see Pitowsky [59] for example.
⁷ On the other hand, if #P-complete problems were solvable in quantum polynomial time, then this would have an unlikely classical complexity consequence, namely the collapse of the so-called counting hierarchy.


4.1 Quantum Advice

The above assumed that our quantum computer begins in some standard initial state, such as the all-0 state (denoted |0...0⟩). An interesting twist is to consider the effects of other initial states. Are there quantum states that could take exponential time to prepare, but that would let us solve NP-complete problems in polynomial time were they given to us by a wizard? More formally, let BQP/qpoly be the class of problems solvable in quantum polynomial time, given a polynomial-size quantum advice state |ψ_n⟩ that depends only on the input length n. Then recently I showed that NP ⊄ BQP/qpoly relative to an oracle [2]. Intuitively, even if the state |ψ_n⟩ encoded the solutions to every 3SAT instance of size n, only a minuscule fraction of that information could be extracted by measuring |ψ_n⟩, at least within the black-box model that we know how to analyze. The proof uses the polynomial technique of Beals et al. [13] to prove a so-called direct product theorem, which upper-bounds the probability of solving many database search problems simultaneously. It then shows that this direct product theorem could be violated, if the search problem were efficiently solvable using quantum advice.

4.2 The Quantum Adiabatic Algorithm

At this point, some readers may be getting impatient with the black-box model. After all, NP-complete problems are not black boxes, and classical algorithms such as backtrack search do exploit their structure. Why couldn't a quantum algorithm do the same? A few years ago, Farhi et al. [32] announced a new "quantum adiabatic algorithm," which can be seen as a quantum analogue of simulated annealing. Their algorithm is easiest to describe in a continuous-time setting, using the concepts of a Hamiltonian (an operation that acts on a quantum state over an infinitesimal time interval δt) and a ground state (the lowest-energy state left invariant by a given Hamiltonian). The algorithm starts by applying a Hamiltonian H0 that has a known, easily prepared ground state, then slowly transitions to another Hamiltonian H1 whose ground state encodes the solution to (say) an instance of 3SAT. The quantum adiabatic theorem says that if a quantum computer starts in the ground state of H0, then it must end in the ground state of H1, provided the transition from H0 to H1 is slow enough. The key question is how slow is "slow enough."

In their original paper, Farhi et al. [32] gave numerical evidence that the adiabatic algorithm solves random, critically-constrained instances of the NP-complete Exact Cover problem in polynomial time. But having learned from experience, most computer scientists are wary of taking such numerical evidence too seriously as a guide to asymptotic behavior. This is especially true when the instance sizes are small (n ≤ 20 in Farhi et al.'s case), as they have to be when simulating a quantum computer on a classical one. On the other hand, Farhi relishes pointing out that if the empirically-measured running time were exponential, no computer scientist would dream of saying that it would eventually become polynomial! In my opinion, the crucial experiment (which has not yet been done) would be to compare the adiabatic algorithm head-on against simulated annealing and other classical heuristics. The evidence for the adiabatic algorithm's performance would be much more convincing if the known classical algorithms took exponential time on the same random instances.

On the theoretical side, van Dam, Mosca, and Vazirani [74] constructed 3SAT instances for which the adiabatic algorithm provably takes exponential time, at least when the transition between the initial and final Hamiltonians is linear. Their instances involve a huge basin of attraction that leads to a false optimum (meaning most but not all clauses are satisfied), together with an exponentially small basin that leads to the true optimum. To lower-bound the algorithm's running time on these instances, van Dam et al. showed that the spectral gap (that is, the gap between the


smallest and second-smallest eigenvalues) of some intermediate Hamiltonian decreases exponentially in n. As it happens, physicists have almost a century of experience in analyzing these spectral gaps, but not for the purpose of deciding whether they decrease polynomially or exponentially as the number of particles increases to infinity!
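As an illustration of the quantity being analyzed, the sketch below (mine; the toy 3SAT instance and the transverse-field choice of H0 are my arbitrary assumptions, not van Dam et al.'s construction) computes the spectral gap of H(s) = (1−s)H0 + sH1 along the linear interpolation:

```python
# Toy version of the quantity van Dam et al. analyze: the spectral gap of
# H(s) = (1-s)*H0 + s*H1. H1 is the diagonal "cost" Hamiltonian counting
# violated clauses of a small 3SAT instance; H0 is the standard
# transverse-field mixer whose ground state is the uniform superposition.
import itertools
import numpy as np

n = 4
clauses = [(1, -2, 3), (-1, 2, 4), (2, -3, -4)]   # literal i = x_|i|, sign = negation

def violated(assignment, clause):
    # a clause is violated iff every one of its three literals is false
    return all((assignment[abs(l) - 1] == 1) == (l < 0) for l in clause)

diag = [sum(violated(a, c) for c in clauses)
        for a in itertools.product([0, 1], repeat=n)]
H1 = np.diag(np.array(diag, dtype=float))

I2, X = np.eye(2), np.array([[0., 1.], [1., 0.]])
H0 = sum(np.kron(np.kron(np.eye(2 ** i), (I2 - X) / 2), np.eye(2 ** (n - i - 1)))
         for i in range(n))

# With several satisfying assignments the gap closes at s = 1 itself (the
# final ground state is degenerate); the relevant gaps are the intermediate ones.
for s in np.linspace(0, 1, 11):
    evals = np.linalg.eigvalsh((1 - s) * H0 + s * H1)
    print(f"s={s:.1f}  gap={evals[1] - evals[0]:.4f}")
```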

Such hands-on analysis of the adiabatic algorithm was necessary, since van Dam et al. also showed that there is no black-box proof that the algorithm takes exponential time. This is because, given a variable assignment X to the 3SAT instance φ, the adiabatic algorithm computes not merely whether X satisfies φ, but also how many clauses it satisfies. And this information turns out to be sufficient to reconstruct φ itself.

Recently Reichardt [63], building on work of Farhi, Goldstone, and Gutmann [31], has constructed 3SAT instances for which the adiabatic algorithm takes polynomial time, whereas simulated annealing takes exponential time. These instances involve a narrow obstacle along the path to the global optimum, which simulated annealing gets stuck at but which the adiabatic algorithm tunnels past. On the other hand, these instances are easily solved by other classical algorithms. An interesting open question is whether there exists a family of black-box functions f : {0,1}^n → Z for which the adiabatic algorithm finds a global minimum using exponentially fewer queries than any classical algorithm.

5 Variations on Quantum Mechanics

Quantum computing skeptics sometimes argue that we do not really know whether quantum mechanics itself will remain valid in the regime tested by quantum computing.⁸ Here, for example, is Leonid Levin [55]: "The major problem [with quantum computing] is the requirement that basic quantum equations hold to multi-hundredth if not millionth decimal positions where the significant digits of the relevant quantum amplitudes reside. We have never seen a physical law valid to over a dozen decimals."

The irony is that most of the specific proposals for how quantum mechanics could be wrong suggest a world with more, not less, computational power than BQP. For, as we saw in Section 4, the linearity of quantum mechanics is what prevents one needle in an exponentially large haystack from shouting above the others. And as observed by Weinberg [77], it seems difficult to change quantum mechanics in any consistent way while preserving linearity.

But how drastic could the consequences possibly be, if we added a tiny nonlinear term to the Schrödinger equation (which describes how quantum states evolve in time)? For starters, Gisin [41] and Polchinski [60] showed that in most nonlinear variants of quantum mechanics, one could use entangled states to transmit superluminal signals. More relevant for us, Abrams and Lloyd [6] showed that one could solve NP-complete and even #P-complete problems in polynomial time—at least if the computation were error-free. Let us see why this is, starting with NP.

Given a black-box function f that maps {0,1}^n to {0,1}, we want to decide in polynomial time whether there exists an input x such that f(x) = 1. We can start by preparing a uniform superposition over all inputs, denoted 2^{-n/2} Σ_x |x⟩, and then querying the oracle for f, to produce 2^{-n/2} Σ_x |x⟩|f(x)⟩. If we then apply Hadamard gates to the first register and measure that register, one can show that we will obtain the outcome |0...0⟩ with probability at least 1/4. Furthermore, conditioned on the first register having the state |0...0⟩, the second register will be in the state

    ((2^n − s)|0⟩ + s|1⟩) / √((2^n − s)² + s²)

⁸ Personally, I agree, and consider this the main motivation for trying to build a quantum computer.


where s is the number of inputs x such that f(x) = 1. So the problem reduces to that of distinguishing two possible states of a single qubit—for example, the states corresponding to s = 0 and s = 1. The only difficulty is that these states are exponentially close.

But a nonlinear operation need not preserve the angle between quantum states—it can pry them apart. Indeed, Abrams and Lloyd showed that by repeatedly applying a particular kind of nonlinear gate, which arises in a model of Weinberg [77], one could increase the angle between two quantum states exponentially, and thereby distinguish the s = 0 and s = 1 cases with constant bias. It seems likely that almost any nonlinear gate would confer the same ability, though it is unclear how to formalize this statement.

To solve #P-complete problems, we use the same basic algorithm, but apply it repeatedly to zoom in on the value of s using binary search. Given any range [a, b] that we believe contains s, by applying the nonlinear gate a suitable number of times we can make the case s = a correspond roughly to |0⟩, and the case s = b correspond roughly to |1⟩. Then measuring the state will provide information about whether s is closer to a or b. This is true even if (b − a)/2^n is exponentially small.
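The following toy calculation (my illustration; the angle-doubling map stands in for the actual Weinberg gate, which behaves differently in detail) shows why a nonlinearity that pries states apart needs only O(n) applications to separate the s = 0 and s = 1 cases:

```python
# Illustration: any map that doubles the angle a qubit makes with |0> turns
# the exponentially small separation between the s = 0 and s = 1
# post-measurement states into a constant one after O(n) applications.
import math

n = 50
for s in (0, 1):
    # angle of ((2^n - s)|0> + s|1>) / sqrt((2^n - s)^2 + s^2) from |0>
    theta = math.atan2(s, 2 ** n - s)
    steps = 0
    while 0 < theta < math.pi / 4:      # s = 0 stays exactly at angle 0
        theta *= 2                      # hypothetical nonlinear gate
        steps += 1
    print(f"s={s}: distinguishable after {steps} gate applications")
```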

Indeed, if arbitrary 1-qubit nonlinear operations are allowed, then it is not hard to see that we could even solve PSPACE-complete problems in polynomial time. It suffices to solve the following problem: given a Boolean function f of n bits x₁, . . . , xₙ, does there exist a setting of x₁ such that for all settings of x₂ there exists a setting of x₃ such that . . . f(x₁, . . . , xₙ) = 1? To solve this, we can first prepare the state

    2^{-n/2} Σ_{x₁,...,xₙ} |x₁ . . . xₙ, f(x₁ . . . xₙ)⟩.

We then apply a nonlinear AND gate to the nth and (n+1)st qubits, which maps |00⟩ + |10⟩, |00⟩ + |11⟩, and |01⟩ + |10⟩ to |00⟩ + |10⟩, and |01⟩ + |11⟩ to itself (omitting the √2 normalization). Next we apply a nonlinear OR gate to the (n−1)st and (n+1)st qubits, which maps |00⟩ + |11⟩, |01⟩ + |10⟩, and |01⟩ + |11⟩ to |01⟩ + |11⟩, and |00⟩ + |10⟩ to itself. We continue to alternate between AND and OR in this manner, while moving the "control" qubit leftward towards x₁. At the end, the (n+1)st qubit will be |1⟩ if the answer is yes, and |0⟩ if the answer is no.

On the other hand, any nonlinear quantum computer can also be simulated in PSPACE. For even in nonlinear theories, the amplitude of any basis state at time t is an easily-computable function of a small number of amplitudes at time t − 1, and can therefore be computed in polynomial space using depth-first recursion. It follows that, assuming arbitrary nonlinear gates and no error, PSPACE exactly characterizes the power of nonlinear quantum mechanics.
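Here is a sketch of that depth-first recursion for the ordinary linear case (my illustration, on a tiny 2-qubit circuit): amp() stores only the recursion stack, never the 2^n-entry state vector, and for a nonlinear gate one would simply combine the recursive calls nonlinearly instead of summing them.

```python
# The amplitude of basis state x after gate t depends on only a few
# amplitudes after gate t-1 (two, for a Hadamard; one, for a CNOT), so it
# can be computed recursively in space polynomial in the circuit size.
import math

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
circuit = [("H", 0), ("H", 1), ("CNOT", (0, 1))]   # example 2-qubit circuit

def amp(x, t, n):
    """Amplitude of n-bit basis state x after the first t gates."""
    if t == 0:
        return 1.0 if x == (0,) * n else 0.0       # start in |0...0>
    gate, wires = circuit[t - 1]
    if gate == "H":
        q = wires
        # sum over the two predecessor basis states on qubit q
        return sum(H[x[q]][b] * amp(x[:q] + (b,) + x[q + 1:], t - 1, n)
                   for b in (0, 1))
    if gate == "CNOT":                             # permutation: one predecessor
        c, tgt = wires
        y = list(x)
        y[tgt] ^= x[c]
        return amp(tuple(y), t - 1, n)

n = 2
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, round(amp(x, len(circuit), n), 3))
```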

But what if we allow error, as any physically reasonable model of computation must? In this case, while it might still be possible to solve NP-complete problems in polynomial time, I am not convinced that Abrams and Lloyd have demonstrated this.⁹ Observe that the standard quantum error-correction theorems break down, since just as a tiny probability of success can be magnified exponentially during the course of a computation, so too can a tiny probability of error. Whether this problem can be overcome might depend on which specific nonlinear gates are available; the issue deserves further investigation.

⁹ Abrams and Lloyd claimed to give an algorithm that does not require exponentially precise operations. The problem is that their algorithm uses a nonlinear OR gate, and depending on how that gate behaves on states other than |00⟩ + |10⟩, |00⟩ + |11⟩, |01⟩ + |10⟩, and |01⟩ + |11⟩, it might magnify small errors exponentially. In particular, I could not see how to implement a nonlinear OR gate robustly using Abrams and Lloyd's Weinberg gate.


5.1 Hidden-Variable Theories

Most people who quote Einstein's declaration that "God does not play dice" seem not to realize that a dice-playing God would be an improvement over the actual situation. In quantum mechanics, a particle does not have a position, even an unknown position, until it is measured. This means that it makes no sense to talk about a trajectory of the particle, or even a probability distribution over possible trajectories. And without such a distribution, it is not clear how we can make even probabilistic predictions for future observations, if we ourselves belong to just one component of a larger superposition.

Hidden-variable theories try to remedy this problem by supplementing quantum mechanics with the actual values of certain observables (such as particle positions or momenta), together with rules for how those observables evolve in time. The most famous such theory is due to Bohm [20], but there are many alternatives that are equally compatible with experiment. Indeed, a key feature of hidden-variable theories is that they reproduce the usual quantum-mechanical probabilities at any individual time, and so are empirically indistinguishable from ordinary quantum mechanics. It does not seem, therefore, that a hidden-variable quantum computer could possibly be more powerful than a garden-variety one.

On the other hand, it might be that Nature needs to expend more computational effort to calculate a particle's entire trajectory than to calculate its position at any individual time. The reason is that the former requires keeping track of multiple-time correlations. And indeed, I showed in [4] that under any hidden-variable theory satisfying a reasonable axiom called "indifference to the identity," the ability to sample the hidden variable's history would let us solve the Graph Isomorphism problem in polynomial time. For intuitively, given two graphs G and H with no nontrivial automorphisms, one can easily prepare a uniform superposition over all permutations of G and H:

    (1/√(2·n!)) Σ_{σ∈Sₙ} (|0⟩|σ⟩|σ(G)⟩ + |1⟩|σ⟩|σ(H)⟩).

Then measuring the third register yields a state of the form |i⟩|σ⟩ if G and H are not isomorphic, or (|0⟩|σ⟩ + |1⟩|τ⟩)/√2 for some σ ≠ τ if they are isomorphic. Unfortunately, if we then measured this state in the standard basis, we would get no information whatsoever, and work of myself [1], Shi [65], and Midrijanis [56] shows that no black-box quantum algorithm can do much better. But if only we could make a few non-collapsing measurements! Then we would see the same permutation each time in the former case, but two permutations with high probability in the latter.

The key point is that seeing a hidden variable's history would effectively let us simulate non-collapsing measurements. Using this fact, I showed that by sampling histories, we could simulate the entire class SZK of problems having statistical zero-knowledge proofs, which includes Graph Isomorphism, Approximate Shortest Vector, and other "NP-intermediate" problems for which no efficient quantum algorithm is known. On the other hand, SZK is not thought to contain the NP-complete problems; indeed, if it did then the polynomial hierarchy would collapse [21]. And it turns out that, even if we posit the unphysical ability to sample histories, we still could not solve NP-complete problems efficiently in the black-box setting! The best we could do is search a list of N items in ~N^{1/3} steps, as opposed to ~N^{1/2} with Grover's algorithm.

But even if a hidden-variable picture is correct, are these considerations relevant to any computations we could perform? They would be, if a proposal of Valentini [73, 72] were to pan out. Valentini argues that the |ψ|² probability law merely reflects a statistical equilibrium (analogous to thermal equilibrium), and that it might be possible to find "nonequilibrium" matter (presumably left over from the Big Bang) in which the hidden variables still obey a different distribution. Using such matter, Valentini showed that we could distinguish nonorthogonal states, and thereby


transmit superluminal signals and break quantum cryptographic protocols. He also claimed that we could solve NP-complete problems in polynomial time. Unfortunately, his algorithm involves measuring a particle's position to exponential precision, and if we could do that, then it is unclear why we could not also solve NP-complete problems in polynomial time classically! So in my view, the power of Valentini's model with realistic constraints on precision remains an intriguing open question. My conjecture is that it will turn out to be similar to the power of the histories model—that is, able to solve SZK problems in polynomial time, but not NP-complete problems in the black-box setting. I would love to be disproven.

6 Relativity and Analog Computing

If quantum computers cannot solve NP-complete problems efficiently, then perhaps we should turn to the other great theory of twentieth-century physics: relativity. The idea of relativity computing is simple: start your computer working on an intractable problem, then board a spaceship and accelerate to nearly the speed of light. When you return to Earth, all of your friends will be long dead, but the answer to your problem will await you.

What is the problem with this proposal? Ignoring the time spent accelerating and decelerating, if you travelled at speed v relative to Earth for proper time t (where v = 1 is light speed), then the elapsed time in your computer's reference frame would be t′ = t/√(1 − v²). It follows that, if you want t′ to increase exponentially with t, then v has to be exponentially close to the speed of light. But this implies that the amount of energy needed to accelerate the spaceship also increases exponentially with t. So your spaceship's fuel tank (or whatever else is powering it) will need to be exponentially large—which means that you will again need exponential time, just for the fuel from the far parts of the tank to affect you!
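A quick calculation (mine) makes the requirement concrete: a Lorentz factor of 2^n means being within roughly 2^{−2n−1} of the speed of light, with kinetic energy growing like (γ − 1)mc², i.e. exponentially in n.

```python
# For the computer to age t' = 2^n years per year of your proper time, you
# need gamma = t'/t = 2^n; since gamma = 1/sqrt(1 - v^2) (with c = 1),
# 1 - v ~ 1/(2*gamma^2) = 2^(-2n-1), and kinetic energy ~ (gamma - 1)*m*c^2.
import math

for n in (10, 100, 1000):
    log10_one_minus_v = (-2 * n - 1) * math.log10(2)
    print(f"n={n}: gamma = 2^{n}, 1 - v ≈ 10^{log10_one_minus_v:.1f}")
```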

Similar remarks apply to traveling close to a black hole event horizon: if you got exponentially close then you would need exponential energy to escape.¹⁰ On the other hand, Malament and Hogarth (see [46]) have constructed spacetimes in which, by traveling for a finite proper time along one worldline, an observer could see the entire infinite past of another worldline. Naturally, this would allow that observer to solve not only NP-complete problems but the halting problem as well. It is known that these spacetimes cannot be globally hyperbolic; for example, they could have "naked singularities," which are points at which general relativity no longer yields predictions. But to me, the mere existence of such singularities is a relatively minor problem, since there is evidence today that they really can form in classical general relativity (see [68] for a survey).

The real problem is the Planck scale. By combining three physical constants—Planck's constant ℏ ≈ 1.05 × 10⁻³⁴ m² kg s⁻¹, Newton's gravitational constant G ≈ 6.67 × 10⁻¹¹ m³ kg⁻¹ s⁻², and the speed of light c ≈ 3.00 × 10⁸ m s⁻¹—one can obtain a fundamental unit of length known as the Planck length:

    ℓ_P = √(ℏG/c³) ≈ 1.62 × 10⁻³⁵ m.

The physical interpretation of this length is that, if we tried to confine an object inside a sphere of diameter ℓ_P, then the object would acquire so much energy that it would collapse to form a black hole. For this reason, most physicists consider it meaningless to discuss lengths shorter than the Planck length, or times shorter than the corresponding Planck time ℓ_P/c ≈ 5.39 × 10⁻⁴⁴ s. They assume that, even if there do exist naked singularities, these are simply places where general

¹⁰ An interesting property of relativity is that it is always you who has to go somewhere or do something in these proposals, while the computer stays behind. Conversely, if you wanted more time to think about what to say next in a conversation, then your conversational partner is the one who would have to be placed in a spaceship.


relativity breaks down on length scales of order ℓ_P, and must be replaced by a quantum theory of gravity.

Indeed, Bekenstein [16] gave an upper bound on the total information content of any isolated, weakly gravitating physical system, by assuming the Second Law of Thermodynamics and then considering a thought experiment in which the system is slowly lowered into a black hole. Specifically, he showed that S ≤ 2πER, where S is the entropy of the system, or ln 2 times the number of bits of information; E is the system's gravitating energy; and R is the radius of the smallest sphere containing the system. Note that E and R are in Planck units. Since the energy of a system can be at most proportional to its radius (at least according to the widely-believed "hoop conjecture"), one corollary of Bekenstein's bound is the holographic bound: the information content of any region is at most proportional to the surface area of the region, at a rate of one bit per Planck length squared, or 1.4 × 10⁶⁹ bits per square meter. Bousso [23], whose survey paper on this subject is well worth reading by computer scientists, has reformulated the holographic bound in a generally covariant way, and marshaled a surprising amount of evidence for its validity.
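The numbers quoted above can be checked directly from the three constants; the sketch below (mine) recovers the Planck length, the Planck time, and the 1.4 × 10⁶⁹ bits-per-square-meter figure, taking the holographic bound in the form S ≤ A/4 in Planck units (nats), converted to bits:

```python
# Recover the figures quoted in the text from approximate constants:
# Planck length l_P = sqrt(hbar*G/c^3), Planck time l_P/c, and the
# holographic bound A/(4*l_P^2*ln 2) bits for a surface of area A.
import math

hbar = 1.0545718e-34   # m^2 kg / s
G = 6.674e-11          # m^3 / (kg s^2)
c = 2.998e8            # m / s

l_P = math.sqrt(hbar * G / c ** 3)
print(f"Planck length: {l_P:.3e} m")                              # ~1.62e-35
print(f"Planck time:   {l_P / c:.3e} s")                          # ~5.39e-44
print(f"bits per m^2:  {1 / (4 * l_P ** 2 * math.log(2)):.2e}")   # ~1.4e69
```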

Some physicists go even further, and maintain that space and time are literally discrete on the Planck scale. Of course, the discreteness could not be of the straightforward kind that occurs in cellular automata such as Conway's Game of Life, since that would fail to reproduce Lorentz or even Galilean invariance. Instead, it would be a more subtle, quantum-mechanical kind of discreteness, as appears for example in loop quantum gravity (see Section 7). But I should stress that the holographic bound itself, and the existence of a Planck scale at which classical ideas about space and time break down, are generic conclusions that stand independently of any specific quantum gravity theory.

The reason I have taken this detour into Planck-scale physics is that our current understanding seems to rule out, not only the Malament-Hogarth proposal, but all similar proposals for solving the halting problem in finite time. Yet in the literature on "hypercomputation" [28, 46], one still reads about machines that could bypass the "Turing barrier" by performing the first step of a computation in one second, the second in 1/2 second, the third in 1/4 second, and so on, so that after two seconds an infinite number of steps has been performed. Sometimes the proposed mechanism invokes Newtonian physics (ignoring even the finiteness of the speed of light), while other times it requires traveling arbitrarily close to a spacetime singularity. Surprisingly, in the papers that I encountered, the most common response to quantum effects was not to discuss them at all!

The closest I found to an account of physicality comes from Hogarth [46], who stages an interesting dialogue between a traditional computability theorist named Frank and a hypercomputing enthusiast named Isabel. After Isabel describes a type of spacetime that would support non-Turing computers, the following argument ensues:

Frank: Yes, but surely the spacetime underlying our universe is not like that. These solutions [to Einstein's equation] are just idealisations.

Isabel: That's beside the point. You don't want to rubbish a hypothetical computer—Turing or non-Turing—simply because it can't fit into our universe. If you do, you'll leave your precious Turing machine to the mercy of the cosmologists, because according to one of their theories, the universe and all it contains, will crunch to nothing in a few billion years. Your Turing machine would be cut-off in mid-calculation! [46, p. 15]

I believe that Isabel's analogy fails. For in principle, one can generally translate theorems about Turing machines into statements about what Turing computers could or could not do within


the time and space bounds of the physical universe.¹¹ By contrast, it is unclear if claims about hypercomputers have any relevance whatsoever to the physical universe. The reason is that, if the nth step of a hypercomputation took 2⁻ⁿ seconds, then it would take fewer than 150 steps to reach the Planck time.

In my view, the "foaminess" of space and time on the Planck scale also rules out approaches to NP-complete problems based on analog computing. (For present purposes, an analog computer is a machine that performs a discrete sequence of steps, but on unlimited-precision real numbers.) As an example of such an approach, in 1979 Schönhage [64] showed how to solve NP-complete and even PSPACE-complete problems in polynomial time, given the ability to compute x + y, x − y, xy, x/y, and ⌊x⌋ in a single time step for any two real numbers x and y ≠ 0. Intuitively, one can use the first 2^Θ(n) bits in a real number's binary expansion to encode an instance of the Quantified Boolean Formula problem, then use arithmetic operations to calculate the answer in parallel, and finally extract the binary result.¹² The problem, of course, is that unlimited-precision real numbers would violate the holographic entropy bound.
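Here is a toy illustration (mine, and emphatically not Schönhage's actual construction) of where the power comes from: a single exact real can carry 2^n bits, one arithmetic operation acts on all of them simultaneously, and the floor function reads them back out. Python's Fraction plays the role of the unlimited-precision real.

```python
# Pack exponentially many bits into one exact "real", shift them all with
# a single multiplication, and extract any one with a floor.
from fractions import Fraction

n = 5
bits = [(i * 37 % 3) % 2 for i in range(2 ** n)]        # arbitrary bit string
x = sum(Fraction(b, 2 ** (i + 1)) for i, b in enumerate(bits))  # 0.b0 b1 b2 ...

def bit(x, i):
    """Extract bit i via a shift and a floor -- the step that breaks down
    physically, since it demands exponentially precise operations."""
    return int(x * 2 ** (i + 1)) % 2                    # int() is floor for x >= 0

shifted = x * 2 - int(x * 2)        # one multiplication shifts all 2^n bits
assert all(bit(x, i) == bits[i] for i in range(2 ** n))
assert all(bit(shifted, i) == bits[i + 1] for i in range(2 ** n - 1))
print("packed", 2 ** n, "bits into one exact real; shifted all of them at once")
```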

7 Quantum Gravity

Here we enter a realm of dragons, where speculation abounds but concrete ideas about computation are elusive. The one clear result is due to Freedman, Kitaev, Larsen, and Wang [36, 37], who studied topological quantum field theories (TQFTs). These theories, which arose from the work of Witten and others in the 1980's, involve 2 spatial dimensions and 1 time dimension. Dropping from 3 to 2 dimensions might seem like a trivial change to a computer scientist, but it has the effect of making quantum gravity radically simpler; basically, the only degree of freedom is now the topology of the spacetime manifold, together with any "punctures" in that manifold. Surprisingly, Freedman et al. were able to define a model of computation based on TQFTs, and show that this model is equivalent to ordinary quantum computation: more precisely, all TQFTs can be simulated in BQP, and some TQFTs are universal for BQP. Unfortunately, the original papers on this discovery are all but impossible for a computer scientist to read, but Aharonov, Jones, and Landau [7] are currently working on a simplified presentation.

From what I understand, it remains open to analyze the computational complexity of (3 + 1)-dimensional quantum field theories even in flat spacetime. Part of the problem is that these theories are not mathematically rigorous: they have well-known infinities, which are swept under the rug via a process called renormalization. However, since the theories in some sense preserve quantum-mechanical unitarity, the expectation of physicists I have asked is that they will not lead to a model of computation more powerful than BQP.

The situation is different for speculative theories incorporating gravity, such as M-theory, the latest version of string theory. For these theories involve a notion of locality that is much more subtle than the usual one: in particular, the so-called AdS/CFT correspondence proposes that theories with gravity in d dimensions are somehow isomorphic to theories without gravity in d − 1 dimensions (see [19]). As a result, Preskill [61] has pointed out that even if M-theory remains based on standard quantum mechanics, it might allow the efficient implementation of

¹¹ As an example, Stockmeyer and Meyer [71] gave a simple problem in logic, such that solving instances of size 610 provably requires circuits with at least 10¹²⁵ gates.
¹² Note that the ability to apply the floor function (or equivalently, to access a specific bit in a real number's binary expansion) is essential here. If we drop that ability, then we obtain the beautiful theory of algebraic complexity [18, 26], which has its own P versus NP questions over the real and complex numbers. These questions are logically unrelated to the original P versus NP question so far as anyone knows—possibly they are easier.


unitary transformations that would require exponential time on an ordinary quantum computer. It would be interesting to develop this idea further.

String theory's main competitor is a theory called loop quantum gravity.¹³ Compared to string theory, loop quantum gravity has one feature that I find attractive as a computer scientist: it explicitly models spacetime as discrete and combinatorial on the Planck scale. In particular, one can represent the states in this theory by sums over "spin networks," which are undirected graphs with edges labeled by integers. The spin networks evolve via local operations called Pachner moves; a sequence of these moves is called a "spin foam." Then the amplitude for transitioning from spin network A to spin network B equals the sum, over all spin foams F going from A to B, of the amplitude of F. In a specific model known as the Riemannian¹⁴ Barrett-Crane model, this amplitude equals the product, over all Pachner moves in F, of an expression called a 10j symbol, which can be evaluated according to rules originally developed by Penrose [58].

Complicated, perhaps, but this seems like the stuff out of which a computational model could be made. So two years ago I spoke with Dan Christensen, a mathematician who along with Greg Egan gave an efficient algorithm [27] for calculating 10j symbols that has been crucial in the numerical study of spin foams. I wanted to know whether one could define a complexity class BQGP (Bounded-Error Quantum Gravity Polynomial-Time) based on spin foams, and if so, how it compared to BQP. The first observation we made is that evaluating arbitrary spin networks (as opposed to 10j symbols) using Penrose's rules is #P-complete. This follows by a simple reduction from counting the number of edge 3-colorings of a trivalent planar graph, which was proven #P-complete by Vertigan and Welsh [76].
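For concreteness, the following brute-force count (mine) computes the quantity in question, proper edge 3-colorings, for the small trivalent planar graph K4; of course brute force takes time exponential in the number of edges, which is the point:

```python
# Count proper edge 3-colorings of K4 (every vertex has degree 3): at each
# vertex, the three incident edges must receive three distinct colors.
from itertools import product

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]   # K4

def proper(coloring):
    for v in range(4):
        incident = [c for (a, b), c in zip(edges, coloring) if v in (a, b)]
        if len(set(incident)) != len(incident):
            return False
    return True

count = sum(proper(col) for col in product(range(3), repeat=len(edges)))
print("proper edge 3-colorings of K4:", count)   # 6 = 3! matchings-to-colors
```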

But what about simulating the dynamics of (say) the Barrett-Crane model? Here we quickly ran into problems: for example, in summing over all spin foams between two spin networks, should one impose an upper bound on the number of Pachner moves, and if so, what? Also, supposing we could compute amplitudes for transitioning from one spin network to another, what would these numbers represent? If they are supposed to be analogous to transition amplitudes in ordinary quantum mechanics, then how do we normalize them so that probabilities sum to unity? In the quantum gravity literature, issues such as these are still not settled.¹⁵

In the early days of quantum mechanics, there was much confusion about the operational meaning of the wavefunction. (Even in Born's celebrated 1926 paper [22], the idea that one has to square amplitudes to get probabilities only appeared in a footnote added in press!) Similarly, Einstein struggled for years to extract testable physics from a theory in which any coordinate system is as valid as any other. So maybe it is no surprise that, while today's quantum gravity researchers can write down equations, they are still debating what seem to an outsider like extremely basic questions about what the equations mean. The trouble is that these questions are exactly the ones we need answered, if we want to formulate a model of computation! Indeed, to anyone who wants a test or benchmark for a favorite quantum gravity theory,¹⁶ let me humbly propose the following: can you define Quantum Gravity Polynomial-Time?

A possible first step would be to define time. For in many quantum gravity theories, there is not even a notion of objects evolving dynamically in time: instead there is just a static spacetime

¹³ If some physicist wants to continue the tradition of naming quantum gravity theories using monosyllabic words for elongated objects that mean something completely different in computer science, then I propose the most revolutionary advance yet: thread theory.

¹⁴ Here "Riemannian" means not taking into account that time is different from space. There is also a Lorentzian Barrett-Crane model, but it is considerably more involved.

¹⁵ If the normalization were done "manually," then presumably one could solve NP-complete problems in polynomial time using postselection (see Section 9). This seems implausible.

¹⁶ That is, one without all the bother of making numerical predictions and comparing them to observation.


Figure 2: Deutsch's causal consistency model consists of chronology-respecting qubits in Hilbert space H_A, and CTC qubits in Hilbert space H_B whose quantum state must be invariant under U.

manifold, subject to a constraint such as the Wheeler-DeWitt equation HΨ = 0. In classical general relativity, at least we could carve the universe into spacelike slices if we wanted to, and assign a local time to any given observer! But how do we do either of those if the spacetime metric itself is in quantum superposition? Regulars call this the "problem of time" (see [70] for a fascinating discussion). The point I wish to make is that, until this and the other conceptual problems have been clarified—until we can say what it means for a user to specify an input and later receive an output—there is no such thing as computation, not even theoretically.

8 Time Travel Computing

Having just asserted that a concept of time something like the usual one is needed even to define computation, I am now going to disregard that principle, and discuss computational models that exploit closed timelike curves (CTCs). The idea was well explained by the movie Star Trek IV: The Voyage Home. The Enterprise crew has traveled back in time to the present (meaning to 1986) in order to find humpback whales and bring them into the twenty-third century. The problem is that building a tank to transport the whales requires a type of plexiglass that has not yet been invented. In desperation, the crew seeks out the company that will invent the plexiglass, and reveals its molecular formula to that company. The question is, where did the work of inventing the formula take place?

In a classic paper on CTCs, Deutsch [29] observes that, in contrast to the much better-known "grandfather paradox," the "knowledge creation paradox" involves no logical contradiction. The only paradox is a complexity-theoretic one: a difficult computation somehow gets performed, yet without the expected resources being devoted to it. Deutsch goes further, and argues that this is the paradox of time travel, the other ones vanishing once quantum mechanics is taken into account. The idea is this: consider a unitary matrix U acting on the Hilbert space H_A ⊗ H_B, where H_A consists of "chronology-respecting" qubits and H_B consists of "closed timelike curve" qubits (see Figure 2). Then one can show that there always exists a mixed quantum state ρ of the H_B qubits, such that if we start with |0...0⟩ in H_A and ρ in H_B, apply U, and then trace out H_A, the resulting state in H_B is again ρ. Deutsch calls this requirement causal consistency. What it means is that ρ is a fixed point of the superoperator¹⁷ acting on H_B, so we can take it to be both the input and output of the CTC.

¹⁷ A superoperator is a generalization of a unitary matrix that can include interaction with ancilla qubits, and therefore need not be reversible.


Strictly speaking, Deutsch's idea does not depend on quantum mechanics; we could equally well say that any Markov chain has a stationary distribution. In both the classical and quantum cases, the resolution of the grandfather paradox is then that you are born with 1/2 probability, and if you are born you go back in time to kill your grandfather, from which it follows that you are born with 1/2 probability, and so on.
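For readers who like to see the fixed point computed, here is a toy numerical check (my sketch, not from the paper) of the grandfather paradox as a two-state Markov chain; the stationary distribution is exactly the causally consistent resolution:

```python
import numpy as np

# Two states: 0 = "you are not born", 1 = "you are born".
# If you are born, you go back and kill your grandfather (next state 0);
# if you are not born, the grandfather lives (next state 1).
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# A stationary distribution pi satisfies pi P = pi, i.e. it is a left
# eigenvector of P with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
print(pi / pi.sum())   # -> [0.5 0.5]: you are born with probability 1/2
```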

One advantage of this resolution is that it immediately suggests a model of computation. For simplicity, let us first consider the classical case, and assume all bits go around the CTC (this assumption will turn out not to matter for complexity purposes). Then the model is the following: first the user specifies as input a polynomial-size circuit $C : \{0,1\}^n \rightarrow \{0,1\}^n$. Then Nature chooses a probability distribution $D$ over $\{0,1\}^n$ that is left invariant by $C$. Finally, the user receives as output a sample $x$ from $D$, which can be used as the basis for further computation. An obvious question is, if there is more than one stationary distribution $D$, then which one does Nature choose? The answer turns out to be irrelevant, since we can construct circuits $C$ such that a sample from any stationary distribution could be used to solve NP-complete or even PSPACE-complete problems in polynomial time.

The circuit for NP-complete problems is simple: given a Boolean formula $\varphi$, let $C_\varphi(x) = x$ if $x$ is a satisfying assignment for $\varphi$, and $C_\varphi(x) = x + 1$ otherwise, where $x$ is considered as an $n$-bit integer and the addition is mod $2^n$. Then provided $\varphi$ has any satisfying assignments at all, the only stationary distributions of $C_\varphi$ will be the singleton distributions concentrated on those assignments.
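Here is a small Python simulation (an illustrative sketch of mine, with a hypothetical clause encoding, not code from the paper) of $C_\varphi$ on a toy formula. Following the trajectory of $C_\varphi$ until it enters a cycle plays the role of Nature handing us a stationary sample:

```python
# Satisfying assignments are fixed points of C_phi; everything else increments
# mod 2^n, so every stationary distribution sits on a satisfying assignment
# (when one exists).

def satisfies(bits, clauses):
    # Each clause is a list of nonzero ints: +i means x_i, -i means NOT x_i.
    return all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
               for clause in clauses)

def C(x, n, clauses):
    bits = [bool((x >> i) & 1) for i in range(n)]
    return x if satisfies(bits, clauses) else (x + 1) % (2 ** n)

def stationary_sample(n, clauses, start=0):
    x = start
    for _ in range(2 ** n):      # after 2^n steps we are certainly on a cycle
        x = C(x, n, clauses)
    return x

clauses = [[1, 2], [-1, 3]]      # (x1 OR x2) AND (NOT x1 OR x3)
x = stationary_sample(3, clauses)
print([(x >> i) & 1 for i in range(3)])   # a satisfying assignment
```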

I am indebted to Lance Fortnow for coming up with a time travel circuit for the more general case of PSPACE-complete problems. Let $M_1, \ldots, M_T$ be the successive configurations of a PSPACE machine $M$. Then our circuit $C$ will take as input a machine configuration $M_t$ together with a bit $i \in \{0,1\}$. The circuit does the following: if $t < T$, then $C$ maps each $(M_t, i)$ to $(M_{t+1}, i)$. Otherwise, if $t = T$, then $C$ maps $(M_T, i)$ to $(M_1, 0)$ if $M_T$ is a rejecting state, or $(M_T, i)$ to $(M_1, 1)$ if $M_T$ is an accepting state. Notice that if $M$ accepts, then the only stationary distribution of $C$ is the uniform distribution over the cycle $\{(M_1, 1), \ldots, (M_T, 1)\}$. On the other hand, if $M$ rejects, then the only stationary distribution is uniform over $\{(M_1, 0), \ldots, (M_T, 0)\}$. So in either case, measuring $i$ yields the desired output.
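A toy rendering of Fortnow's circuit (my abstraction: each configuration $M_t$ is represented by its index $t$, and the machine's verdict by a flag) shows how the unique cycle carries the answer bit $i$:

```python
# Sketch: states are pairs (t, i). The only cycle of this map is
# {(1,b), ..., (T,b)} where b = 1 iff the machine accepts.

def fortnow_circuit(T, accepts):
    def C(state):
        t, i = state
        if t < T:
            return (t + 1, i)                # advance the computation, carry i
        return (1, 1 if accepts else 0)      # wrap around, overwrite i with verdict
    return C

def read_answer(C, T):
    state = (1, 0)
    for _ in range(2 * T):                   # enough steps to land on the cycle
        state = C(state)
    return state[1]                          # "measuring i" on the cycle

print(read_answer(fortnow_circuit(5, accepts=True), 5))   # -> 1
print(read_answer(fortnow_circuit(5, accepts=False), 5))  # -> 0
```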

Conversely, it is easy to see that a PSPACE machine can sample from some stationary distribution of $C$. For the problem reduces to finding a cycle in the exponentially large graph of the function $C : \{0,1\}^n \rightarrow \{0,1\}^n$, and then choosing a uniformly random vertex from that cycle. The same idea works even if not all $n$ of the bits go around the CTC. It follows that PSPACE exactly characterizes the classical computational complexity of time travel, if we assume Deutsch's causal consistency requirement.

But what about the quantum complexity of time travel? The model is as follows: first the user specifies a polynomial-size quantum circuit $C$ acting on $\mathcal{H}_A \otimes \mathcal{H}_B$; then Nature adversarially chooses a mixed state $\rho$ such that $\mathrm{Tr}_A\left[C\left(|0 \cdots 0\rangle\langle 0 \cdots 0| \otimes \rho\right)\right] = \rho$, where $\mathrm{Tr}_A$ denotes the partial trace over $\mathcal{H}_A$; and finally the user can perform an arbitrary BQP computation on $\rho$. Let BQP_CTC be the class of problems solvable in this model. Then it is easy to see that BQP_CTC contains PSPACE, since we can simulate the classical time travel circuit for PSPACE using a quantum circuit. On the other hand, the best upper bound I know of on BQP_CTC is a class called SQG (Short Quantum Games), which was defined by Gutoski and Watrous [44] and which generalizes QIP (the class of problems having quantum interactive proof protocols). Note that QIP contains but is not known to equal PSPACE. Proving that BQP_CTC ⊆ SQG, and hopefully improving on that result to pin down the power of BQP_CTC exactly, are left as exercises for the reader.
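To see causal consistency in action numerically, here is a sketch of mine (assuming numpy; not from the paper) that computes a fixed point $\rho$ of the induced superoperator for a random two-qubit $U$:

```python
import numpy as np

def ctc_fixed_point(U, dA, dB):
    """Return rho satisfying Tr_A[ U (|0><0| x rho) U^dag ] = rho."""
    proj0 = np.zeros((dA, dA)); proj0[0, 0] = 1.0      # |0><0| on H_A
    S = np.zeros((dB * dB, dB * dB), dtype=complex)    # matrix of the superoperator
    for k in range(dB * dB):
        E = np.zeros((dB, dB), dtype=complex)
        E[k // dB, k % dB] = 1.0                       # basis "matrix unit"
        out = (U @ np.kron(proj0, E) @ U.conj().T).reshape(dA, dB, dA, dB)
        S[:, k] = np.trace(out, axis1=0, axis2=2).reshape(-1)  # trace out H_A
    vals, vecs = np.linalg.eig(S)
    rho = vecs[:, np.argmin(np.abs(vals - 1))].reshape(dB, dB)  # eigenvalue-1 vector
    rho = rho / np.trace(rho)                          # fix normalization and phase
    return (rho + rho.conj().T) / 2                    # clean up numerical noise

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)                                 # a random two-qubit unitary
rho = ctc_fixed_point(U, dA=2, dB=2)
print(np.round(rho, 3))                                # a valid CTC state, trace 1
```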


8.1 The Algorithms of Bacon and Brun

My goal above was to explore the computational power of time travel in a clear, precise, complexity-theoretic way. However, there are several other perspectives on time travel computing; two were developed by Bacon [10] and Brun [25].

I assumed before that we have access to only one CTC, but can send a polynomial number of bits (or qubits) around that CTC. Bacon considers a different model, in which we might be able to send only one bit around a CTC, but can use a polynomial number of CTCs. It is difficult to say which model is the more reasonable!

Like me, Bacon assumes Deutsch's causal consistency requirement. Bacon's main observation is that, by using a CTC, we could implement a 2-qubit gate similar to the nonlinear gates of Abrams and Lloyd [6], and could then use this gate to solve NP-complete problems in polynomial time. Even though Bacon's gate construction is quantum, the idea can be described just as well using classical probabilities. Here is how it works: we start with a chronology-respecting bit $x$, as well as a CTC bit $y$. Then a 2-bit gate $G$ maps $x$ to $x \oplus y$ (where $\oplus$ denotes exclusive OR) and $y$ to $x$. Let $p = \Pr[x = 1]$ and $q = \Pr[y = 1]$; then causal consistency around the CTC implies that $p = q$. So after we apply $G$, the chronology-respecting bit will be 1 with probability

$$p' = \Pr[x \oplus y = 1] = p(1 - q) + q(1 - p) = 2p(1 - p).$$

Notice that if $p = 0$ then $p' = 0$, while if $p$ is nonzero but sufficiently small then $p' \approx 2p$. It follows that, by applying the gate $G$ a polynomial number of times, we can distinguish a bit that is 0 with certainty from a bit that is 1 with positive but exponentially small probability. Clearly such an ability would let us solve NP-complete problems efficiently.^18 To me, however, the most interesting aspect of Bacon's paper is that he shows how standard quantum error-correction methods could be applied to a quantum computer with CTCs, in order to make his algorithm for solving NP-complete problems resilient against the same sort of noise that plagues ordinary quantum computers. This seems to be much easier with CTC quantum computers than with nonlinear quantum computers as studied by Abrams and Lloyd. The reason is that CTCs create nonlinearity automatically; one does not need to build it in using unreliable gates.
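A quick numerical illustration of the amplification (my sketch): iterating the map $p \mapsto 2p(1-p)$ boosts an exponentially small probability to a constant in linearly many rounds, while leaving $p = 0$ untouched:

```python
# Iterate Bacon's CTC-induced map p -> 2p(1-p). While p is tiny it roughly
# doubles each round; the map's nonzero fixed point is 1/2.

def amplify(p, rounds):
    for _ in range(rounds):
        p = 2 * p * (1 - p)
    return p

p0 = 2.0 ** -50          # e.g., chance of guessing a satisfying assignment
print(amplify(0.0, 60))  # -> 0.0 (the "unsatisfiable" case stays at zero)
print(amplify(p0, 60))   # -> roughly 0.5 (the "satisfiable" case is amplified)
```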

Brun [25] does not specify a precise model for time travel computing, but from his examples, I gather that it involves a program computing a partial result and then sending it back in time to the beginning of the program, whereupon another partial result is computed, and so on. By appealing to the need for a self-consistent outcome, Brun argues that NP-complete as well as PSPACE-complete problems are solvable in polynomial time using this approach. As pointed out by Bacon [10], one difficulty is that it is possible to write programs for which there is no self-consistent outcome, or rather, no deterministic one. I also could not verify Brun's claim to solve a PSPACE-complete problem (namely Quantified Boolean Formulas) in polynomial time. Indeed, since deciding whether a polynomial-time program has a deterministic self-consistent outcome is in NP, it would seem that PSPACE-complete problems cannot be solvable in this model unless NP = PSPACE.
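The NP upper bound is easy to see in code (a minimal sketch of mine): a certificate is simply a candidate outcome, and the verifier runs the polynomial-time program once to check self-consistency:

```python
# A deterministic self-consistent outcome of a program f is a fixed point
# f(x) == x, which can be verified in one run of f.

def verify_self_consistent(program, x):
    return program(x) == x

# Grandfather-paradox-style program: no deterministic fixed point exists.
print(verify_self_consistent(lambda b: 1 - b, 0))   # False
print(verify_self_consistent(lambda b: 1 - b, 1))   # False
# Knowledge-paradox-style program: any value at all is self-consistent.
print(verify_self_consistent(lambda s: s, "plexiglass formula"))  # True
```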

Throughout this section, I have avoided obvious questions about the physicality of closed timelike curves. It is not hard to see that CTCs would have many of the same physical effects as nonlinearities in quantum mechanics: they would allow superluminal signalling, the violation of Heisenberg's uncertainty principle, and so on. As pointed out to me by Daniel Gottesman, there are also fundamental ambiguities in explaining what happens if half of an entangled quantum state is sent around a CTC, and the other half remains in a chronology-respecting region of spacetime.

18Indeed, in the quantum case one could also solve #P-complete problems, using the same trick as with Abrams and Lloyd's nonlinear gates.


9 Anthropic Computing

There is at least one foolproof way to solve 3SAT in polynomial time: given a formula $\varphi$, guess a random assignment $x$, then kill yourself if $x$ does not satisfy $\varphi$. Conditioned on looking at anything at all, you will be looking at a satisfying assignment! Some would argue that this algorithm works even better if we assume the many-worlds interpretation of quantum mechanics. For according to that interpretation, with probability 1, there really is a universe in which you guess a satisfying assignment and therefore remain alive. Admittedly, if $\varphi$ is unsatisfiable, you might be out of luck. But this is a technicality: to fix it, simply guess a random assignment with probability $1 - 2^{-2n}$, and do nothing with probability $2^{-2n}$. If, after the algorithm is finished, you find that you have not done anything, then it is overwhelmingly likely that $\varphi$ is unsatisfiable, since otherwise you would have found yourself in one of the universes where you guessed a satisfying assignment.
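Classically, we can mimic the anthropic conditioning by rejection sampling; here is a toy sketch of mine (not from the paper) in which only "surviving" runs are kept:

```python
import random

def satisfies(x, clauses):
    return all(any((lit > 0) == x[abs(lit) - 1] for lit in clause)
               for clause in clauses)

def anthropic_sat(clauses, n, trials=100_000):
    survivors = []
    for _ in range(trials):
        if random.random() < 2.0 ** (-2 * n):
            survivors.append("did nothing")        # the rare do-nothing branch
            continue
        x = [random.random() < 0.5 for _ in range(n)]
        if satisfies(x, clauses):
            survivors.append(x)                    # every other branch "dies"
    return survivors

# (x1 OR x2) AND (NOT x1 OR x3) is satisfiable, so survivors are almost all
# satisfying assignments; for an unsatisfiable formula, every survivor would
# read "did nothing".
print(anthropic_sat([[1, 2], [-1, 3]], 3)[:3])
```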

I propose the term anthropic computing for any model of computation in which the probability of one's own existence might depend on a computer's output. The name comes from the anthropic principle in cosmology, which states that certain things are the way they are because if they were different, then we would not be here to ask the question. Just as the anthropic principle raises difficult questions about the nature of scientific explanations, so anthropic computing raises similar questions about the nature of computation. For example, in formulating a model of computation, should we treat the user who picks an input $x$ as an unanalyzed, godlike entity, or as part of the computational process itself?^19

The surprising part is that anthropic computing leads not only to philosophical questions, but to nontrivial technical questions as well. For example, while it is obvious that we could solve NP-complete problems in polynomial time using anthropic postselection, could we do even more? Classically, it turns out that we could solve exactly the problems in a class called BPP_path, which was defined by Han, Hemaspaandra, and Thierauf [45] and which sits somewhere between MA and BPP^NP. The exact power of BPP_path relative to more standard classes is still unknown. Also, in a recent paper [5] I defined a quantum analogue of BPP_path called PostBQP. This class consists of all problems solvable in quantum polynomial time, given the ability to measure a qubit with a nonzero probability of being $|1\rangle$ and postselect on the measurement outcome being $|1\rangle$. I then showed that PostBQP = PP, and used this fact to give a simple, quantum computing based proof of Beigel, Reingold, and Spielman's celebrated result [15] that PP is closed under intersection.

    10 Discussion

Many of the deepest principles in physics are impossibility statements: for example, no superluminal signalling and no perpetual motion machines. What intrigues me is that there is a two-way relationship between these principles and proposed counterexamples to them. On the one hand, every time a proposed counterexample fails, it increases our confidence that the principles are really correct, especially if the counterexamples almost work but not quite. (Think of Maxwell's Demon, or of the subtle distinction between quantum nonlocality and superluminal communication.) On the other hand, as we become more confident of the principles, we also become more willing to use them to constrain the search for new physical theories. Sometimes this can lead to breakthroughs: for example, Bekenstein [16] discovered black hole entropy just by taking seriously the impossibility of entropy decrease.

So, should the NP Hardness Assumption (loosely speaking, that NP-complete problems are intractable in the physical world) eventually be seen as a principle of physics? In my view, the

    19The same question is also asked in the much more prosaic setting of average-case complexity [54].


answer ought to depend on (1) whether there is good evidence for the assumption, and (2) whether accepting it places interesting constraints on new physical theories. Regarding (1), we have seen that special relativity and quantum mechanics tend to support the assumption: there are plausible-sounding arguments for why these theories should let us solve NP-complete problems efficiently, and yet they do not, at least in the black-box model. For the arguments turn out to founder on nontrivial facts about physics: the energy needed to accelerate to relativistic speed in one case, and the linearity of quantum mechanics in the other. As for (2), if we accept the NP Hardness Assumption, then presumably we should also accept the following:

• There are no nonlinear corrections to the Schrödinger equation, not even (for example) at a black hole singularity.^20

• There are no closed timelike curves.

• Real numbers cannot be stored with unlimited precision (so in particular, there should be a finite upper bound on the entropy of a bounded physical system).

• No version of the anthropic principle that allows arbitrary conditioning on the fact of one's own existence can be valid.

These are not Earth-shaking implications, but neither are they entirely obvious.

Let me end this article by mentioning three objections that could be raised against the NP Hardness Assumption. The first is that the assumption is ill-defined: what, after all, does it mean to solve NP-complete problems "efficiently"? To me this seems like the weakest objection, since it is difficult to think of a claim about physical reality that is more operational. Most physical assertions come loaded with enough presuppositions to keep philosophers busy for decades, but the NP Hardness Assumption does not even presuppose the existence of matter or space. Instead it refers directly to information: an input that you, the experimenter, freely choose at time $t_0$,^21 and an output that you receive at a later time $t_1$. The only additional concepts needed are those of probability (in case of randomized algorithms), and of waiting for a given proper time $t_1 - t_0$. Naturally, it helps if there exists a being at $t_1$ whom we can identify as the time-evolved version of the you who chose the input at $t_0$!

But what about the oft-repeated claim that asymptotic statements have no relevance for physical reality? This claim has never impressed me. For me, the statement "Max Clique requires exponential time" is simply shorthand for a large class of statements involving reasonable instance sizes (say $10^8$) but astronomical lengths of time (say $10^{80}$ seconds). If the complexity of the maximum clique problem turned out, implausibly, to be $1.000000001^n$ or $n^{10000}$, then so much the worse for the shorthand; the finite statements are what we actually cared about anyway. With this in mind, we can formulate the NP Hardness Assumption concretely as follows: "Given an undirected graph $G$ with $10^8$ vertices, there is no physical procedure by which you can decide in general whether $G$ has a clique of size $10^7$, with probability at least 2/3 and after at most $10^{80}$ seconds as experienced by you."

The second objection is that, even if the NP Hardness Assumption can be formulated precisely, it is unlike any other physical principle we know. How could a statement that refers not to

20Horowitz and Maldacena [48] recently proposed such a modification as a way to resolve the black hole information loss paradox. See also a comment by Gottesman and Preskill [42].

21Of course, your free will to choose an input is no different in philosophical terms from an experimenter's free will to choose the initial conditions in Newtonian mechanics. In both cases, we have a claim about an infinity of possible situations, most of which will never occur.


[Figure 3 diagram: two identical panels, each labeled with PSPACE, #P, NP, and Inverting One-Way Functions on one side, and Graph Isomorphism, Factoring, and P on the other.]

Figure 3: My intuitive map of the complexity universe, showing a much larger gap between "structured" and "unstructured" problems than within either category. Needless to say, this map does not correspond to anything rigorous.

the flat-out impossibility of a task, but just to its probably taking a long time, reflect something fundamental about physics? On further reflection, though, the Second Law of Thermodynamics has the same character. The usual $n$ particles in a box will eventually cluster on one side; it will just take expected time exponential in $n$. Admittedly there is one difference: while the Second Law rests on an elementary fact about statistics, the NP Hardness Assumption rests on some of the deepest conjectures ever made, in the sense that it could be falsified by a purely mathematical discovery such as a proof that P = NP. So as a heuristic, it might be helpful to split the Assumption into a mathematical component (P ≠ NP, NP ⊄ BQP, and so on), and a physical component (there is no physical mechanism that achieves an exponential speedup for black-box search).

The third objection is the most interesting one: why NP? Why not PSPACE or #P or Graph Isomorphism? More to the point, why not assume factoring is physically intractable, thereby ruling out even garden-variety quantum computers? My answer is contained in the intuitive map shown in Figure 3. I will argue that, while a fast algorithm for graph isomorphism would be a mathematical breakthrough, a fast algorithm for inverting one-way functions, breaking pseudorandom generators, or related problems^22 would be an almost metaphysical breakthrough.

Even many computer scientists do not seem to appreciate how different the world would be if we could solve NP-complete problems efficiently. I have heard it said, with a straight face, that a proof of P = NP would be important because it would let airlines schedule their flights better, or shipping companies pack more boxes in their trucks! One person who did understand was Gödel. In his celebrated 1956 letter to von Neumann (see [69]), in which he first raised the P versus NP question, Gödel says that a linear or quadratic-time procedure for what we now call NP-complete problems would have "consequences of the greatest magnitude." For such a procedure "would clearly indicate that, despite the unsolvability of the Entscheidungsproblem, the mental effort of the mathematician in the case of yes-or-no questions could be completely replaced by machines."

But it would indicate even more. If such a procedure existed, then we could quickly find the smallest Boolean circuits that output (say) a table of historical stock market data, or the human genome, or the complete works of Shakespeare. It seems entirely conceivable that, by analyzing these circuits, we could make an easy fortune on Wall Street, or retrace evolution, or even generate Shakespeare's 38th play. For broadly speaking, that which we can compress we can understand,

22Strictly speaking, these problems are almost NP-complete; it is an open problem whether they are complete under sufficiently strong reductions. Both problems are closely related to approximating the Kolmogorov complexity of a string or the circuit complexity of a Boolean function [8, 51].


and that which we can understand we can predict. Indeed, in a recent book [12], Eric Baum argues that much of what we call "insight" or "intelligence" simply means finding succinct representations for our sense data. On his view, the human mind is largely a bundle of hacks and heuristics for this succinct-representation problem, cobbled together over a billion years of evolution. So if we could solve the general case (if knowing something was tantamount to knowing the shortest efficient description of it), then we would be almost like gods. The NP Hardness Assumption is the belief that such power will be forever beyond our reach.

    11 Acknowledgments

I thank Al Aho, Pioter Drubetskoy, Daniel Gottesman, Klas Markström, David Poulin, John Preskill, and others who I have undoubtedly forgotten for enlightening conversations about the subject of this article. I especially thank Dave Bacon, Dan Christensen, and Antony Valentini for critiquing a draft, and Lane Hemaspaandra for pestering me to finish the damn thing.

    References

[1] S. Aaronson. Quantum lower bound for the collision problem. In Proc. ACM STOC, pages 635-642, 2002. quant-ph/0111102.

[2] S. Aaronson. Limitations of quantum advice and one-way communication. Theory of Computing, 2004. To appear. Conference version in Proc. IEEE Complexity 2004, pp. 320-332. quant-ph/0402095.

[3] S. Aaronson. Limits on Efficient Computation in the Physical World. PhD thesis, University of California, Berkeley, 2004.

[4] S. Aaronson. Quantum computing and hidden variables. Accepted to Phys. Rev. A. quant-ph/0408035 and quant-ph/0408119, 2004.

[5] S. Aaronson. Quantum computing, postselection, and probabilistic polynomial-time. Submitted. quant-ph/0412187, 2004.

[6] D. S. Abrams and S. Lloyd. Nonlinear quantum mechanics implies polynomial-time solution for NP-complete and #P problems. Phys. Rev. Lett., 81:3992-3995, 1998. quant-ph/9801041.

[7] D. Aharonov, V. Jones, and Z. Landau. On the quantum algorithm for approximating the Jones polynomial. Unpublished, 2005.

[8] E. Allender, H. Buhrman, and M. Koucký. What can be efficiently reduced to the Kolmogorov-random strings? In Proc. Intl. Symp. on Theoretical Aspects of Computer Science (STACS), pages 584-595, 2004. To appear in Annals of Pure and Applied Logic.

[9] A. Ambainis. Quantum lower bounds by quantum arguments. J. Comput. Sys. Sci., 64:750-767, 2002. Earlier version in ACM STOC 2000. quant-ph/0002066.

[10] D. Bacon. Quantum computational complexity in the presence of closed timelike curves. quant-ph/0309189, 2003.

[11] T. Baker, J. Gill, and R. Solovay. Relativizations of the P=?NP question. SIAM J. Comput., 4:431-442, 1975.

[12] E. B. Baum. What Is Thought? Bradford Books, 2004.

[13] R. Beals, H. Buhrman, R. Cleve, M. Mosca, and R. de Wolf. Quantum lower bounds by polynomials. J. ACM, 48(4):778-797, 2001. Earlier version in IEEE FOCS 1998, pp. 352-361. quant-ph/9802049.

[14] J. E. Beasley. OR-Library (test data sets for operations research problems), 1990. At www.brunel.ac.uk/depts/ma/research/jeb/info.html.

[15] R. Beigel, N. Reingold, and D. Spielman. PP is closed under intersection. J. Comput. Sys. Sci., 50(2):191-202, 1995.

[16] J. D. Bekenstein. A universal upper bound on the entropy to energy ratio for bounded systems. Phys. Rev. D, 23(2):287-298, 1981.

[17] C. Bennett, E. Bernstein, G. Brassard, and U. Vazirani. Strengths and weaknesses of quantum computing. SIAM J. Comput., 26(5):1510-1523, 1997. quant-ph/9701001.

[18] L. Blum, F. Cucker, M. Shub, and S. Smale. Complexity and Real Computation. Springer-Verlag, 1997.

[19] J. de Boer. Introduction to the AdS/CFT correspondence. University of Amsterdam Institute for Theoretical Physics (ITFA) Technical Report 03-02, 2003.

[20] D. Bohm. A suggested interpretation of the quantum theory in terms of hidden variables. Phys. Rev., 85:166-193, 1952.

[21] R. B. Boppana, J. Håstad, and S. Zachos. Does co-NP have short interactive proofs? Inform. Proc. Lett., 25:127-132, 1987.

[22] M. Born. Zur Quantenmechanik der Stoßvorgänge. Zeitschrift für Physik, 37:863-867, 1926. English translation in Quantum Theory and Measurement (J. A. Wheeler and W. H. Zurek, eds.), Princeton, 1983, pp. 52-55.

[23] R. Bousso. The holographic principle. Reviews of Modern Physics, 74(3), 2002. hep-th/0203101.

[24] S. Bringsjord and J. Taylor. P=NP. 2004. cs.CC/0406056.

[25] T. Brun. Computers with closed timelike curves can solve hard problems. Foundations of Physics Letters, 16:245-253, 2003. gr-qc/0209061.

[26] P. Bürgisser, M. Clausen, and M. A. Shokrollahi. Algebraic Complexity Theory. Springer-Verlag, 1997.

[27] J. D. Christensen and G. Egan. An efficient algorithm for the Riemannian 10j symbols. Classical and Quantum Gravity, 19:1184-1193, 2002. gr-qc/0110045.

[28] J. Copeland. Hypercomputation. Minds and Machines, 12:461-502, 2002.

[29] D. Deutsch. Quantum mechanics near closed timelike lines. Phys. Rev. D, 44:3197-3217, 1991.

[30] G. Egan. Quarantine: A Novel of Quantum Catastrophe. Eos, 1995. First printing 1992.

[31] E. Farhi, J. Goldstone, and S. Gutmann. Quantum adiabatic evolution algorithms versus simulated annealing. quant-ph/0201031, 2002.

[32] E. Farhi, J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, and D. Preda. A quantum adiabatic evolution algorithm applied to random instances of an NP-complete problem. Science, 292:472-476, 2001. quant-ph/0104129.

[33] C. Feinstein. Post to comp.theory newsgroup on July 8, 2004.

[34] C. Feinstein. Evidence that P is not equal to NP. cs.CC/0310060, 2003.

[35] L. Fortnow. One complexity theorist's view of quantum computing. Theoretical Comput. Sci., 292(3):597-610, 2003.

[36] M. Freedman, A. Kitaev, and Z. Wang. Simulation of topological quantum field theories by quantum computers. Commun. Math. Phys., 227:587-603, 2002. quant-ph/0001071.

[37] M. Freedman, M. Larsen, and Z. Wang. A modular functor which is universal for quantum computation. Commun. Math. Phys., 227:605-622, 2002. quant-ph/0001108.

[38] M. R. Garey, R. L. Graham, and D. S. Johnson. Some NP-complete geometric problems. In Proc. ACM STOC, pages 10-22, 1976.

[39] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, 1979.

[40] W. Gasarch. The P=?NP poll. SIGACT News, 33(2):34-47, June 2002.

[41] N. Gisin. Weinberg's non-linear quantum mechanics and superluminal communications. Phys. Lett. A, 143:1-2, 1990.

[42] D. Gottesman and J. Preskill. Comment on "The black hole final state". J. High Energy Phys., (0403:026), 2004. hep-th/0311269.

[43] L. K. Grover. A fast quantum mechanical algorithm for database search. In Proc. ACM STOC, pages 212-219, 1996. quant-ph/9605043.

[44] G. Gutoski and J. Watrous. Quantum interactive proofs with competing provers. To appear in STACS 2005. cs.CC/0412102, 2004.

[45] Y. Han, L. Hemaspaandra, and T. Thierauf. Threshold computation and cryptographic security. SIAM J. Comput., 26(1):59-78, 1997.

[46] M. Hogarth. Non-Turing computers and non-Turing computability. Biennial Meeting of the Philosophy of Science Association, 1:126-138, 1994.

[47] J. Horgan. The End of Science. Helix Books, 1997.

[48] G. Horowitz and J. Maldacena. The black hole final state. J. High Energy Phys., (0402:008), 2004. hep-th/0310281.

[49] R. Impagliazzo and A. Wigderson. P=BPP unless E has subexponential circuits: derandomizing the XOR Lemma. In Proc. ACM STOC, pages 220-229, 1997.

[50] Clay Math Institute. Millennium prize problems, 2000. www.claymath.org/millennium/.

[51] V. Kabanets and J.-Y. Cai. Circuit minimization problem. In Proc. ACM STOC, pages 73-79, 2000. TR99-045.

[52] R. M. Karp and R. J. Lipton. Turing machines that take advice. Enseign. Math., 28:191-201, 1982.

[53] T. D. Kieu. Quantum algorithm for Hilbert's tenth problem. Intl. Journal of Theoretical Physics, 42:1461-1478, 2003. quant-ph/0110136.

[54] L. A. Levin. Average case complete problems. SIAM J. Comput., 15(1):285-286, 1986.

[55] L. A. Levin. Polynomial time and extravagant models, in The tale of one-way functions. Problems of Information Transmission, 39(1):92-103, 2003. cs.CR/0012023.

[56] G. Midrijanis. A polynomial quantum query lower bound for the set equality problem. In Proc. Intl. Colloquium on Automata, Languages, and Programming (ICALP), pages 996-1005, 2004. quant-ph/0401073.

[57] M. Nielsen and I. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.

[58] R. Penrose. Angular momentum: an approach to combinatorial spacetime. In T. Bastin, editor, Quantum Theory and Beyond. Cambridge, 1971.

[59] I. Pitowsky. The physical Church thesis and physical computational complexity. Iyyun, The Jerusalem Philosophical Quarterly, 39:81-99, 1990.

[60] J. Polchinski. Weinberg's nonlinear quantum mechanics and the Einstein-Podolsky-Rosen paradox. Phys. Rev. Lett., 66:397-400, 1991.

[61] J. Preskill. Quantum computation and the future of physics. Talk at UC Berkeley, May 10, 2002.

[62] A. A. Razborov and S. Rudich. Natural proofs. J. Comput. Sys. Sci., 55(1):24-35, 1997.

[63] B. Reichardt. The quantum adiabatic optimization algorithm and local minima. In Proc. ACM STOC, pages 502-510, 2004.

[64] A. Schönhage. On the power of random access machines. In Proc. Intl. Colloquium on Automata, Languages, and Programming (ICALP), pages 520-529, 1979.

[65] Y. Shi. Quantum lower bounds for the collision and the element distinctness problems. In Proc. IEEE FOCS, pages 513-519, 2002. quant-ph/0112086.

[66] P. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comput., 26(5):1484-1509, 1997. Earlier version in IEEE FOCS 1994. quant-ph/9508027.

[67] D. Simon. On the power of quantum computation. In Proc. IEEE FOCS, pages 116-123, 1994.

[68] T. P. Singh. Gravitational collapse, black holes, and naked singularities. In Proceedings of Discussion Workshop on Black Holes, Bangalore, India, Dec 1997. gr-qc/9805066.

[69] M. Sipser. The history and status of the P versus NP question. In Proc. ACM STOC, pages 603-618, 1992.

[70] L. Smolin. The present moment in quantum cosmology: challenges to arguments for the elimination of time. In R. Durie, editor, Time and the Instant. Clinamen Press, 2000. gr-qc/0104097.

[71] L. J. Stockmeyer and A. R. Meyer. Cosmological lower bound on the circuit complexity of a small problem in logic. J. ACM, 49(6):753-784, 2002.

[72] A. Valentini. On the Pilot-Wave Theory of Classical, Quantum, and Subquantum Physics. PhD thesis, International School for Advanced Studies, 1992.

[73] A. Valentini. Subquantum information and computation. Pramana J. Physics, 59(2):269-277, 2002. quant-ph/0203049.

[74] W. van Dam, M. Mosca, and U. Vazirani. How powerful is adiabatic quantum computation? In Proc. IEEE FOCS, pages 279-287, 2001. quant-ph/0206003.

[75] A. Vergis, K. Steiglitz, and B. Dickinson. The complexity of analog computation. Mathematics and Computers in Simulation, 28:91-113, 1986.

[76] D. L. Vertigan and D. J. A. Welsh. The computational complexity of the Tutte plane: the bipartite case. Combinatorics, Probability, and Computing, 1(2), 1992.

[77] S. Weinberg. Precision tests of quantum mechanics. Phys. Rev. Lett., 62:485-488, 1989.
