Interesting problems: The Church-Turing-Deutsch Principle

Our field is still in its embryonic stage. It’s great that we haven’t been around for 2000 years. We are still at a stage where very, very important results occur in front of our eyes.

– Michael Rabin, on the field of computer science

In natural science, Nature has given us a world and we’re just to discover its laws. In computers, we can stuff laws into it and create a world.

– Alan Kay

I am large, I contain multitudes.

– Walt Whitman (“Song of Myself”)

Note: This is the first part of a rather lengthy two-part post. It’s written in a style that is, I hope, accessible to an informed lay audience until near the end, when some undergraduate physics would help. The piece describes in a fair amount of detail a scientific problem that I think is important, and why I think it’s important. It’s very much an early draft, and feedback is welcome.

Introduction

This post describes what I think is one of the most interesting open problems in science.

Unlike quantum gravity or high-temperature superconductivity, it’s not a problem that has attracted a lot of public attention. Indeed, although the problem has been around for nearly twenty years, it hasn’t even attracted much attention from scientists. Yet I believe that the solution of the problem would be of tremendous long-term significance for both physics and computer science.

To explain the problem I need to explain an idea that I’ll refer to as the Church-Turing-Deutsch (CTD) Principle, after the three people (Alonzo Church, Alan Turing, and David Deutsch) who contributed most to the formulation of the principle.

The CTD Principle is a descendant of a famous idea known as the Church-Turing Thesis, taught to all computer scientists early in their degrees. I’ll talk later about the relationship between the Thesis and the Principle.

Just stating the CTD Principle makes it look deceptively obvious: Every physical process can be simulated by a universal computing device.

Most educated adults in the West today believe this Principle, whether they realize it or not.

We do not blink an eye when told that a computer is being used to simulate a new aircraft, the explosion of a bomb, or even the creation of the Universe. The fact is, most people take it for granted that a standard desktop computer, given enough time and memory, can simulate pretty much any physical process.

Yet our ease with the CTD Principle is an ease brought by familiarity. One hundred years ago the statement would have been far more surprising, and, I suggest, even shocking to many people.

Viewed from the right angle, the CTD Principle still is shocking. All we have to do is look at it anew. How odd that there is a single physical system – albeit an idealized one, with unbounded memory – that can be used to simulate any other system in the Universe!

The problem I discuss in this essay is the problem of proving the CTD Principle.

To get there, though, I first need to take a bit of a detour. I will discuss the history of the Principle, and flesh out in more detail what the Principle actually means. Then I’ll talk about what it would mean to prove it, why that would be important, and how one might approach the problem.

History

The historical roots of the CTD Principle lie in the Church-Turing thesis, proposed by Church and Turing in the 1930s. Essentially, the Church-Turing thesis asserted that the correct way to describe a computing machine was via what we now call the Turing machine, which is pretty much equivalent to a modern computer, but with an unlimited memory.
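To make the idea concrete, here is a minimal sketch of a Turing machine in Python. This is my own toy illustration, not anything from Church or Turing’s papers: a finite table of transition rules drives a head that reads and writes symbols on a tape that can grow without bound.

```python
# A minimal Turing machine sketch. The rule format and the toy machine
# below (a unary incrementer) are illustrative choices, not a standard API.

from collections import defaultdict

def run_turing_machine(rules, tape, state="start", head=0, max_steps=10_000):
    """Run until the machine enters the 'halt' state.

    rules maps (state, symbol) -> (new_state, new_symbol, move), where
    move is -1 (left) or +1 (right). The tape is unbounded: any cell
    not yet visited reads as the blank symbol '_'.
    """
    cells = defaultdict(lambda: "_", enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        state, cells[head], move = rules[(state, cells[head])]
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Toy machine: scan right over a block of 1s, then append one more 1.
rules = {
    ("start", "1"): ("start", "1", +1),  # keep moving right over the block
    ("start", "_"): ("halt", "1", +1),   # write a 1 at the first blank, halt
}

print(run_turing_machine(rules, "111"))  # -> "1111"
```

The universality claim is that there exists one fixed rule table which, given a description of any other machine written on its tape, simulates that machine – this is the universal Turing machine.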

(Incidentally, although Church and Turing’s papers were sole-author affairs, it’s very unclear to me what the influence of each was on the other. Church was Turing’s supervisor, and Turing thanks and cites Church in his paper. Turing’s paper was certainly a great deal more convincing than Church’s, in large part because Turing was so much more concrete; Church’s paper was written almost entirely from a very abstract point of view.)

Remarkably, Church and Turing did their work before the advent of modern computers. Indeed, their work – done with pencil and paper – played an important role in the construction of the first universal computers, and Turing himself had a hand in the practical building and design of some of the earliest machines.

What motivated Church and Turing to do such work?

You might guess that the motivation was some practical problem, perhaps a military problem, like scheduling rail freight or calculating trajectories.

In fact the motivation was a remarkable problem posed by the great German mathematician David Hilbert. Hilbert asked whether or not there is an automatic procedure – an algorithm – that can be used, in principle, to solve every possible mathematical problem.

Hilbert’s hope was to write down a simple cookbook recipe that would enable one, at least in principle, to determine whether a given proposition is true or false.

The procedure would tell you, for example, that “1+1=2” is true, and that “7 x 12 = 83” is false.
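For propositions with no variables such a procedure is easy: just evaluate both sides. A toy sketch (the function name and the trusting use of eval are illustrative choices, not a serious parser); the difficulty lies entirely with statements that quantify over infinitely many cases.

```python
# Toy decision procedure for variable-free arithmetic identities.
# Evaluation settles each one in finitely many steps.
def decide(proposition: str) -> bool:
    left, right = proposition.split("=")
    return eval(left) == eval(right)  # toy evaluator; trusts its input

print(decide("1+1 = 2"))    # True
print(decide("7*12 = 83"))  # False (7 x 12 is 84)
```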

But, with patience, Hilbert’s hypothetical procedure would also tell you more complicated things. It would tell you, for example, that Fermat’s last theorem is true, that is, that there are no positive integers a, b, c and n, with n > 2, such that a^n + b^n = c^n.

It would also tell you whether or not famous unproven conjectures, like the Riemann Hypothesis, are true or false.

In asking this question Hilbert wasn’t motivated by a practical concern – such a procedure might take a very long time to establish the truth or falsity of any given proposition. Rather, Hilbert was interested in the in-principle question of whether such a procedure even exists.

Alas, to Hilbert’s surprise it turned out that no such procedure exists, and the people who proved it were Church and Turing.

They proved this in two remarkable steps. First, they had to mathematically define what Hilbert meant by a “procedure”. Remember, in Hilbert’s day the notion of an algorithm or procedure was rather vague and amorphous. It certainly wasn’t a well-defined mathematical concept that one could prove precise mathematical theorems about.

So Church and Turing mathematically defined a class of functions – called, unsurprisingly, the “computable functions” – that they claimed corresponded to the class of functions that a human mathematician would reasonably be able to compute.

With this class of functions mathematically well defined, they could proceed to the second step, which was to mathematically prove that no function in their class of computable functions could ever be used to solve Hilbert’s problem.
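The nub of the impossibility proof can be conveyed in a few lines of modern code. This is a paraphrase of the standard diagonal argument, not Church’s or Turing’s original notation, and it assumes for contradiction a hypothetical decider called halts:

```python
# Suppose, for contradiction, that some computable function `halts`
# decides the halting problem:
#   halts(prog, arg) == True  exactly when  prog(arg) eventually halts.

def paradox(prog):
    """Halt exactly when prog, run on its own source, does not halt."""
    if halts(prog, prog):
        while True:   # prog(prog) halts, so loop forever
            pass
    else:
        return        # prog(prog) runs forever, so halt at once

# Now ask whether paradox(paradox) halts:
#   - if halts(paradox, paradox) is True, paradox(paradox) loops forever;
#   - if it is False, paradox(paradox) halts immediately.
# Either way `halts` answers wrongly, so no such computable function can
# exist. And Hilbert's hoped-for procedure would, in particular, decide
# propositions of the form "this program halts", so it cannot exist either.
```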

Church and Turing’s first step, mathematically defining the class of computable functions, put computer science on a rigorous mathematical foundation. In making such a mathematical definition, Church and Turing were essentially putting forward a thesis – now known as the Church-Turing thesis – that their mathematical class of “computable functions” corresponds exactly to the class of functions that we would naturally regard as computable.

The assertion of the Church-Turing thesis might be compared, for example, to Galileo and Newton’s achievement in putting physics on a mathematical basis. By mathematically defining the computable functions they enabled people to reason precisely about those functions in a mathematical manner, opening up a whole new world of investigation.

In mathematics, the assertion of the Church-Turing thesis might be compared to any truly fundamental definition, like that of the integers, or the class of continuous functions. (It is notable that the definition of neither of these was arrived at by a single person, but rather in each case was the end result of a process taking hundreds of years.)

A major drawback of the Church-Turing thesis is that Church and Turing’s justifications of it were rather ad hoc. Church offered little support for the thesis in his paper, although he certainly did define what we now call the computable functions. Turing made a rather more convincing case, talking at some length about the sorts of algorithmic processes that he could imagine a mathematician performing with pencil and paper, and arguing that each such process could be simulated on a Turing machine.

Turing’s arguments were a creditable foundation for the Church-Turing thesis, but they do not rule out the possibility of finding somewhere in Nature a process that cannot be simulated on a Turing machine. If such a process were to be found, it could be used as the basis for a computing device that could compute functions not normally regarded as computable. This would force a revision of the Church-Turing thesis, although perhaps not a scrapping of the thesis entirely – what is more likely is that we would revise the definition of the computable functions.

The Church-Turing-Deutsch Principle

Fifty years after Church and Turing’s pioneering papers, in 1985 David Deutsch wrote a paper proposing a way to put the Church-Turing thesis on a firmer footing.

Deutsch’s idea was based on an observation that seems self-evident: computation is inherently a physical process, in the sense that any computation must be carried out by an actual physical computing device, and so must obey the laws of physics. Of course, just because an observation is self-evident doesn’t necessarily mean that we appreciate all of its implications. Deutsch realized that this observation leads to a very interesting possibility, that of deducing the Church-Turing thesis from the laws of physics.

This line of thought led Deutsch to propose a revision of the Church-Turing thesis, which we are calling the Church-Turing-Deutsch Principle. Just to restate it in the terms used above, the CTD Principle says that every physical process can be simulated by a universal computing device.

The way we’ve stated it, we haven’t said what that universal computing device is. Is it a Turing machine? Is it something else? Deutsch goes further, and talks about what would be a suitable universal computing device. We’ll come back to this below, but for now we’ll simply ignore the issue of exactly what device we need to use, and just look at some of the issues raised by the CTD Principle.

We’ve stated the CTD Principle in a form which is independent of any particular physical theory. This means that given any physical theory we can ask whether or not the CTD Principle is true in that particular theory. That is, within the scope of that theory, is it possible to construct a universal computing device that can simulate an arbitrary physical system, also within that theory?

For Newtonian physics, the answer seems to be “yes”. In 1982 Fredkin and Toffoli put forward a “billiard ball” model of computation, in which computations can be performed using colliding billiard balls – that is, using just the principles of Newtonian physics. Fredkin and Toffoli showed that such a model could simulate conventional computers like those Turing had proposed.
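To give a flavour of how collisions can compute, here is a truth-table sketch of Fredkin and Toffoli’s “interaction gate”, written as Python for concreteness. This is a logical summary as I understand the gate, not a dynamical simulation: a ball travelling on a path encodes the bit 1, an empty path encodes 0, and two balls arriving together collide and deflect onto different output paths than a lone ball would take.

```python
# Truth-table sketch of the billiard-ball "interaction gate".
# A ball on a path encodes the bit 1; an empty path encodes 0.

def interaction_gate(a: bool, b: bool):
    """The four output paths of the interaction gate, given inputs a, b."""
    return (
        int(a and b),      # inner path, occupied only if the balls collide
        int(a and b),      # the second inner path likewise carries A AND B
        int(a and not b),  # A sails straight through only when B is absent
        int(not a and b),  # B sails straight through only when A is absent
    )

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", interaction_gate(a, b))

# Together with mirrors to reroute paths and a supply of constant balls,
# such collisions yield AND and NOT, and hence universal computation,
# using nothing but Newtonian mechanics.
```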

In turn, it is widely believed that Turing machines are sufficient to simulate all of Newtonian physics. Some detailed investigations of this question have been done by Pour-El and Richards; with some caveats, my understanding is that their work shows that Turing machines can simulate all of Newtonian physics. (I should say that my understanding of their work is rather superficial, and I have not worked through all the relevant results in detail.)

When we put all this reasoning together, we conclude that Newtonian physics satisfies the CTD Principle, with Fredkin and Toffoli’s model as the universal computing device.

(As a technical note, Fredkin and Toffoli’s model as they described it is really more a circuit-based model of computation, rather than a proposal for a universal computing device. It seems relatively straightforward to construct a universal device according to the same general principles, although I am not aware of any published work describing such a construction.)

Of course, Newtonian physics is wrong. We live in a world that is governed by rules that combine quantum mechanics and general relativity in some way we don’t yet fully understand.

It seems to me to be an extremely interesting question to determine whether or not the correct ultimate physical theory obeys the CTD Principle. This challenge is made rather more difficult, of course, by the fact that we don’t yet have such an ultimate physical theory, although several candidates have been proposed.

Of course, we do have some good physical theories right now that will probably be incorporated into any hypothetical ultimate physical theory. It is a natural question whether or not such theories obey the CTD Principle.

For example, one might ask whether or not it is possible to develop a model computer within quantum electrodynamics that is capable of simulating any quantum electrodynamical process.

One might ask a similar question with respect to the standard model of particle physics, which incorporates electromagnetism as well as the strong and weak nuclear forces.

(Note, incidentally, that the question doesn’t make sense for quantum mechanics, since quantum mechanics isn’t a complete physical theory in its own right, but rather a framework for the construction of physical theories.)

We can even ask the question of whether the CTD Principle is satisfied in other more exotic theories, such as string theory or spin networks, which some people believe may serve as a basis for an ultimate physical theory.

I’ll talk later about what challenges would be involved in addressing such questions.

First, however, I want to digress to talk about the status of the CTD Principle as a physical principle, and what some consequences might be.

The Church-Turing-Deutsch Principle as a guide to the construction of new physical theories

How likely is it that the CTD Principle is correct?

For a physicist it is very natural to regard the CTD Principle as something that ought to be true in any reasonable physical theory. After all, the CTD Principle amounts to the belief that a limited physical system, like the human brain, ought to be capable of simulating the behaviour of an arbitrary physical system, at least in principle.

This is a very appealing proposition to most scientists, and certainly to most physicists, in part because it agrees with our prejudices – we would like the world to be comprehensible – and in part because all our experience in modeling real-world phenomena suggests that it is likely correct.

Let’s grant, then, that we expect the CTD Principle to be correct in any reasonable physical theory. This expectation may or may not be right, but let’s go with it.

This suggests tipping the CTD Principle on its head and actually using it as a guideline for the construction of new physical theories.

Let me give an example in another domain where this strategy has been used: the Principle of Conservation of Energy. This Principle was originally formulated in the context of Newtonian physics, yet it survived intact through special and general relativity and quantum mechanics, and is likely to survive in future theories. Although the exact mathematical expression of the Principle has changed from theory to theory, it remains a powerful general principle governing the construction of new theories. Indeed, if a theory is proposed that does not obey some sort of conservation of energy law, physicists are unlikely to take it terribly seriously unless truly compelling reasons are given.

This is not an isolated example. Einstein credited a 1907 insight, the so-called “Principle of Equivalence”, as being crucial to his discovery in 1915 of the general theory of relativity. Other principles, such as the principles of relativity, gauge invariance and renormalizability, originally formulated in other contexts, have been extremely powerful guiding principles in the development of the standard model of particle physics, and continue to be powerful guiding principles in the construction of new physical theories.

More recently, many physicists believe that the holographic principle – the idea that the best way to describe a region of spacetime is through a theory defined on the region’s boundary, not its interior – is a guiding principle that will help formulate an as yet unknown theory of quantum gravity.

Taking such principles as a basis for the construction of new physical theories is not, of course, entirely scientific. Maybe the Principle of Conservation of Energy is purely a red herring, and won’t show up at all in the ultimate physical theory.

But constructing new physical theories is very hard work, and it’s especially hard in the absence of guiding principles. So physicists are very keen to identify deep principles that they believe are likely to be correct, and which have real physical consequences. It’s sort of a psychological ploy to make the construction of new physical theories possible, and it has worked extremely well in the past.

Guiding principles of this type are often rather vague, for if a principle is to be true in an as-yet-undiscovered physical theory, then of necessity it will not be possible to achieve full mathematical precision in the formulation of the principle. This is reflected in the formulation of the CTD Principle that I have given, with many of the key terms such as simulation and universal computing device not being precisely defined.

Of course, this is not to say that we do not expect instantiations of such principles in specific physical theories to be precise. The Principle of Conservation of Energy has a perfectly precise technical meaning within quantum mechanics. It has another precise technical meaning within general relativity. These technical meanings are quite different, yet both are recognizably instantiations of a more general, albeit rather more vague, Principle of Conservation of Energy. Later in this essay I will speculate on how the CTD Principle can be made much more precise.
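To illustrate the point, here is how those two precise instantiations are usually written. This is a textbook sketch in LaTeX, not notation from the post:

```latex
% Quantum mechanics: for a closed system with a time-independent
% Hamiltonian H, the expected energy is a constant of the motion
% (by the Ehrenfest theorem applied to H itself):
\[
  \frac{d}{dt}\langle H \rangle
    = \frac{i}{\hbar}\langle [H, H] \rangle
    + \left\langle \frac{\partial H}{\partial t} \right\rangle
    = 0.
\]
% General relativity: the conservation law is local; the stress-energy
% tensor is covariantly conserved:
\[
  \nabla_\mu T^{\mu\nu} = 0.
\]
% The two statements look nothing alike, yet each is recognizably an
% instantiation of the same vague general Principle.
```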

What would it buy us to regard the correctness (or otherwise) of the CTD Principle in a given physical theory as a criterion for whether that physical theory is correct or incorrect?

I’m not really sure what the answer to this question is. It’s quite possible that the answer is “not a lot”. After all, so far as I’m aware no physical theory has ever been seriously proposed that is known not to satisfy the CTD Principle. On the other hand, we actually know precious little about which physical theories do (or don’t) satisfy the CTD Principle: so far as I know this problem is still open for the two leading physical theories of our time, the standard model of particle physics, and general relativity.

I should note, by the way, that closely related points of view have been put forward many times over the past few decades. Perhaps the most extreme are people like Ed Fredkin, who have proposed that the Universe is really a giant cellular automaton, and that the laws of physics should therefore be formulated as rules for such an automaton. This is a particularly extreme form, for it presupposes that we already know what the candidate universal computing device in the CTD Principle is. (Less extreme views have been put forward by other people. If I recall correctly, both Wheeler and Landauer have made arguments along these lines.)

6 comments

  1. Three quick comments:

    First, this is a great post, and I eagerly await part 2.

    Second, the paragraph beginning “More recently, many physicists” is duplicated.

    Third, it seems to me that the sense in which we understand “simulates” may prove to be very important. Presumably it’s something like “solves the equations of motion of”, rather than having a similar dynamical structure internally. But do we really mean, or need, exact solutions? Would it count if we could just find arbitrarily close approximations to the solutions? This would be (ordinary) convergence of the simulations to the true solutions. But since fundamental physics seems to be probabilistic, maybe we shouldn’t hope for more than being able to get arbitrarily close approximations with arbitrarily high probability, i.e., the simulations converge in probability to the solutions. Would this still be a useful version of the CTD? What if we couldn’t manage more than weak convergence?

  2. Thanks, Cosma. Part 2 is largely written, but needs a fairly thorough going over.

    I will fix the problem you mentioned, and one other duplication, immediately after this comment.

    The question you raise about simulation is an interesting one, which I’ll attempt to address in the second part of the post. It seems to be rather difficult to address convincingly because while everyone understands what an exact solution is, making precise the notion of an approximate solution is far from trivial.

  3. I like the idea of the CTD Principle as a guiding principle, and would go further to say that proving the CTD is impossible and that only “turning it on its head,” as you say, makes any sense.

    Consider the problem of “proving” that the universe won’t end tomorrow, versus accepting this as a useful guiding principle for our physical theories.

    Also, both “guiding principles” seem to have the interesting property that they cannot be proven false: e.g., if the universe ends, no one will get to publish a paper about it. And (I think) it’s impossible to verify in finite time that some observed physical behavior cannot be simulated.

    Along the lines of the previous post, one could also consider the existence of simulations with linear time and space overheads. The best constant-error quantum simulations I know of use time T^(1 + 1/sqrt(log T)), or essentially T^(1+delta) for any delta > 0. If one could prove a super-linear lower bound here, what would the implications be for our theories of quantum mechanics?

  4. Yes, this was well-done and interesting. However, it seems to me that the homunculus problem of modern physics pervades the entire discussion. And that is … does physics describe reality, or does it describe our formulations of reality?

    Here is a ‘physical process’ … the depletion of the ozone layer. Here is your problem: George Bush and people of faith. Until the ozone layer comes down and destroys their hometown, it won’t matter a tinker’s damn. To these people, all that physics formalisms amount to is the bogeyman.

    The essential problem of physics remains no matter how it is hidden. The essence of our material life is not immediately revealed to all, nor obvious to all.

    So, I say the notion of ‘physical processes’ is a contentious one. And the Church-Turing Thesis remains a tinkering with numbers, states, symbols and other ‘homunculus’ non-starters.

  5. This is really an interesting read. I would rather suggest that we start looking to construct the theory of quantum gravity, as they say, from the first principle of the CTD, rather than by marrying QM and GR. Moreover, I would also love to ask a question along the lines of Hilbert’s:
    whether there exists an algorithm that can be used, in principle, to solve every possible “physical” problem. I would also ask whether we can state the CTD in more rigorous terms:
    “Can every physical system in principle simulate every other physical system, given enough resources?”

  6. Hi,

    I came across this extremely clear and lucid explanation of the principle, its role, and the practical consequences of using the CTD Principle. That clears a lot of things up for me.

    I am very unclear as to the role that the principle of “self-similarity” and the idea of a simulation as rendering might serve in the construction of knowledge, and as to what their relationship to the CTD could be construed to be. Finally, another foundational issue for the CTD is its relationship to the foundations of mathematics and the theory of computation, as well as to other problems related to that subject.

    I don’t know and maybe you have some idea, but I think that Deutsch is working toward describing the mechanism for creating the “fabric” – first qubits, then communication networks, constructors, etc.

    How might one imagine the emergence of higher-order phenomena from the application of progressively fine-grained models of the CTD?

    Fundamentally, either life is “computing” – i.e., a type of “processing” which is ultimately a metaphor for what’s going on, a metaphor in the sense that the process of coming to know is a process described by tools emerging from, as well as constructing, the rules & sense of what comes about – or it’s not.

    The attraction of the idea of a computational universe, mind and being (that thought is a simulator, or rendering process) is fascinating, and, in my humble POV, self-evident to many of those studying the brain-mind-consciousness connections.

    So if you’re not at heart a dualist, and it’s all one, let’s say (big picture talking), then the whole thing is getting “generated”, “processed”, “constructed”, or what have you, by the rules it discovers, creates/formalizes, tests and applies, and refines, as it evolves. Otherwise you’ve got hidden factors everywhere, multiple rules and forces, etc., all within contrived contexts, vitam aeternam.

    How can listening to the music of, let’s say, Indian ragas evoke a sense of the infinite, so that when you look out at the open sky, SIMULTANEOUSLY “seeing” beyond the cloud dance in the sky the infinite night sky behind, all around you, so small, so insignificant, here, REALIZING where you are! just understanding for the first time really your identity amidst it all – how can I both KNOW, UNDERSTAND & FEEL what seems the truth though I realize I am not encompassing everything? How can it be possible to BE certain, understand just one thing, feeling, for example, I am?

    What principle of knowledge enables this? Does a cell “realize” at some point in time its role in the unfolding drama as it sees all the other cells busily going up into the skyscrapers to do their job? What do the suns know? What role does knowledge play? Are quarks and fields having fun, are qubits “conscious”?

    I know this sounds crazy, but I’m re-reading Deutsch and really starting to understand plausible implications of a lot of this stuff. As a philosopher and computer science engineer, I have worked in AI for years. Previously I studied eastern religions as a young adult, and rediscovered “myself” years later. Now I am discovering Quantum Computation, as well as reading Complexity Theory stuff (Stuart Kauffman, Holland, Crutchfield).

    This is a stupid question, but it goes along with the remarks above: in some suitable global model, are we not all instantiations of the CTD … or have I missed something 😉
