# Interpretations II

Let me follow up on the previous post with a few comments on the interpretation of quantum mechanics.

First of all, I do agree that there *is* a problem still to be solved in the foundations of quantum mechanics.

Not everybody agrees on this. Quantum mechanics works extremely well in all situations, so far as we know, which leads to people adopting the shut-up-and-calculate interpretation of quantum mechanics. In 99 percent of my professional work, I adopt that interpretation myself, sometimes quite explicitly – probably the single most frequent complaint I’ve heard about my book with Ike Chuang is that we take too pragmatic an approach to the foundations. (I suspect part of the problem is that we’re rather brazenly pragmatic, stating upfront that we’re not going to talk about foundations at all.)

However, it being a Saturday, I’ll let my hair down and admit that yes, I think there is a problem in the foundations.

Part of the difficulty is deciding what exactly is the nature of that problem. Is it an interpretational problem? Is the problem in the physics?

My own belief is that the problem is in the physics, and that if that problem can be solved, then there won’t be any interpretational problem.

So what is the physical problem?

Quantum mechanics as presented in many textbooks usually has postulates telling you that (a) a closed quantum system evolves according to unitary dynamics, i.e., Schroedinger’s equation, and (b) a quantum system that is measured evolves according to the so-called “projection postulate”, or something similar. Part (a) is completely deterministic, while part (b) is the part where probabilities enter quantum mechanics.
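To make the two postulates concrete, here's a minimal numpy sketch (my own illustration, not from any particular textbook): postulate (a) is deterministic matrix-vector multiplication by a unitary, while postulate (b) is Born-rule sampling followed by collapse. The Hadamard gate and the computational-basis measurement are arbitrary choices.

```python
import numpy as np

# Postulate (a): a closed system evolves unitarily, |psi'> = U|psi>.
# Here U is the Hadamard gate acting on a qubit initially in |0>.
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = np.array([1.0, 0.0])            # |0>
psi = U @ psi                         # deterministic: (|0> + |1>)/sqrt(2)

# Unitary evolution preserves the norm exactly; no probabilities appear.
assert np.isclose(np.linalg.norm(psi), 1.0)

# Postulate (b): a projective measurement in the {|0>, |1>} basis gives
# outcome k with probability |<k|psi>|^2, and the state collapses to |k>.
probs = np.abs(psi) ** 2              # Born rule: [0.5, 0.5]
outcome = np.random.choice([0, 1], p=probs)
post_state = np.eye(2)[outcome]       # the projected ("collapsed") state
```

Note how randomness enters only in the last two lines: everything before the measurement is a deterministic linear map.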

Now, of course, a measuring device is itself a quantum system. Furthermore, the quantum system being measured and the measuring device are both parts of larger closed systems (e.g., the Universe). That larger system should therefore be describable by a unitary evolution, if we believe the postulates of quantum mechanics.

Naively, then, one might think that it ought to be possible to *derive* the projection postulate from the postulate that closed systems evolve unitarily. Certainly, such a derivation ought to be possible if quantum mechanics is to be put on a single unified dynamical foundation.

The physical problem, in my opinion, is that no one has ever succeeded in carrying out such a derivation.

There have, of course, been many attempts to put quantum mechanics on such a unified dynamical foundation. Perhaps the most fashionable in recent years has been the so-called “decoherence program”. Unfortunately, so far as I can determine, although the decoherence program has contributed substantially to our understanding of how classical physics arises from quantum, I still know of no convincing derivation of the projection postulate from unitary dynamics.

What are the prospects for carrying out such a derivation in the near future?

Not good, in my opinion, without the injection of some major new ideas, and quite possibly some experimental input. (Indeed, the possibility of experimental input into this issue is one reason for finding mesoscopic physics and quantum computing interesting.) This problem has simply been kicking around for too long to be solved without some significant new ideas.

My own favourite crazy idea for resolving the problem is that, in fact, the projection postulate will not be derived from unitary dynamics. Instead, unitary dynamics will be derived from the projection postulate. One of the insights to come out of quantum computing in the past few years is that any unitary dynamics can be simulated by measurements alone. Perhaps, then, measurement is the underlying basis for all physical dynamics.
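As an illustration of the primitive underlying these results, here's a small numpy simulation of standard quantum teleportation, the building block the measurement-only schemes generalize: a Bell measurement plus an outcome-dependent Pauli correction moves a state from one qubit to another, with no coherent gate ever applied to the data. (The input state psi = [0.6, 0.8j] is an arbitrary choice; this is a sketch of teleportation itself, not of the full measurement-only computing schemes linked below.)

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

# Arbitrary (normalized) input state to be teleported, living on qubit 1.
psi = np.array([0.6, 0.8j])

# Bell pair (|00> + |11>)/sqrt(2) shared between qubits 2 and 3.
bell = np.array([1., 0., 0., 1.]) / np.sqrt(2)
state = np.kron(psi, bell)            # 3-qubit state, qubit order (1, 2, 3)

# The four Bell states on qubits 1, 2; outcome (a, b) heralds correction Z^a X^b.
bell_basis = {
    (0, 0): np.array([1., 0., 0., 1.]) / np.sqrt(2),
    (0, 1): np.array([0., 1., 1., 0.]) / np.sqrt(2),
    (1, 0): np.array([1., 0., 0., -1.]) / np.sqrt(2),
    (1, 1): np.array([0., 1., -1., 0.]) / np.sqrt(2),
}

results = {}
for (a, b), B in bell_basis.items():
    # <B_ab| on qubits 1, 2 (identity on 3) leaves qubit 3's unnormalized state.
    q3 = np.kron(B.conj(), I2) @ state
    prob = np.linalg.norm(q3) ** 2                    # each outcome: probability 1/4
    correction = np.linalg.matrix_power(Z, a) @ np.linalg.matrix_power(X, b)
    out = correction @ (q3 / np.linalg.norm(q3))
    fidelity = abs(np.vdot(psi, out))                 # 1 => psi recovered exactly
    results[(a, b)] = (prob, fidelity)
```

Every branch recovers psi on qubit 3 with fidelity 1, each with probability 1/4: the "dynamics" acting on the data consists of a measurement and a classically controlled correction.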

In some sense, decoherence really does explain the projection postulate without the projection. Once decoherence has diagonalized the density matrix, each ‘branch’ of the wavefunction doesn’t know anything about the other branches. Thus, you can treat this as projection, essentially. (Not that I’m telling you anything you don’t already know, I’d guess.)
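A toy numpy example (my own construction) of exactly this effect: a CNOT lets an 'environment' qubit record which branch the system is in, and the system's reduced density matrix loses its off-diagonal terms.

```python
import numpy as np

# System qubit in an equal superposition; the off-diagonal terms of its
# density matrix carry the coherence between the two branches.
psi_sys = np.array([1.0, 1.0]) / np.sqrt(2)
rho_before = np.outer(psi_sys, psi_sys.conj())     # off-diagonals = 0.5

# An 'environment' qubit starts in |0>; a CNOT records which branch we're in.
env0 = np.array([1.0, 0.0])
joint = np.kron(psi_sys, env0)                     # qubit order: (system, env)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
joint = CNOT @ joint                               # (|00> + |11>)/sqrt(2)

# Reduced density matrix of the system: trace out the environment.
rho_joint = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2)
rho_after = np.trace(rho_joint, axis1=1, axis2=3)  # sum over environment indices
# rho_after is diagonal: from the system's viewpoint, the coherences are gone.
```

The global state is still pure and still evolving unitarily; only the system's local description has become an incoherent mixture.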

What decoherence doesn’t explain, at least to my mind, is why we perceive only one ‘branch’ of the wavefunction. This is a philosophical morass, and I sort of hope that the wavefunction really does collapse, so we don’t have to delve into that swamp to really understand QM. What I think is really cool is that they’re doing quantum erasure experiments with larger and larger systems, so it’s not out of the realm of possibility that we could actually resolve the question of the ‘interpretation’ of quantum mechanics experimentally.

In the meantime, you can stick me in with the “shut up and calculate” crowd.

Hi Aaron,

I certainly agree that decoherence doesn’t explain why we see only one branch of the wavefunction. As you say, it’ll be interesting to watch mesoscopics experiments like the quantum eraser as we get better at probing the boundary between the quantum and classical realms.

On the other hand, I think I’d modify your comment that “decoherence really does explain the projection postulate without the projection”, replacing it with “decoherence really does show that unitary dynamics and the projection postulate are consistent with one another.”

One reason it’s not a derivation, in my opinion, is that the whole program of decoherence seems to rely a lot on calculating reduced density matrices (e.g., for the system being measured), and then interpreting the behaviour of those reduced density matrices.

This is putting the cart before the horse, in my opinion. Something that doesn’t seem to be sufficiently stressed in most textbooks is that the motivation for defining the reduced density matrix is that it gives the correct statistics (according to the projection postulate) for measurements on part of the system. So if you don’t accept the projection postulate before you start, then the reduced density matrix is just some weird mathematical object with no physical interpretation.
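To spell out what “gives the correct statistics” means, here's a quick numpy consistency check (illustrative only, on a randomly chosen two-qubit pure state): the outcome probability computed from the projection postulate on the whole system equals the one computed from the reduced density matrix alone.

```python
import numpy as np

rng = np.random.default_rng(1)

# A random pure state of two qubits: A is the part we measure, B is the rest.
v = rng.normal(size=4) + 1j * rng.normal(size=4)
v = v / np.linalg.norm(v)
rho_AB = np.outer(v, v.conj())

# Reduced density matrix of A: trace out B.
rho_A = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# Statistics for measuring A in the computational basis, computed two ways:
P0 = np.diag([1.0, 0.0])                                  # projector |0><0| on A
p_full = np.trace(rho_AB @ np.kron(P0, np.eye(2))).real   # projection postulate on AB
p_reduced = np.trace(rho_A @ P0).real                     # from rho_A alone
# The two probabilities agree: that is precisely what rho_A is defined to do.
```

Note the direction of the logic: the agreement is built in by the definition of the partial trace, which is why using it to *derive* the projection postulate is circular.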

This is why I think that trying to derive the projection postulate from any argument involving reduced density matrices is a waste of time, unless you first come up with some alternate reason for considering the reduced density matrix a good representation of the physics of part of the system.

I should, in fairness, state that in last year’s PRL Zurek attempted to respond to this criticism of decoherence with an alternate derivation of the reduced density matrix. I’m not sure I buy it yet; I really ought to look much more closely at his argument.

(1) I would like to see added to the decoherence program not just the decay of coherences (coherence, after all, is in the eye of the beholder, which we call our choice of basis!) but also the magnification of probabilities into certainties: i.e., there are certainly mechanisms for turning a mixed state into a deterministic classical state, but is there a generic method for doing this with the correct probabilities from a generic decohered mixed state? I want dynamics which generically takes p|0><0| + (1-p)|1><1| to |0><0| p percent of the time and to |1><1| (1-p) percent of the time. How can this be? Certainly not from any single superoperator. But maybe if the system which does the decohering to this mixed state is also involved in this process, then we have a situation which is not a superoperator (we do not start in a separable state), so perhaps there is some method?

(2) Certainly we can already think of the universe as evolving only via adaptive measurements, thanks to recent quantum computing results. It would be fun to formulate these in an infinitesimal version to see how continuous measurement can arise. But now we have some craziness: the measurements are adaptive. Is the information used for the adaptive measurements locally transmitted? What is neat about this, however, is that it might give rise to a well-defined arrow of time: discarding the information used in the adaptation gives a direction of entropy increase (?)

I’ll go check out the PRL. But can we not observe the behavior of reduced density matrices? Is that not what we’re doing when we observe any system correlated with something from without?

We can, after all, see the decoherence effect in action.

OK. I just checked out the PRL. Am I correct to understand that when you say decoherence does not explain the projection postulate, you mean that it does not explain the origin of probabilities? I can agree with that. However, the observation of an effective projection onto an eigenstate of the measurement operator can be explained by decoherence (interpreted as the diagonalization of the reduced density matrix). That’s uncontroversial and observed, right?

Do you think you could explain further or point to a cite regarding “One of the insights to come out of quantum computing in the past few years is that any unitary dynamics can be simulated by measurements alone”, BTW? Thanx.

Aaron: “However, the observation of an effective projection onto an eigenstate of the measurement operator can be explained by decoherence (interpreted as the diagonalization of the reduced density matrix). That’s uncontroversial and observed, right?”

The argument I’m making is that (in standard accounts) the reduced density matrix doesn’t have any physical meaning, unless one assumes as background all the usual apparatus associated with measurements in quantum mechanics. (So far as I’m aware, no one contests this statement.)

With this in mind, it’s impossible to derive the projection postulate by looking at reduced density matrices of the system being measured, since we don’t have any physical reason for connecting the reduced density matrix of that system to the actual state of the system.

So “derivations” of projections onto eigenstates are really just nice consistency checks of the theory; they are not truly derivations at all.

I hope that’s clear.

Zurek’s recent paper tries to provide an alternative argument for why you’d use the reduced density matrix to describe part of a quantum system, not relying on the apparatus of quantum measurement. I don’t think there’s any real consensus yet on whether he succeeded. If he has, then that’d reduce my argument to rubble.

Aaron: “but can we not observe the behavior of reduced density matrices? Is that not what we’re doing when we observe any system correlated with something from without?”

I’m certainly not trying to argue that the reduced density matrix isn’t the correct way of describing these sorts of systems. It is, it’s been observed, etc. Heck, I guess I’ve even been involved in experiments where measuring the reduced density matrix was important.

I’m just talking about the kind of argument some people make when they try to fill in the implication in:

Unitary evolution + some kind of interpretation => projection postulate with probabilities and all that other good stuff.

I’m probably belabouring the point, but what I’m trying to point out is that the reduced density matrix is motivated, in the standard account, by the fact that it’s the unique object giving rise to the correct measurement statistics for part of the system. That is, it’s motivated by presupposing the projection postulate. So any attempt to derive the projection postulate that makes use of the reduced density matrix is not going to get very far.

I’m not sure I agree with what you’re saying, if I understand you correctly. There is a real physical effect here, i.e., entanglement. We can observe the entanglement of a microscopic system with a mesoscopic system (IIRC; I can’t remember the experimental ref anymore, but it could be in Omnes) and then disentangle it later. We can see the interference patterns disappear and reappear when we do this. The physicality of the reduced density matrix is a separate issue from the existence of this physical effect.

Forgot to put references into the earlier post, on unitary evolution / quantum computation by measurement alone. There are two approaches to doing this.

The first was proposed by Raussendorf and Briegel:

http://www.arxiv.org/abs/quant-ph/0010033

Readers of this blog know that I think very highly of this paper. Indeed, much of my current research is related to it, although I’m not thinking about trying to derive quantum measurement from it!

The second approach is in a note I published:

http://www.arxiv.org/abs/quant-ph/0108020

This is a very simple paper, basically a corollary to earlier papers by myself and Ike Chuang, and Ike and Dan Gottesman. In it I point out that the standard teleportation protocol can easily be modified to give a scheme for quantum computing by measurement alone.

I’m not sure where (if at all) we disagree, Aaron. Certainly, I agree with everything in your last comment!

All I’m saying is that the massively entangled state might as well be treated as being projected onto the eigenstate of the measuring apparatus.

Aaron: “All I’m saying is that the massively entangled state might as well be treated as being projected onto the eigenstate of the measuring apparatus.”

Or not the eigenstate, right?

I mean, what if the answer is simply this: there’s perfectly fine unitary evolution, the state is now entangled with the measurement apparatus too, and “projection” is simply calculating the reduced density matrix, where we sum over all of the states of the measuring apparatus?

Can’t we at least derive reduced density matrices from our unitary evolution of a bigger system? Aren’t we done, then?