Thanks to all those people who commented on the significance of the Planck length. Putting it all together, one plausible argument for the significance of the Planck length seems to be something like this.

First, suppose we have a particle of mass m whose wavefunction is localized within a length l in each co-ordinate. For each co-ordinate the standard deviation in position satisfies:

Δx ≤ l

The uncertainty principle tells us that the uncertainty in each momentum co-ordinate satisfies:

Δp ≥ ħ/(2Δx) ≥ ħ/(2l)

It follows that:

⟨p²⟩ ≥ 3ħ²/(4l²)

where now ⟨p²⟩ is the average of the square of the total momentum, i.e., summed over all three co-ordinates, not a single one. Using the usual formula E² = p²c² + m²c⁴ connecting energy and momentum for a free particle in special relativity, we obtain ⟨E²⟩ ≥ 3ħ²c²/(4l²) + m²c⁴. If we assume that ⟨E⟩² and ⟨E²⟩ can be identified then we obtain:

⟨E⟩ ≥ √(3ħ²c²/(4l²) + m²c⁴)

In short, strong localization in position forces a large momentum, which creates a large energy density.

It’s plausible that if we make the energy density large enough, we’ll create a black hole. Equating ⟨E⟩/c² to some notional black hole mass M, and asking at what localization length l the corresponding Schwarzschild radius 2GM/c² becomes comparable to l itself, we get the Planck length, up to a small constant. I haven’t put the details in because this stage of the argument is even more hokey than the rest: energy density depends on what frame you’re in. However, it is at least plausible.
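As a sanity check on the orders of magnitude, here is a short Python sketch (my addition, not part of the original argument) that drops the numerical constants, takes E ~ ħc/l for a strongly localized massless particle, sets the localization length equal to the Schwarzschild radius of the implied mass, and solves for l:

```python
import math

# Approximate SI values of the physical constants
hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m / s
G = 6.67430e-11         # m^3 kg^-1 s^-2

# Localizing within l forces E ~ hbar c / l; treat that energy as a
# mass M = E / c^2 and ask when the Schwarzschild radius
# r_s = 2 G M / c^2 equals l:
#   l = 2 G hbar / (l c^3)   =>   l = sqrt(2 G hbar / c^3)
l_critical = math.sqrt(2 * G * hbar / c**3)

# Compare with the conventional Planck length sqrt(G hbar / c^3):
l_planck = math.sqrt(G * hbar / c**3)

print(f"critical localization length ~ {l_critical:.3e} m")
print(f"Planck length                ~ {l_planck:.3e} m")
print(f"ratio                        = {l_critical / l_planck:.3f}")
```

The two lengths agree up to the expected small constant (here a factor of √2, which depends on exactly which order-one factors one keeps).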

If you take the “no hair” theorem of general relativity seriously, then you’d believe that such a black hole should have no internal structure. But the detailed wavefunction would seem to be such an internal structure, and so we have a problem, which seems likely to signify a breakdown in one or both of general relativity or quantum mechanics. How that breakdown would manifest itself, I don’t know.

There’s lots wrong with this argument: the sign ambiguity for E, the use of the free particle formula for energy, and so on. No doubt many of these difficulties disappear in some more sophisticated approaches to quantum gravity; I’ve just been using undergrad quantum and GR.

Nonetheless, this argument does seem to suggest that particles having wavefunctions with structure on the Planck scale would indeed be very interesting objects, and that it is at least somewhat likely that general relativity, quantum mechanics, or both, would break down at that level.


The argument you gave can be covariantized: replace energy density by the energy-momentum tensor, length by a spacetime interval, and so on.

There is no need to think of the source as a single-particle wavefunction; rather, it is about whatever constitutes the energy-momentum tensor. When this quantity fluctuates strongly, so does the metric in response, and then any geometrical notion is fuzzy in the usual QM sense. This is what people mean, I think, by the breakdown of classical spacetime. Note that it is the variance of the energy-momentum tensor, not its mean, that is important for this.

Finally, the point about the no-hair theorem is really interesting. One related point is that black holes obey quantum statistical mechanical laws, and carry (huge) entropies. Those presumably come from the “internal structure” of whatever created the black holes.

Contrasting this with the no-hair theorem was always, for me, significant evidence for the incompleteness of GR. It is easier to blame this on GR than on QM, since the latter is really hard to modify in any sensible way, whereas GR is almost by default a low-energy approximation, to be modified in regimes not yet experimentally tested.

As regards the modification of QM, what about the many “spontaneous collapse” models that have been proposed? I realize that in many ways these are not terribly interesting — they lack experimental support or motivation, so far as I’m aware — but they do at least provide toy examples of theories that are just slight modifications of quantum mechanics.

I’m also curious as to how one makes a covariant analogue of the uncertainty-principle based argument that I made earlier. Or did you have something else in mind?

Hi,

I am sure you know more than I do about attempts to modify QM; I am only somewhat familiar with Weinberg’s non-linear QM. My impression is that any small modification of QM is either indistinguishable from QM or internally inconsistent; I would be happy to hear your assessment.

As for the uncertainty relations, given the energy-time relation, one can at least write things covariantly. Canonical quantization usually involves choosing a time, but that choice should not matter: Lorentz invariance is there, just not manifest in the formalism. In fact, I think there is a formalism to canonically quantize in a manifestly covariant way, which is useful for some purposes. There is also Feynman’s “spacetime” approach, which is more manifestly covariant.

The point about the Planck scale is covariant, though. One usually works in the approximation where matter is quantized and the metric is not. But of course the metric responds to the matter, so this is inherently inconsistent. The Planck scale is where this inconsistency becomes of order one. This is cleanest (and most manifestly covariant) in the path integral, where things become strongly quantum mechanical when the actions involved are of order ħ. If you look at the Einstein action, you see that this happens when the curvature is of order the Planck scale, and this is a covariant statement.
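To make the order-of-magnitude statement concrete, here is a rough sketch (my numbers and conventions, not the commenter’s): for a spacetime region of size L with curvature R ~ 1/L², the Einstein-Hilbert action S ~ (c⁴/16πG) R √−g d⁴x evaluates to roughly c³L²/(16πG), which is of order ħ precisely when L is of order the Planck length.

```python
import math

# Approximate SI values of the physical constants
hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m / s
G = 6.67430e-11         # m^3 kg^-1 s^-2

l_planck = math.sqrt(G * hbar / c**3)

def action_in_hbar_units(L):
    """Order-of-magnitude Einstein-Hilbert action for a region of
    size L with curvature R ~ 1/L^2, integrated over a 4-volume
    ~ L^3 * (L/c):  S ~ (c^4 / 16 pi G) * (1/L^2) * L^3 * (L/c)."""
    S = (c**4 / (16 * math.pi * G)) * (1 / L**2) * L**3 * (L / c)
    return S / hbar

# S / hbar scales as (L / l_planck)^2 / (16 pi): enormous at everyday
# curvature scales, of order one only near the Planck length.
for L in (1e-3, 1e-20, l_planck):
    print(f"L = {L:.3e} m  ->  S/hbar ~ {action_in_hbar_units(L):.3e}")
```

The precise coefficient (1/16π here) depends on conventions and on the shape of the region; only the scaling with (L/ℓ_Planck)² is the point.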

Regarding theories “near” to QM, many have been proposed, most of which seem to be based on some notion of occasional stochastic collapse in some basis. Many, but not all, of these have the undesirable feature that they don’t conserve energy, besides generally just looking like toy models.

So far as I’m aware, the energy-conserving theories are just fine (i.e., they don’t suffer internal contradictions), and can be tuned so they are not inconsistent with known observations. Of course, this means they look an awful lot like quantum mechanics in the regimes we can presently probe.

I’ve heard at least a couple of semi-serious experimental proposals to probe such theories in regimes where they may be substantially different from QM. In particular, I believe Penrose has got several groups interested in his gravitational collapse model, and I seem to recall some noise being made (about 10-15 years ago) about neutron interferometry being used to test the Ghirardi-Rimini-Weber model. I have not followed either line of development.

Regarding Weinberg’s nonlinear quantum mechanics, I read an article years ago by Peres claiming it violated the Second Law. This certainly seems plausible, although I don’t recall the argument.

Overall I certainly think there’s some room to construct alternative theories of QM that are close to QM in present experimental regimes, but may differ substantially in other regimes. Of course, finding a well-motivated candidate is the hard part. (Though probably not as hard as verifying it experimentally.)

Philip Pearle, who proposed a modification of QM called continuous spontaneous localization (CSL), similar to the GRW model, has quite a few ideas on how to experimentally test for the stochastic/non-linear behaviour he postulates causes the wavefunction collapse. He gave a couple of lectures on CSL at the Perimeter Institute as part of a course on interpretations of quantum mechanics. Apparently the experiments performed so far are not precise enough to confirm the predictions of CSL, but they do not falsify it either. The experiments were conceptually quite simple, so maybe a definitive answer will come out soon enough.

Personally, I find these proposals based on stochastic noise quite interesting, especially given Nelson’s work which, if my primitive understanding is correct, rederives Bohmian mechanics from the idea that all particles undergo Brownian motion. Adding non-linear terms based on stochastic variables to the evolution of quantum systems would not seem so strange if one thinks in terms of Nelson’s work.

I think the problem of not having well-motivated candidates to replace or supplement QM is due to the fact that we have not yet pushed QM to its (experimental) limits, or maybe we have and just don’t yet recognize failures of QM as such.