It’s probably surprising if you don’t already know it, but in the standard “quantum circuit” model quantum computers are actually quite similar to ordinary computers. A quantum computation is built up out of quantum gates that perform quantum logic operations on quantum bits. All of this proceeds pretty much by analogy with classical computers, where gates perform logical operations on bits. There are some technical differences, but the broad picture is pretty similar.

Mark Dowling, Mile Gu, Andrew Doherty and I have recently developed a rather different *geometric* approach to quantum computation. (Here’s the link to the abstract, and here’s the link to the full text at *Science*. Jonathan Oppenheim has also written a nice perspective piece (sorry, I don’t have the full text).)

Our result is pretty simple: we show that finding the best (read smallest) quantum circuit to solve a particular problem is equivalent to finding the shortest path between two points in a particular curved geometry. Intuitively, this problem is like an orienteer or hiker trying to find the shortest path between two points in a hilly landscape, although the space we are working in is harder to visualize. There are some technical caveats to the result, but that’s the general gist.

What use is this? At the moment it’s difficult to say – unfortunately, we don’t yet have any killer applications of the result. But our result does mean that problems in quantum computation can be viewed in terms of equivalent problems in the field known as Riemannian geometry. This opens up the possibility of using some of the deep ideas of Riemannian geometry to solve problems in quantum computing. And who knows: maybe ideas from quantum computing will have a useful stimulating effect, injecting new ideas into the study of geometry!

How was your five months of silence? 🙂

Is the paper going to be posted in arxiv and if so approximately when?

R. R. Tucci: Science’s policy requires a gap of six months between when the paper appears and when it can be posted on the arXiv. So, in about six months.

Dave: Very nice! There was no particular intent to stop posting, I just didn’t have a whole lot to say for that time, and didn’t pressure myself to say anything.

Hm, so fix a metric with a large penalty (4^n).

Then the geodesic distance d(I,U) is a lower bound on the number of gates required to exactly compute U.

And also a polynomial in the geodesic distance is an upper bound on the number of gates required to approximate U (to any fixed constant).
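In symbols, the two one-sided bounds just described can be written schematically (my notation, not from the paper: $G(U)$ for the minimal number of gates to compute $U$ exactly, $G_{\epsilon}(U)$ for the minimal number to approximate $U$ to within a fixed accuracy $\epsilon$, with constants and the polynomial left unspecified):

```latex
d(I,U) \;\le\; c \, G(U)
\qquad \text{and} \qquad
G_{\epsilon}(U) \;\le\; \mathrm{poly}\bigl(d(I,U)\bigr)
```

So the geodesic distance $d(I,U)$ lower-bounds the exact gate count but only upper-bounds the approximate gate count, which is weaker than a two-sided equivalence for either notion alone.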

This isn’t an equivalence, since it is possible that the number of gates required to approximate U is much less, even exponentially less, than the number of gates required to compute U exactly.

(Getting a lower bound on the number of gates to approximate U requires taking the minimum of geodesic lengths over all U’ in a neighborhood of U. And it is possible that a U’ which is nearby in terms of norm is very far away according to the geodesic — exponentially far since the penalty is exponentially large?)

Is there any way known to avoid this caveat? If you could prove an equivalence for a metric with a small penalty, would that suffice? What’s the correct intuition here?

(I haven’t thought about this closely, and probably don’t understand it at all, so am posting anonymously.)

Hey Michael, is that a new policy at Science? Their FAQ still says posting on the arXiv is okay.

Anonymous: The technical caveats I mention in my post are (in part) those you refer to. Of course, an estimate of the Riemannian distance will still enable us to both prove lower bounds on the gate complexity for exact computation, and upper bounds for approximate gate complexity, in both cases for a non-uniform model with no workspace. So I don’t think I’m being terribly sloppy in calling this an equivalence.

Your question about whether it’s possible to avoid this caveat is an interesting one. It turns out to be pretty easy to show that any bound obtained using these types of geometric techniques must necessarily relate to approximate complexity in one direction, and exact complexity in the other. The details will appear as a (very minor) part of a paper that’s just now being finished up.

Dave: Wow. I just looked again at Science’s policy, and you appear to be right. I’m not sure how I arrived at this erroneous impression. Looks like we may be able to post after all!

It seems trivially easy to give a Riemann surface (unbounded genus) for which any 3SAT formula has a satisfying assignment if and only if the length of the geodesic between a corresponding pair of points in the surface is at least 10. A priori, why is there any reason to expect this representation to be useful?

I think your article in Science does not address this, except to say that there are “many powerful tools” in Riemannian geometry. Of course this is true, but on the other hand it is notoriously hard to understand geodesics even in much simpler geometries than the one you propose…

Anonymisto: Since I don’t know what representation you’re referring to, I can’t say whether or why it would be useful.

As regards why techniques from Riemannian geometry (and more generally, control theory and the calculus of variations) might be useful, I discuss this at more length in my earlier paper in Quantum Information and Computation. Of course, the reasons given there are heuristics. I find them compelling – they are what motivated me to work on this in the first place – but I expect that some people will be skeptical. I hope, however, that other people – especially those who know some geometry and/or control theory – will find this connection between quantum computation and geometry interesting, and be inspired to see what can be said on the geometric side.

Our UW quantum system engineering group finds geometric ideas to be very useful. Basically, we regard a quantum state as a fluid that we steer via measurement and control operators.

This approach has led us to focus on the following “sparsification” problem, which relates to the possibility (or not) of finding compressed representations of quantum state trajectories.

Let $\{H_i\}$ be a finite set of Hermitian matrices. Is there a unitary matrix $S$ such that each matrix $SH_{i}S^\dagger$ is sparse?
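A minimal sketch of what checking a candidate sparsifier looks like, in numpy. The toy instance is entirely made up for illustration (it says nothing about the hardness question): two Hermitian matrices that are dense in the computational basis but share an eigenbasis, so a single unitary $S$ built from the eigenvectors of one of them sparsifies both.

```python
import numpy as np

def nnz_fraction(M, tol=1e-12):
    # Fraction of entries of M with magnitude above tol.
    return np.count_nonzero(np.abs(M) > tol) / M.size

def conjugate_all(S, Hs):
    # Apply the candidate sparsifier: H -> S H S^dagger.
    return [S @ H @ S.conj().T for H in Hs]

# Toy instance: H1 = H0^2 commutes with H0, so both are
# diagonalized (hence sparsified) by the same unitary.
rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
H0 = A + A.T            # real symmetric, hence Hermitian
H1 = H0 @ H0            # shares an eigenbasis with H0
_, V = np.linalg.eigh(H0)
S = V.conj().T          # rows of S are eigenvectors of H0

for H, SH in zip([H0, H1], conjugate_all(S, [H0, H1])):
    print(f"nnz fraction: {nnz_fraction(H):.2f} -> {nnz_fraction(SH):.2f}")
```

Verifying a proposed $S$ is easy; the open question above is how hard it is to decide whether any such $S$ exists for a given family $\{H_i\}$.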

This “sparsification” problem turns out to be natural not only in physics, but also in cryptography, in the sense that key distribution schemes can be constructed in which the matrix $S$ is the private key.

In practical cases (both in physics and cryptography), the matrices $H_i$ have dimension ~2048×2048, with about 45000 of the entries being nonzero (i.e., one percent of the entries).

So is the sparsification problem generically NP-hard, or not? Pointers to the literature would be very welcome. AFAICT, determining whether a given problem is NP-hard is itself plausibly NP-hard, and reading the cryptographic literature is certainly NP-hard!

It appears that the application of algebraic topology to problems in distributed computing is becoming well established, as indicated by the awarding of the 2004 Gödel Prize to *The topological structure of asynchronous computability* (PDF here).

Also note that Prakash Panangaden (of McGill University) combines work on quantum computation and the theory of distributed computing, and has shown a strong interest in formal connections of the latter with causal structure in spacetime, i.e., in Riemannian geometry with a causal metric structure.