My major scientific interest for the past few months has been in using ideas from differential geometry to develop insight into quantum computation. I’ve just posted a paper (pdf) at the preprint archive on the subject. This post is a somewhat technical overview of what’s done in the paper. The post will, I hope, make a fair bit of sense to people who know something about the technical details of quantum computation, but the latter parts will be hard going for others.

I’ve also posted a talk (PowerPoint) I gave a couple of weeks ago on the subject.

Update (2005/02/19): Some typos in my talk fixed, with thanks to Wim van Dam.

One of the most notoriously difficult problems in computation, either quantum or classical, is to figure out the minimal circuit that can be used to compute some function.

Even the easier problem of proving a lower bound on the size of the minimal circuit seems to be intractable for most interesting functions – this is why famous problems like P vs. NP are so hard.

Inspired by geometric control theory and ideas from Riemannian and Finsler geometry, I’ve been spending my time trying to develop a general approach to the proof of lower bounds on circuit size.

The technical details are lengthy, but the basic motivating ideas are fairly simple.

The starting point is the observation that what we’re really trying to do is minimize a function (number of gates) over the space of quantum circuits computing the desired function.

As any high school maths student can tell you, minimizing a discrete function like “number of gates” is a heck of a lot harder than minimizing a smooth function, since we can use the powerful tools of the differential calculus to solve the latter problem.

The first thing done in the paper is therefore to replace the discrete problem of finding a minimizing quantum circuit by a more smoothly formulated problem, which I call the Hamiltonian control problem. Basically, we embed the (discrete) circuit optimization problem inside a smoothed version which lets us use the tools of the calculus.
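As a toy sketch of the embedding (my own illustration, not code from the paper), a quantum circuit can be read as evolution under a piecewise-constant control Hamiltonian H(t): each gate is generated by evolving under some fixed Hamiltonian for some duration, so a discrete gate sequence becomes a continuous path U(t) satisfying dU/dt = -iH(t)U(t). Here a two-gate single-qubit circuit is rebuilt that way:

```python
import numpy as np
from scipy.linalg import expm

# View a two-gate circuit as a piecewise-constant Hamiltonian
# control problem: U = exp(-i H2 t2) exp(-i H1 t1).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Two "gates", each generated by evolving under a fixed Hamiltonian.
segments = [(X, np.pi / 2), (Z, np.pi / 4)]  # (Hamiltonian, duration)

U = np.eye(2, dtype=complex)
for H, t in segments:
    U = expm(-1j * H * t) @ U   # evolve: dU/dt = -i H(t) U

# The same unitary, written directly as a product of gate matrices.
U_gates = expm(-1j * Z * np.pi / 4) @ expm(-1j * X * np.pi / 2)
print(np.allclose(U, U_gates))  # True
```

The smoothed problem then asks for the control H(t) of least total cost steering I to a target U, rather than for a least count of discrete gates.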

This smoothed problem turns out to be essentially a problem of finding “shortest paths” on some suitable geometric space, in this case, the space of n-qubit unitary operations.

The upshot is that we can get a lower bound on the minimal circuit size by computing the length of the minimal geodesic between the desired unitary quantum computation, U, and the identity operation, I, where length is defined by some suitable metric structure.
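The simplest instance of such a distance (again my own illustration, and not one of the penalized metrics constructed in the paper) is the bi-invariant Frobenius metric, for which the geodesic distance from I to U is just the norm of the principal logarithm of U:

```python
import numpy as np
from scipy.linalg import expm, logm

def biinvariant_distance(U):
    """Length of the shortest geodesic from I to U under the
    bi-invariant (Frobenius) metric: the norm of H, where
    U = exp(-iH) with H the principal Hamiltonian."""
    H = 1j * logm(U)            # principal log, so U = exp(-iH)
    return np.linalg.norm(H)    # Frobenius norm of H

theta = 0.3
X = np.array([[0, 1], [1, 0]], dtype=complex)
U = expm(-1j * theta * X)       # a small rotation generated by X
print(biinvariant_distance(U))  # equals theta * sqrt(2) here
```

The metrics that actually give circuit-size lower bounds charge different tangent directions very differently, which is exactly where the Finsler structure below comes in.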

(An interesting technical point is that the right type of metric structure to use is a so-called Finsler metric structure, which generalizes the Riemannian metrics familiar to physicists by replacing quadratic forms by a more general norm function.)
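To make the Riemannian-versus-Finsler distinction concrete, here is a toy cost function on tangent directions (the coefficients and the penalty value are hypothetical, chosen only for illustration): a Hamiltonian is expanded over Pauli terms, and the Finsler norm charges an l1-style cost that penalizes many-body terms, something no quadratic form can do:

```python
import numpy as np

# Toy illustration (not the paper's metric): cost of a tangent
# direction H, written as coefficients over n-qubit Pauli terms.
coeffs  = np.array([0.5, -0.2, 0.1])  # hypothetical Pauli coefficients
weights = np.array([1, 2, 3])         # qubits each term acts on
PENALTY = 100.0                       # hypothetical charge on >=3-body terms

def riemannian_norm(c):
    # quadratic form: sqrt(c^T G c), with G the identity here
    return np.sqrt(np.sum(c ** 2))

def finsler_norm(c, w, p=PENALTY):
    # homogeneous but non-quadratic: many-body directions cost extra
    scale = np.where(w >= 3, p, 1.0)
    return np.sum(scale * np.abs(c))

print(riemannian_norm(coeffs), finsler_norm(coeffs, weights))
```

Both functions are homogeneous in the velocity, as a length functional requires, but only the first comes from a quadratic form.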

Now the calculus comes into play. By using the calculus of variations, it’s possible to derive a second order differential equation (the “geodesic equation”) describing the geodesics of the metric. Because the geodesic equation is second order, once an initial position and velocity are set, the remainder of the geodesic is completely determined.
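The determinism of a second-order geodesic equation is easy to see numerically in a simpler setting. The sketch below (an illustration on the ordinary 2-sphere, not on the unitary group) integrates the standard sphere geodesic equations from an initial position and velocity; starting on the equator heading east, the solution is forced to trace out the equatorial great circle:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Geodesic equation on the unit 2-sphere in coordinates (theta, phi):
#   theta'' = sin(theta) cos(theta) (phi')^2
#   phi''   = -2 cot(theta) theta' phi'
def geodesic_rhs(t, y):
    th, ph, dth, dph = y
    return [dth, dph,
            np.sin(th) * np.cos(th) * dph ** 2,
            -2.0 * (np.cos(th) / np.sin(th)) * dth * dph]

# Second order: position + velocity at t = 0 fix the whole curve.
y0 = [np.pi / 2, 0.0, 0.0, 1.0]   # start on the equator, heading east
sol = solve_ivp(geodesic_rhs, (0, 2 * np.pi), y0, rtol=1e-10, atol=1e-10)
print(np.max(np.abs(sol.y[0] - np.pi / 2)))  # stays on the equator
```

Nothing further needs to be specified: once the initial data are fixed, the entire geodesic is pinned down, which is the property the text contrasts with circuit design.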

This is in sharp contrast with the usual case in circuit design, either classical or quantum, where being given part of an optimal circuit does not obviously assist in the design of the rest of the circuit.

This contrast makes me think this approach is really worth pursuing as an approach to quantum circuit lower bounds.

That’s the broad conceptual picture of the program. I’ve also made some technical progress on implementing the program, which I won’t try to describe here, beyond quoting the second half of the abstract:

In this paper we construct several Finsler metrics whose minimal length geodesics provide lower bounds on quantum circuit size, and give a procedure to compute the corresponding geodesic equation. We also construct a large class of solutions to the geodesic equation, which we call Pauli geodesics, since they arise from isometries generated by the Pauli group. For any unitary U diagonal in the computational basis, we show that: (a) provided the minimal length geodesic is unique, it must be a Pauli geodesic; (b) finding the length of the minimal Pauli geodesic passing from I to U is equivalent to solving an exponential size instance of the closest vector in a lattice problem (CVP); and (c) all but a doubly exponentially small fraction of such unitaries have minimal Pauli geodesics of exponential length.
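For readers unfamiliar with CVP, here is the problem in miniature (a brute-force toy on a hypothetical two-dimensional lattice; the instances arising in the paper are exponential size, which is precisely what makes them hard):

```python
import itertools
import numpy as np

def closest_vector(basis, target, coeff_range=range(-3, 4)):
    """Brute-force CVP on a tiny lattice: search integer coefficient
    vectors and return the lattice point nearest to `target`."""
    best, best_dist = None, np.inf
    for coeffs in itertools.product(coeff_range, repeat=len(basis)):
        v = np.array(coeffs) @ np.array(basis)  # integer combination
        d = np.linalg.norm(v - target)
        if d < best_dist:
            best, best_dist = v, d
    return best, best_dist

basis = [[2.0, 0.0], [0.0, 3.0]]   # hypothetical lattice basis
target = np.array([2.4, 3.9])
v, d = closest_vector(basis, target)
print(v, d)   # nearest lattice point and its distance
```

The search is exponential in the lattice dimension, which is why reducing minimal Pauli geodesic length to an exponential-size CVP instance is evidence of hardness rather than an efficient algorithm.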


1. Dear Michael Nielsen!

I graduated at St. Lucia in 1984, a long time ago.
Anyway, I feel I can give you certain information about the boundary conditions you will encounter in any application of the metricated scenarios in your paper.

This is more general, but derives certain standards in the Standard Models of both Particle Physics and Cosmology via the modular duality of M-Theory.
Knowing those, and the demetricated nature of the underpinning plenum of the vacuum, might help you greatly to fine-tune your models.

Best of Science to you, Tony B.

Michael,

My initial observation is that you don’t need to reinvent the wheel, or in this case, something like K-Maps (Karnaugh maps) and Logic Minimization.

Try “Design of Computers and Other Complex Digital Devices”, Sunggu Lee, 1999, for a pretty good overview that mixes the logic and fundamentals. Also the always handy “Photonics Rules of Thumb”. The design bibles and CAD stuff are standard for the most part (like physics texts, as opposed to letters or journals that cover a specific area). These and the IEEE sites should be enough to chew on engin-wise.

Otherwise there’s NIST, of course (& EEEL), but also NTSC might give you some surprises. The tech support sites the high end suppliers maintain are famously coy, but some might be worth digging through.

Here’s a link to a good NIST FAQ to mull over (metrics and such).

http://www.boulder.nist.gov/timefreq/phase/Properties/main.htm

http://www.boulder.nist.gov/timefreq/phase/Properties/ten.htm

And here’s the list of papers:

http://www.boulder.nist.gov/timefreq/ion/qucomp/papers.htm#x_or

Interestingly, my main recommendation is that you think about the basic problems with less subtlety and ‘smarts’, which kind of requires explaining. The catch-22 of technical design is that to be seriously productive, often one has to be dumb and obvious and literal. You’re racing TOO far ahead of the actual experimental tech, and thinking TOO abstractly, which is why I’m accusing you of being too smart.

If we’re talking rf ion traps aping XOR truth tables at the atomic level, ok. This relates to specific hardware and experimental apparatus running differently in various labs and set ups. The metrology guys (EEEL) use custom devices and their own algorithms to process the signals (see IEEE, “Banishing quasiparticles from Josephson-junction qubits: why and how to do it”). The NIST tech is custom designed and fabricated, and highly tweaked. So in practice the use of the term “quantum” in relation to computing, electronics, gates, or logic is just a phrase that’s replacing ‘photonics’, ‘fiber optics’, ‘photo-optics’, etc.

Be dumb with me for a second. Ben Franklin messed up the + and – poles, right? Newton tried to measure spectral ratio with a shadow from a strand of his hair. If you had to build the standard model from scratch, and were ignorant of the requirements of both relativity and QM, where would you begin? Hypothetically speaking?