Quantum computing, one step at a time
For anyone interested in when we’ll have large-scale quantum computers, there was a very striking paper released today, by Manny Knill.
To explain Knill’s paper, I need to give a little background.
Just a little over ten years ago, quantum computing suddenly took off when Peter Shor announced his fast algorithm for quantum factoring.
In 1994, large-scale quantum computing looked like a pipe dream. Several people said publicly (and far more said it privately): “you’ll never ever be able to build a device like that”.
The most common criticisms ran something like this: “to quantum compute, you need to do Y and Z. At the moment in the lab, you can’t even do A and B. Therefore, you’ll never be able to quantum compute.”
The tempting response is to say “well, we’re just about to do C and D, and E and F look pretty likely as well in a few years time. So maybe Y and Z aren’t so unrealistic.”
That’s a tempting response, and it’s how I used to respond to this sort of criticism. But it’s turned out that neither the criticism nor the response is accurate.
What’s wrong with the argument is not (or not only) that it doesn’t take account of technological progress from A to B to C and so on.
No, the main thing wrong with the argument has turned out to be that it doesn’t take enough account of theoretical progress.
Sure, over the years experimentalists have moved from A to B to C to D, and they’re getting on to E and F.
But in the meantime, theorists have shown that, actually, you only need to do J and K to quantum compute.
(Alphabet not to scale. At least, not as far as I know.)
In short, lots of discussions of the future of quantum computing are framed as though it’s a fixed target, like building a teraflop laptop.
But it’s not. It’s a fluid target, and pure theory can move us a lot closer to the target, without technology improving a whit.
What’s this all got to do with Knill’s paper?
To go back to 1994 again, one of the first responses to Shor’s paper was a flurry of claims, some of them public, that quantum computing would never be possible because of the effects of noise.
Roughly speaking, the argument was that quantum states are analogue beasts, and it’s very difficult or impossible to protect analogue information against the effects of noise. Therefore, quantum computers will inevitably be overwhelmed by the effects of noise.
In late 1995 and early 1996, Peter Shor and Andrew Steane independently showed that quantum information, despite appearances to the contrary, actually behaves much more like digital information. In particular, it turns out that the analogue continuum of errors apparently afflicting quantum states can effectively be digitized, and this enables error-correction techniques to be applied to protect against the effects of noise.
I’d like to emphasize, by the by, that this ability to digitize is a deep result, not at all obvious, and depends critically on certain special properties of quantum mechanics. It’s difficult to give a short pat explanation, even to experts on quantum mechanics, and I won’t try to explain it here.
The error-correction techniques were quickly extended to an entire theory of fault-tolerant quantum computation. I won’t try to name names here, as a very large number of people were involved. Indeed, the development of fault-tolerance meant that 1996 was perhaps the one year in the last ten in which progress toward quantum computing really did seem rapid!
Roughly speaking, the takeaway message from all this work went something like this, circa the end of 1996:
“Suppose I can build individual quantum computer elements so they work with an accuracy of 99.9999% – the so-called threshold value. Without error-correction, if I put 10 million together, there will probably be a few total failures that screw everything up. But I can use error-correction to reduce my error rate as low as I like, provided all my elements work better than that threshold. What’s more, this is true no matter how large the computation.”
This idea of a threshold for quantum computing is an incredibly important one, and there’s been quite a bit of work on trying to improve that number (99.9999%).
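To get a feel for why a threshold exists at all, here’s a rough sketch of the standard concatenated-code scaling argument (this is illustrative only, not the construction in Knill’s paper, and the constant used is made up): each level of encoding maps a per-element error rate p to roughly c·p², so below the threshold 1/c the logical error rate falls doubly exponentially with the number of levels, while above it concatenation only makes things worse.

```python
def logical_error_rate(p, levels, c=1e4):
    """Rough logical error rate after `levels` of code concatenation.

    p      -- per-element error rate (e.g. 1e-6 for 99.9999% accuracy)
    levels -- number of concatenation levels
    c      -- combinatorial constant; 1/c is the threshold (1e-4 here,
              an invented figure purely for illustration)
    """
    for _ in range(levels):
        # At each level, a logical failure needs (roughly) two sub-block
        # failures, in any of ~c combinations: p -> c * p**2.
        p = c * p * p
    return p

# Below threshold (p = 1e-5 < 1e-4): each level slashes the error rate.
for k in range(4):
    print(k, logical_error_rate(1e-5, k))

# Above threshold (p = 1e-3 > 1e-4): concatenating makes matters worse.
print(logical_error_rate(1e-3, 2))
```

The double exponential is the whole point: a handful of concatenation levels buys an error rate “as low as I like”, at only polylogarithmic overhead, provided the elements start out below threshold.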
Knill’s paper is the latest in that line of work, improving the threshold.
What’s Knill’s value?
His threshold value is 97%. Or 99%, with more modest resource requirements in terms of number of gates, memory steps, etcetera.
To state the obvious, having to do things to an accuracy of 97% (or 99%) is a far more encouraging state of affairs than 99.9999%.
There are a lot of caveats. The 97 / 99% number is for a rather artificial error model. Real physical systems don’t behave like Knill’s model. Indeed, it’s not really clear what that number translates into in terms of actual physical parameters – heating rates, dephasing times, and so on. And Knill uses a model satisfying some assumptions that may not be such a good approximation in real physical systems; indeed, they may even fail completely. (This is true of all the threshold results, although there’s been extremely encouraging recent progress on this front too.) Finally, Knill’s approach has a rather high resource overhead.
All the same, no matter how you look at it, moving from 99.9999% to 97 / 99% is pretty darn encouraging, despite the caveats. And I do wonder just how high the threshold can go.