## Doubling up

As luck would have it, I submitted two papers to the quant-ph preprint archive today. They should both show up in a day or two. Both contain what I unabashedly think is some pretty neat stuff, so I’ll describe them here.

The first paper (joint work with Chris Dawson, Henry Haselgrove, Andrew Hines, Duncan Mortimer, and Tobias Osborne) relates quantum computing to something that, on the surface, you’d think had absolutely nothing to do with quantum computing: counting solutions to sets of polynomial equations.

In particular, we show that the problem of determining the output of a quantum computation is actually equivalent to the problem of being able to estimate the number of solutions to certain sets of polynomial equations, which we explicitly construct.

This is really quite odd. After all, it is widely believed that quantum computers can efficiently simulate any physical system. If you believe this, then what this result tells you is that simulating nature can be boiled down to counting solutions to polynomial equations.

What the heck does the simulation of physical systems have to do with counting solutions to polynomial equations? A priori, you wouldn’t think they’d be all that related. Yet the two problems turn out to be the same, in some sense. I find this really quite remarkable.

I should say, by the way, that other people have shown some related results in the past. I won’t give an account of that history here, though, as we have a detailed discussion in the paper, and I won’t do it justice in this small space.

The second paper (joint with Denes Petz, who has been visiting us at UQ) is a short expository note. It’s a simple, elementary account of what may be just about the most important result in quantum information theory – the strong subadditivity inequality for entropy, first proved by Lieb and Ruskai in 1973. Almost everything we know about entropy comes from this inequality.
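For concreteness, here is the inequality itself (the statement is standard, though the notation here is mine rather than the note’s): for any quantum state ρ_ABC of a tripartite system, the von Neumann entropies of the reduced states satisfy

```latex
S(\rho_{AB}) + S(\rho_{BC}) \;\ge\; S(\rho_{ABC}) + S(\rho_{B}),
```

where $S(\rho) \equiv -\operatorname{tr}(\rho \log \rho)$, and each reduced state is obtained by tracing out the remaining systems.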

In the 1980s Denes found a simple (and, I think, extremely beautiful) proof of this result. Our note gives an account of this proof that is (a) short; (b) easy-to-understand (we assume only basic linear algebra and quantum mechanics); and (c) where you can learn something interesting at pretty much every step, if you pay attention. That’s the kind of proof I love to read, myself, which is why I’m so excited about this note!


## Hiatus

I’m heading off shortly for two weeks, for a many-body physics conference in Santa Fe, and a quantum information conference in Tokyo. Should be fun, especially as I’ve never visited Japan before. In the meantime, posting is likely to be light to non-existent, i.e., pretty much the same as the last couple of weeks.


## What’s wrong with those quantum cryptosystems?

One of the most exciting and beautiful ideas in modern cryptography is the technique of “quantum key distribution” (QKD). QKD provides a method by which two parties (Alice and Bob) can communicate secretly with one another. The beautiful thing is that the security of their communication is guaranteed by the laws of nature.

This method has been demonstrated over the past fifteen years through an ingenious series of experiments by several research groups, and it is now possible to purchase off-the-shelf systems for doing QKD.

In this post I’m going to play Devil’s Advocate, explaining what I think are some problems with QKD, and some likely failure points. In particular, I want to explain what I think is a reasonable scenario which could lead to public claims of having broken QKD.

This is a long post, but it’s not technical, and you don’t need to know anything about quantum mechanics (or cryptography) to understand it.

### Background

QKD is conceptually quite simple. The basic problem is for Alice and Bob to establish a shared secret key which only they know, without meeting or using some kind of trusted courier; all their communication is done openly, in a way that may be susceptible to interception. They want to do this in a way which is secure against an eavesdropper, Eve, who has access to all that communication. The key can then be used to send secret messages.

To accomplish this task, Alice sends Bob a string of specially prepared quantum systems. These systems could be photons (particles of light), or some other quantum system. Bob then performs measurements on those systems in order to accomplish two things:

* First, Bob chews up some fraction of his systems simply checking whether an eavesdropper is trying to listen in on his communications with Alice. It is a basic principle of quantum mechanics that to gain information about a system you have to disturb it. This principle lets Bob detect the presence of an eavesdropper.

* If Bob determines that with high probability no eavesdropper is listening in, then Alice and Bob can obtain some secret key bits from the remaining systems.

There are some subtleties which this description misses, but that’s the basic idea: to exploit the principles of quantum mechanics to detect eavesdroppers.
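The idea can be captured in a toy simulation. The sketch below is a heavily simplified BB84-style model (the names and structure are my own, and it omits error correction, privacy amplification, and real channel noise): a bit measured in the wrong basis comes out random, so an intercept-and-resend eavesdropper inevitably introduces errors that Alice and Bob can detect by comparing a sample of their sifted key.

```python
import random

def bb84_round(n, eavesdrop, rng):
    """Toy BB84-style round: returns (errors, sifted key length)."""
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            # Eve measures in a random basis; a wrong-basis measurement
            # randomises the bit, and the forwarded photon is left in
            # Eve's basis, not Alice's.
            e_basis = rng.randint(0, 1)
            if e_basis != a_basis:
                bit = rng.randint(0, 1)
            a_basis = e_basis
        # Bob's measurement: a matching basis reproduces the bit,
        # a mismatched basis gives a random outcome.
        bob_bits.append(bit if b_basis == a_basis else rng.randint(0, 1))

    # Sifting: keep only positions where Alice's and Bob's bases agreed.
    sifted = [(a, b)
              for a, b, ab, bb in zip(alice_bits, bob_bits,
                                      alice_bases, bob_bases)
              if ab == bb]
    errors = sum(1 for a, b in sifted if a != b)
    return errors, len(sifted)

rng = random.Random(0)
quiet_errors, n_quiet = bb84_round(2000, eavesdrop=False, rng=rng)
noisy_errors, n_noisy = bb84_round(2000, eavesdrop=True, rng=rng)
```

In this noiseless model the sifted bits agree perfectly when no one is listening, while an intercept-and-resend Eve corrupts roughly a quarter of them – which is exactly the disturbance Bob checks for.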

The idea was proposed in 1984 by Charles Bennett and Gilles Brassard, building on earlier ideas of Stephen Wiesner. Since that time a string of papers by many different research groups has provided increasingly strong proofs of security, proofs based not on any cryptographic assumptions, but rather on the laws of nature themselves.

### Principles of security and insecurity

On the face of it the situation for quantum cryptography looks pretty good. Anyone breaking a quantum cryptosystem won’t just get to read someone else’s mail; they’ll also win a free trip to Stockholm, and lasting fame as a Nobel prizewinner. Why not get started with the construction of the quantum internet? At the least, shouldn’t banks and governments be using this technology?

Things aren’t quite as simple as that.

One of the basic principles of ordinary (classical) cryptography is that the algorithms used to do encryption and decryption should be public.

Not everyone follows this principle. Some companies have been known to practice “security by obscurity” (a nice explanation is here), wherein they keep their algorithms private, often using special proprietary algorithms.

Security through obscurity is a bad idea. This has been demonstrated through many, many well-publicised cases in which a company or group of companies releases a product containing a proprietary cryptosystem whose details they refuse to divulge, often making wild claims about it being “unbreakable”. Several months later some clever hacker reverse engineers the system, finds the cryptosystem to be laughably weak, and posts a method for breaking it on the net.

This isn’t just a theoretical possibility. Spectacular recent examples include the proprietary DVD encryption algorithm, broken in 1999 by Norwegian teenager Jon Johansen, and the GSM cellular phone encryption code, broken in 2003 by academics Elad Barkan, Eli Biham and Nathan Keller.

The solution is to make your encryption and decryption algorithms public, and then to subject them to the most rigorous possible peer review. A cryptosystem which survives a decade or two of attacks is likely pretty good, while one which relies for its security on being obscure is not likely to be as strong, and is always vulnerable to being reverse engineered.

On the face of it, quantum cryptosystems don’t fall into the trap of security by obscurity. Anyone can read the original paper of Bennett and Brassard describing QKD, or any of the later proposals. And anyone can read the various papers describing proofs of security.

Admittedly, the community of people both able and interested in understanding those papers is small, but it isn’t so small that the number is negligible. It’s certainly in the thousands, and maybe the tens of thousands. That’s a pretty good community for peer review.

What I want to argue, however, is that in an important sense quantum cryptography does rely on security by obscurity. The fact is that while anyone can read about quantum cryptography, only a tiny number of people can actually use a real live quantum cryptosystem.

The effect is that peer review is confined to theoretical constructs, not to real systems. And as a wise person once noted, while in theory there is no difference between theory and practice, in practice there is.

This difference is much less important classically. When a new classical cryptosystem is developed, anyone can write some code implementing it, and post it on the internet. It can then be passed around the world, and attacks made. The key point is that implementations of classical cryptosystems are cheap and easily distributed.

By contrast, implementations of quantum cryptosystems are expensive and hard to distribute. This means that while the details of a cryptosystem may be public, that doesn’t mean that a lot of people can actually use it in practice.

Why is this important? The problem is that while a cryptosystem may be secure in some theoretical sense, real implementations inevitably deviate somewhat from the ideal theoretical description. Sometimes these deviations result in “back doors” that enable a cryptosystem to be broken.

A spectacular example of this is the so-called timing attack on RSA. RSA is one of the best (and most secure) of the classical cryptosystems. At a theoretical level it’s survived nearly 30 years of attacks.

But at the implementation level, a few years ago Paul Kocher noticed a way of compromising RSA. In particular, by carefully timing certain operations, it turns out to be possible to determine the keys used in RSA, and so break the system!

The reason this attack works is because in a real implementation of RSA, different operations may take different amounts of time, and this can be used against the system. This fact was originally ignored in theoretical analyses. It’s a great example of a “back door” arising in an actual implementation, which departs from the theoretical ideal.

The reason we still have a lot of confidence in RSA is that (a) at a theoretical level, it’s survived all attacks; and (b) few back doors have been found in real implementations, and those back doors have been plugged.

What’s the connection to QKD? I would not be surprised if similar back doors exist in QKD systems that come onto the market over the next few years. In fact, I’d be much less surprised by such back doors than I was by the back door in RSA. Why? Because only a very small number of people have actually had access to a real live quantum cryptosystem, and most of those people have had far more interest in developing the system than in breaking it.

### “Breaking” QKD

Let me sketch out a scenario I think is reasonably likely.

At some point over the next five years, some dedicated person buys themselves a commercial QKD system. They take it apart, and figure out exactly how it works. They note carefully all the ways the real implementation differs from the theoretical ideal. And then they figure out a back door, a way to break that implementation.

Let me go a little further in the Devil’s Advocate role and sketch out two possible chains of events that might occur at that point.

First, maybe they’re a researcher with a sensationalist frame of mind. They write up their results in a paper entitled “Quantum cryptography broken”, and send it to Nature, where it causes a huge storm of publicity, and sets back confidence in QKD by five or ten years.

Second, maybe they have more nefarious purposes, and don’t publish their results at all, merely using them to covertly break systems in actual use.

Are these things to worry about? Given the extraordinary claims being made – proponents of QKD advertise “unbreakable security” – I think it’s very reasonable to worry about these eventualities, even if they are extremely remote. And, personally, I’m not so sure that they’re all that remote.

### Solutions

I propose that the principle of avoiding security by obscurity needs to be recast in a more strict quantum version. In particular, we need to obey the following principles:

To build the best possible cryptosystems we need to ensure that (a) the theoretical basis is entirely public; (b) actual implementations are widely and publicly available; and (c) real implementations undergo extended periods of testing.

Systems that don’t obey these principles are unlikely to be secure. Of course, just because a system obeys these principles doesn’t mean it is secure, but I believe they are a precondition for claims of security to be taken seriously. At the moment, I don’t think QKD satisfies all these principles; in particular, (b) and (c) have not been followed.

How can these principles be followed in the case of QKD?

Here’s one way: funding agencies ought to fund installations which are accessible to multiple groups. One or more of those groups is funded to develop a cryptosystem. And several groups, operating independently, are funded to attack that cryptosystem, by whatever means necessary.

Maybe something like this is already done. If so, I haven’t heard about it. But I certainly think something like it is necessary to justify confidence in quantum cryptography.
