Via Illuminating Science, Google now does unit conversions. Google for “convert 17 pounds to kilograms”, and back it’ll come. More ambitiously, “convert 3 furlongs per fortnight to parsecs per second”, and back it’ll come.
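For the curious, the arithmetic behind such a conversion is simple. Here's a sketch using the standard conversion factors (the function names are my own, not anything Google exposes):

```python
# Toy unit converter, sketching the arithmetic behind queries like
# "convert 17 pounds to kilograms". The factors are standard definitions.
FURLONG_M = 201.168                # metres per furlong (exact)
FORTNIGHT_S = 14 * 24 * 3600       # seconds per fortnight
POUND_KG = 0.45359237              # kilograms per pound (exact)
PARSEC_M = 3.0856775814913673e16   # metres per parsec

def pounds_to_kg(lb):
    return lb * POUND_KG

def furlongs_per_fortnight_to_parsecs_per_second(v):
    metres_per_second = v * FURLONG_M / FORTNIGHT_S
    return metres_per_second / PARSEC_M

print(pounds_to_kg(17))  # about 7.71 kg
print(furlongs_per_fortnight_to_parsecs_per_second(3))  # a very small number
```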
Doubling up
As luck would have it, I submitted two papers to the quant-ph preprint archive today. They should both show up in a day or two. Both contain what I unabashedly think is some pretty neat stuff, so I’ll describe them here.
The first paper (joint work with Chris Dawson, Henry Haselgrove, Andrew Hines, Duncan Mortimer, and Tobias Osborne) relates quantum computing to something that, on the surface, you’d think had absolutely nothing to do with quantum computing: counting solutions to sets of polynomial equations.
In particular, we show that the problem of determining the output of a quantum computation is actually equivalent to the problem of being able to estimate the number of solutions to certain sets of polynomial equations, which we explicitly construct.
This is really quite odd. After all, it is believed pretty likely that quantum computers can be used to simulate any physical system. If you believe this, then what this result tells you is that simulating nature can be boiled down to counting solutions to polynomial equations.
What the heck does the simulation of physical systems have to do with counting solutions to polynomial equations? A priori, you wouldn’t think they’d be all that related. Yet the two problems turn out to be the same, in some sense. I find this really quite remarkable.
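To make the counting problem itself concrete, here is a brute-force count of the solutions to a small system of polynomial equations over the two-element field GF(2). The system below is my own arbitrary toy example, not one of the systems constructed in the paper:

```python
from itertools import product

def count_solutions(equations, num_vars):
    """Brute-force count of assignments in GF(2)^num_vars satisfying
    every equation (each given as a function that is 0 mod 2 on a solution)."""
    count = 0
    for assignment in product((0, 1), repeat=num_vars):
        if all(eq(*assignment) % 2 == 0 for eq in equations):
            count += 1
    return count

# Toy system over GF(2): x*y + z = 0 and x + y + 1 = 0.
eqs = [
    lambda x, y, z: x * y + z,
    lambda x, y, z: x + y + 1,
]
print(count_solutions(eqs, 3))  # → 2
```

Of course, brute force takes time exponential in the number of variables; the whole point of the equivalence is that estimating such counts for the constructed systems is exactly as hard as quantum computation.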
I should say, by the way, that other people have shown some related results in the past. I won’t give an account of that history here, though, as we have a detailed discussion in the paper, and I won’t do it justice in this small space.
The second paper (joint with Denes Petz, who has been visiting us at UQ), is a short expository note. It’s a simple elementary account of what may be just about the most important result in quantum information theory – the strong subadditivity inequality for entropy, first proved by Lieb and Ruskai in 1973. Almost everything we know about entropy comes from this inequality.
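For reference, here is the inequality itself, in standard notation, where S denotes the von Neumann entropy and the subscripts indicate reduced states of a tripartite system ABC:

```latex
S(\rho_{ABC}) + S(\rho_{B}) \leq S(\rho_{AB}) + S(\rho_{BC})
```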
In the 1980s Denes found a simple (and, I think, extremely beautiful) proof of this result. Our note gives an account of this proof that is (a) short; (b) easy-to-understand (we assume only basic linear algebra and quantum mechanics); and (c) where you can learn something interesting at pretty much every step, if you pay attention. That’s the kind of proof I love to read, myself, which is why I’m so excited about this note!
Hiatus
I’m heading off shortly for two weeks, for a many-body physics conference in Santa Fe, and a quantum information conference in Tokyo. Should be fun, especially as I’ve never visited Japan before. In the meantime, posting is likely to be light to non-existent, i.e., pretty much the same as the last couple of weeks.
Illuminating Science
Physics at UQ has started a blog – Illuminating Science. It’s being run by Joel Gilmore, and Joel already has a bundle of interesting posts up, including a post on Cassini’s discovery of new moons for Saturn, a post on the current status of the famous “Gravity Probe B”, intended to test Einstein’s general theory of relativity, and a post on who owns what in space. Check it out.
What’s wrong with those quantum cryptosystems?
One of the most exciting and beautiful ideas in modern cryptography is the technique of “quantum key distribution” (QKD). QKD provides a method by which two parties (Alice and Bob) can communicate secretly with one another. The beautiful thing is that the security of their communication is guaranteed by the laws of nature.
This method has been demonstrated over the past fifteen years through an ingenious series of experiments by several research groups, and it is now possible to purchase off-the-shelf systems for doing QKD.
In this post I’m going to play Devil’s Advocate, explaining what I think are some problems with QKD, and some likely failure points. In particular, I want to explain what I think is a reasonable scenario which could lead to public claims of having broken QKD.
This is a long post, but it’s not technical, and you don’t need to know anything about quantum mechanics (or cryptography) to understand it.
Background
QKD is conceptually quite simple. The basic problem is for Alice and Bob to share a secret key which only they know, without the necessity of meeting or using some kind of trusted courier; all communication is done openly, in a way that may be susceptible to interception. This key can then be used to send secret messages. They want to do this in a way which is secure against an eavesdropper, Eve, who has access to all their communication.
To accomplish this task, Alice sends Bob a string of specially prepared quantum systems. These systems could be photons (particles of light), or some other quantum system. Bob then performs measurements on those systems in order to accomplish two things:
* First, Bob chews up some fraction of his systems simply checking whether an eavesdropper is trying to listen in on his communications with Alice. It is a basic principle of quantum mechanics that to gain information about a system you have to disturb it. This principle lets Bob detect the presence of an eavesdropper.
* If Bob determines that with high probability no eavesdropper is listening in, then Alice and Bob can obtain some secret key bits from the remaining systems.
There are some subtleties which this description misses, but that’s the basic idea: to exploit the principles of quantum mechanics to detect eavesdroppers.
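For the curious, here is a minimal sketch of the flavour of protocol being described, modelled loosely on BB84. It simulates an ideal channel and a simple intercept-and-resend eavesdropper; it is a toy, not a description of any real system:

```python
import random

def bb84_round(n, eavesdrop=False, seed=0):
    """Simulate n qubits of a BB84-style exchange.
    Returns (sifted key pairs, error rate Alice and Bob would observe)."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]

    received = []
    for bit, basis in zip(alice_bits, alice_bases):
        if eavesdrop:
            # Eve measures in a random basis and resends: measuring in
            # the wrong basis randomises the bit she forwards.
            eve_basis = rng.randint(0, 1)
            if eve_basis != basis:
                bit = rng.randint(0, 1)
            basis = eve_basis
        received.append((bit, basis))

    # Bob measures; a mismatched basis gives a random outcome.
    bob_results = []
    for (bit, basis), b_basis in zip(received, bob_bases):
        bob_results.append(bit if basis == b_basis else rng.randint(0, 1))

    # Sifting: keep only positions where Alice's and Bob's bases agree.
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_results, alice_bases, bob_bases)
              if ab == bb]
    errors = sum(a != b for a, b in sifted)
    return sifted, errors / max(len(sifted), 1)

_, quiet = bb84_round(2000)
_, noisy = bb84_round(2000, eavesdrop=True)
print(quiet, noisy)  # eavesdropping pushes the error rate to ~25%
```

On an ideal channel the error rate is zero unless Eve intervenes, in which case intercept-and-resend forces errors on roughly a quarter of the sifted bits, which is how Bob detects her.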
The idea was proposed in 1984 by Charles Bennett and Gilles Brassard, building on earlier ideas of Stephen Wiesner. Since that time a string of papers by many different research groups has provided increasingly strong proofs of security, proofs based not on any cryptographic assumptions, but rather on the laws of nature themselves.
Principles of security and insecurity
On the face of it the situation for quantum cryptography looks pretty good. Anyone breaking a quantum cryptosystem won’t just get to read someone else’s mail; they’ll also win a free trip to Stockholm, and lasting fame as a Nobel prizewinner. Why not get started with the construction of the quantum internet? At the least, shouldn’t banks and governments be using this technology?
Things aren’t quite as simple as that.
One of the basic principles of ordinary (classical) cryptography is that the algorithms used to do encryption and decryption should be public.
Not everyone follows this principle. Some companies have been known to practice “security by obscurity” (a nice explanation is here), wherein they keep their algorithms private, often using special proprietary algorithms.
Security through obscurity is a bad idea. This has been demonstrated through many, many well-publicised cases in which a company or group of companies releases a product containing a proprietary cryptosystem whose details they refuse to divulge, often making wild claims about it being “unbreakable”. Several months later some clever hacker reverse engineers the system, finds the cryptosystem to be laughably weak, and posts a method to break it on the net.
This isn’t just a theoretical possibility. Spectacular recent examples include the proprietary DVD encryption algorithm, broken in 1999 by Norwegian teenager Jon Johansen, and the GSM cellular phone encryption code, broken in 2003 by academics Elad Barkan, Eli Biham and Nathan Keller.
The solution is to make your encryption and decryption algorithms public, and then to subject them to the most rigorous possible peer review. A cryptosystem which survives a decade or two of attacks is likely pretty good, while one which relies for its security on being obscure is not likely to be as strong, and is always vulnerable to being reverse engineered.
On the face of it, quantum cryptosystems don’t fall into the trap of security by obscurity. Anyone can read the original paper of Bennett and Brassard describing QKD, or any of the later proposals. And anyone can read the various papers describing proofs of security.
Admittedly, the community of people both able and interested in understanding those papers is small, but it isn’t so small that the number is negligible. It’s certainly in the thousands, and maybe the tens of thousands. That’s a pretty good community for peer review.
What I want to argue, however, is that in an important sense quantum cryptography does rely on security by obscurity. The fact is that while anyone can read about quantum cryptography, only a tiny number of people can actually use a real live quantum cryptosystem.
The effect is that peer review is confined to theoretical constructs, not to real systems. And as a wise person once noted, while in theory there is no difference between theory and practice, in practice there is.
This difference is much less important classically. When a new classical cryptosystem is developed, anyone can write some code implementing it, and post it on the internet. It can then be passed around the world, and attacks made. The key point is that implementations of classical cryptosystems are cheap and easily distributed.
By contrast, implementations of quantum cryptosystems are expensive and hard to distribute. This means that while the details of a cryptosystem may be public, that doesn’t mean that a lot of people can actually use it in practice.
Why is this important? The problem is that while a cryptosystem may be secure in some theoretical sense, real implementations inevitably deviate somewhat from the ideal theoretical description. Sometimes these deviations result in “back doors” that enable a cryptosystem to be broken.
A spectacular example of this is the so-called timing attack on RSA. RSA is one of the best (and most secure) of the classical cryptosystems. At a theoretical level it’s survived nearly 30 years of attacks.
But at the implementation level, a few years ago someone noticed a way of compromising RSA. In particular, by carefully timing certain operations, it turns out that it may be possible to determine the keys used in RSA, and so break the system!
The reason this attack works is because in a real implementation of RSA, different operations may take different amounts of time, and this can be used against the system. This fact was originally ignored in theoretical analyses. It’s a great example of a “back door” arising in an actual implementation, which departs from the theoretical ideal.
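A toy sketch of how such a leak can arise: the naive square-and-multiply exponentiation below (not real RSA code, and real timing attacks are considerably more subtle) does extra work exactly on the 1-bits of the secret exponent, so its running time leaks information about the key:

```python
def modexp_naive(base, exponent, modulus):
    """Naive left-to-right square-and-multiply. The extra multiply
    happens only on 1-bits of the exponent, so the amount of work
    (a proxy for running time) leaks the exponent's bit pattern."""
    result = 1
    multiplies = 0
    for bit in bin(exponent)[2:]:
        result = (result * result) % modulus      # always square
        multiplies += 1
        if bit == "1":
            result = (result * base) % modulus    # only on 1-bits
            multiplies += 1
    return result, multiplies

# Two exponents of the same bit-length but different bit patterns
# take measurably different amounts of work:
_, work_sparse = modexp_naive(7, 0b1000001, 1000003)
_, work_dense  = modexp_naive(7, 0b1111111, 1000003)
print(work_sparse, work_dense)  # → 9 14
```

Defences against this kind of leak (constant-time arithmetic, blinding) exist precisely because the theoretical description of RSA says nothing about how long each operation takes.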
The reason we still have a lot of confidence in RSA is that (a) at a theoretical level, it’s survived all attacks; and (b) few back doors have been found in real implementations, and those back doors have been plugged.
What’s the connection to QKD? I would not be surprised if similar back doors exist in QKD systems that come onto the market over the next few years. In fact, I’d be much less surprised by such back doors than I was by the back door in RSA. Why? Because only a very small number of people have actually had access to a real live quantum cryptosystem, and most of those people have had far more interest in developing the system than in breaking it.
“Breaking” QKD
Let me sketch out a scenario I think is reasonably likely.
At some point over the next five years, some dedicated person buys themselves a commercial QKD system. They take it apart, and figure out exactly what makes it tick. They note carefully all the ways the real implementation differs from the theoretical ideal. And then they figure out a back door, a way to break that implementation.
Let me go a little further in the Devil’s Advocate role and sketch out two possible chains of events that might occur at that point.
First, maybe they’re a researcher with a sensationalist frame of mind. They write up their results in a paper entitled “Quantum cryptography broken”, and send it to Nature, where it causes a huge storm of publicity, and sets back confidence in QKD by five or ten years.
Second, maybe they have more nefarious purposes, and don’t publish their results at all, merely using them to covertly break systems in actual use.
Are these things to worry about? Given the extraordinary claims being made – proponents of QKD advertise “unbreakable security” – I think it’s very reasonable to worry about these eventualities, even if they are extremely remote. And, personally, I’m not so sure that they’re all that remote.
Solutions
I propose that the principle of avoiding security by obscurity needs to be recast in a more strict quantum version. In particular, we need to obey the following principles:
To build the best possible cryptosystems we need to ensure that (a) the theoretical basis is entirely public; (b) actual implementations are widely and publicly available; and (c) real implementations are subjected to extended periods of testing.
Systems that don’t obey these principles are unlikely to be secure. Of course, just because a system obeys these principles doesn’t mean it is secure, but I believe they are a precondition for claims of security to be taken seriously. At the moment, I don’t think QKD satisfies all these principles; in particular, (b) and (c) have not been followed.
How can these principles be followed in the case of QKD?
Here’s one way: funding agencies ought to fund installations which are accessible to multiple groups. One or more of those groups is funded to develop a cryptosystem. And several groups, operating independently, are funded to attack that cryptosystem, by whatever means necessary.
Maybe something like this is already done. If so, I haven’t heard about it. But I certainly think something like it is necessary to justify confidence in quantum cryptography.
Blogroll
Several new entries on the blogroll, worth checking out: Aporetic, Daniel Lemire, Electron Blue, the GeomBlog, and View from the Corner of the Room.
Principles of Effective Research: Part XI
This is the final installment in my series on the principles of effective research.
This installment was, in some ways, the most fun to write: it’s some thoughts on the difficulties involved in solving the most important research problems, and how to address those difficulties.
Now, I’ve had less success than I would like in this regard (I doubt I’m alone), so this should all be taken with a big grain of salt. Nonetheless, like so many other research scientists, I’d like to solve some big problems, and this post is an attempt to describe a bit of a model for how to go about doing so.
Enjoy!
Working on important problems
It’s important that you work towards being able to solve important problems. This sounds obvious, but many people fail to do it, for a variety of reasons. I want to talk a little about those reasons, and how to address them.
Reason 1: Lack of self-development. Many people don’t spend enough time on self-development. If you stop your development at the level which resulted in your first paper, it’s unlikely you’ll solve any major problems. More realistically, for many people self-development is an incidental thing, something that happens while they’re on the treadmill of trying to solve problems, generate papers, and so on, or while teaching. While such people will develop, it’s unlikely that doing so in such an ad hoc way will let them address the most important problems.
Reason 2: The treadmill of small problems. Social factors such as the need to publish, get grants, and so on, encourage people to work only on unimportant problems, without addressing the important problems. This can be a difficult treadmill to get off.
My belief is that the way to start out in a research career is by working primarily on small and relatively tractable problems, where you have a good chance of success. You then continue the process of self-development, gradually working up to more important problems (which also tend to be more difficult, although, as noted above, difficulty is most emphatically not the same as importance). The rare exception is important problems that are also likely to be technically easy; if you’re lucky you may find such a problem early in your career, or be handed one. If so, solve it quickly!
Even later on, when you’ve developed to the point that you can realistically expect to be able to attack important problems, it’s still useful to tackle a mixture of more and less important problems. The reason is that tackling smaller problems ensures that you make a reasonable contribution to science, and that you continue to take an active part in the research community. Even Andrew Wiles continued to publish papers and work on other problems during his work on Fermat’s Last Theorem, albeit at a rather low rate. If he had not, he would have lost contact with an entire research community, and losing such contact would likely have made a significant negative difference to his work on Fermat’s Last Theorem.
Reason 3: The intimidation factor. Even if people have spent enough time on self-development that they have a realistic chance of attacking big problems, they still may not. The reason is that they have a fear of working on something unsuccessfully. Imagine how Andrew Wiles would have felt if he had worked on Fermat’s Last Theorem for several decades, and completely failed. For most people, the fear of ending up in such a situation is enough to discourage them from making the attempt.
The great mathematician Andrei Kolmogorov described an interesting trick that he used to get around this problem. Rather than investing all his time and effort on attacking the problem, he’d put the problem into a larger context. He’d announce a seminar series in which he’d lecture on material that he thought would be related to the problem. He’d write a set of lecture notes (often turning into a book) on material related to the problem. That way, he lowered the psychological pressure on himself. Rather than investing all his effort in an attack on the problem – which might ultimately be a complete waste of time – he knew that he’d produce something of value. By making the research process part of a larger endeavour, he ensured that the process was a success no matter how it came out, even if he failed to solve the problem, or was scooped by someone else. It’s a case of not putting all of one’s psychological eggs in one basket.
Richard Feynman described a related trick. He said that when he started working on a problem he would try to convince himself that he had some kooky insight into the problem that it was very unlikely anybody else had. He admitted that very often this belief was erroneous, or that, even if original, his initial insight often wasn’t very good. But he claimed that he found that he could fool himself into thinking that he had the “inside track” on the problem as a result, and this was essential to getting up the forward momentum necessary to really make a big dint in a difficult problem.
Committing to work on an important problem: For the difficult problems, I think commitment is really a process rather than a moment. You may decide to prepare a lecture to talk about a problem. If that is interesting, you enjoy it, and you feel like you have some insight, you might decide to prepare a few lectures. If that goes well, perhaps you’ll start to prepare more lectures, write a review, and maybe make a really big contribution. It’s all about building up more and more insight. Ideally, you’ll do this as part of some larger process, with social support around you.
People who only attack difficult problems: There is a converse to the problem I’ve been talking about, which is people who are only interested in attacking problems that are both difficult and important. This affliction can affect people at any stage of their career, but it manifests itself in somewhat different ways at different stages.
In the case of the beginner, this is like a beginning pole vaulter insisting on putting the bar at 5 meters from the time they begin, rather than starting at some more reasonable height. Unless exceptionally pigheaded, such a person will never learn to vault 5 meters successfully, simply because they will never learn anything from failure at a more realistic starting height. This sounds prima facie ridiculous, but I have seen people burn out by following exactly this strategy.
The case of the more experienced researcher is more difficult. As I’ve emphasized, once you’ve reached an appropriate level of development I think it’s important to spend some time working on the most important problems. But if that’s all you do, there are some very significant drawbacks. In particular, by attacking only the most important and most difficult problems an experienced researcher (a) takes themselves out of circulation, (b) stops making ongoing contributions, (c) loses the habit of success, and (d) risks losing morale, which is so important to research success. I think the solution is to balance one’s work on the more and less important problems: you need to schedule time to do the more important stuff, but should also make sure that you spend some time on less high-risk activities.
In both cases, the explanation is often, at least in part, intellectual macho. Theorists can be a pretty judgmental lot about the value of their own work, and the work of others. This helps lead some into the error of only working on big problems, and not sometimes working on little problems, for the fun of it, for the contact it brings with colleagues, and for the rewarding and enjoyable sense of making a real contribution of some significance.
Hawking, Preskill, and information loss
Stephen Hawking’s disavowal of his belief that black holes destroy information has received a lot of coverage, both in the press and the blogosphere.
I don’t have a lot to add over what has been said elsewhere, especially Sean Carroll’s recent series of posts (including a transcript of Hawking’s talk) on his blog. But I will point to John Preskill’s nice discussion of what was at stake.
PhDs
A little advertising: on Thursday September 23 and Friday September 24 the University of Queensland Physics Department is going to be holding a Postgraduate Information and Recruitment Day. Australian and New Zealand students who might be interested in doing a PhD in physics are strongly encouraged to apply. See the above links for more information about research activities within the Department.
Successful applicants will be flown to Brisbane to participate in activities on Thursday afternoon and Friday; we will also provide accommodation for Saturday and Sunday, if you should wish to stay and explore Brisbane and surrounds. I think the event last year was a great deal of fun for all concerned, and am looking forward to it again this year.
Principles of Effective Research: Part X
The penultimate installment in my series. This one is rather short, and something of a placeholder; the final installment is rather more substantive, I promise! It may be a few days before I post the final installment, as I’m heading off to Fraser Island for the weekend, and I may not get it posted before I go.
This one is short in large part because it’s about something most scientists receive a lot of training in: problem-solving. It says less about the kind of problem-solving that so often arises in research, where figuring out the appropriate formulation of the problem may be half the battle. Mostly, I wanted to make a few remarks about the necessity of finding and keeping forward momentum, to keep the research fog at bay.
One final comment before getting into today’s installment. An unexpected consequence of writing this essay has been the emergence of a theme I neither planned, nor was fully conscious of before starting the essay. That theme is the tension that exists between the short- and the long term. This is not an easy tension to resolve, but I believe it lies at the heart of many difficulties in research.
A quote from Lois Bujold, which I posted earlier on the blog, comes to mind again:
All great human deeds both consume and transform their doers. Consider an athlete, or a scientist, or an artist, or an independent business creator. In service of their goals they lay down time and energy and many other choices and pleasures; in return, they become most truly themselves. A false destiny may be spotted by the fact that it consumes without transforming, without giving back the enlarged self.
Oppenheimer spoke in a letter to his brother about the paramount importance of discipline in his life. I didn’t understand this emphasis when I first read it, but I now think that Oppenheimer was speaking of the same tension that I have seen emerge in this essay, and how one must resolve it.
On with today’s installment…
The skills of the problem-solver
As I’ve already said, our technical training as physicists focuses a lot more on problem-solving than problem-creation, so I’m not going to say a lot about the skills needed to be a problem-solver. But I will make a few general remarks that I find helpful.
Clarity, goals, and forward momentum: In my opinion, there is little that is more important in research than building forward momentum. Being clear about some goal, even if that goal is the wrong goal, or the clarity is illusory, is tremendously powerful. For the most part, it’s better to be doing something, rather than nothing, provided, of course, that you set time aside frequently for reflection and reconsideration of your goals. Much of the time in research is spent in a fog, and taking the time to set clear goals can really help lift the fog.
Have multiple formulations: One of the most common mistakes made by researchers is to hold too closely to a particular formulation of a problem, without asking whether they can achieve insights on related problems. The important thing is to be able to make some progress: if you can find a related problem, or reformulate a problem in a way that permits you to move forward, that is progress.
Spontaneous discovery as the outcome of self-development: For me this is one of the most common ways of making discoveries. Many people’s basic research model is to identify a problem they find interesting, and then spend a lot of time working on just that problem. In fact, if you keep your mind open while engaging in exploration, and are working at the edge of what is known, you’ll often see huge opportunities open wide in front of you, provided you keep developing your range of skills.