Maps

Second update and warning: Apparently there was a bug in the analysis described below. Please disregard, and look instead at my later post.

Michael Gastner, Cosma Shalizi, and Mark Newman have developed some astonishing maps of the US election results. There really are two Americas, but it’s not the two people talk about. Rather, US Counties divide up into two sets:

1. Counties (about 400, for a total of 6 million people) where essentially everyone votes Democrat.

2. The remaining Counties, which follow a more or less typical bell curve, with the mean County about 60% Republican and 40% Democrat (see the synthetic sketch below).
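To make the picture concrete, here is a purely synthetic sketch of what a histogram of county-level vote share would look like under that description. This is not the Gastner, Shalizi and Newman data or analysis; the numbers (roughly 400 heavily Democratic counties out of about 3,100, and an assumed spread for the rest) are invented for illustration, and the code assumes numpy is available.

```python
import numpy as np

rng = np.random.default_rng(0)

# Group 1: ~400 counties that vote almost entirely Democrat
# (Republican vote share near zero). Purely illustrative numbers.
dem_counties = rng.normal(loc=0.05, scale=0.03, size=400)

# Group 2: the remaining counties, a bell curve centred near 60% Republican.
other_counties = rng.normal(loc=0.60, scale=0.10, size=2700)

rep_share = np.clip(np.concatenate([dem_counties, other_counties]), 0.0, 1.0)

# Crude text histogram of Republican vote share across all counties
counts, edges = np.histogram(rep_share, bins=20, range=(0.0, 1.0))
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:4.2f}-{hi:4.2f} | " + "#" * (c // 20))
```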

Their results are so stark that I have to wonder if there’s something wrong with their data. (Might we be seeing gerrymandered counties?) Assuming there’s nothing wrong, that’s an awful lot of Democrats who will never meet a Republican!

Update: So why does this happen? Maybe it's all gerrymandered counties, or maybe it's not. If not, what else is going on to produce results like these?


Ranking Schmanking

My last post was about the ever-controversial – at least, among academics – subject of University rankings. I thought I’d add a few comments about why this subject is controversial, and what point (if any) there is in such rankings.

Many people are disappointed by such rankings, often disparaging them as lacking objectivity, or as being “not scientific”.

In my opinion, this is to start off with the wrong perspective. A ranking of the top 200 Universities in the world is no more likely to be objective or scientific than is a list of the top 200 movies, or the top 200 albums. Everybody’s got a different ideal, and so different people come up with different lists.

In short, the devil is in the details of the criteria used. I’ve often heard people decry the criteria used to produce a certain list as “obviously wrong” in some way. The difficulty is that it’s rare to find two people who agree on what ways those criteria are obviously wrong. This is particularly true if you take a diverse range of people – High School students, undergrads, postgrads, postdocs, admin staff, academics, bureaucrats and politicians all have radically different ideas of what a University ought to be.

So what’s the point of lists like the one mentioned? Is it all just personal opinion? Is it all just a waste of time?

In my opinion, no, it’s not all just a waste of time. The lists are worthwhile, provided they’re not taken too seriously. Here are a few ways they’re worthwhile:

1. They focus our attention on a very interesting question: what is it that makes a good University? To what should one aspire? These are things well worth thinking about for anyone associated with a University, and they often get lost in the humdrum of the everyday.

2. They can help people at all levels make decisions. This might mean a High School student entering University, or a Government bureaucrat making multimillion dollar funding decisions. This is only a good thing, of course, if people are conscientious, look at the criteria being used in constructing a list, and make sure they’re appropriate for the type of decision being made.

3. They can change the way we look at academia. For example, ETH Zurich was just barely on my radar before looking at this list. The fact that it came in at number 10 certainly got my attention.

4. Most people associated with a University in any way have their own little informal version of this list, based on their own personal criteria. Looking at a list like this causes us to re-evaluate. I must admit, my own personal list is far more dominated by American Universities than the list I pointed to, and there’s no way six Australian Universities feature in the top 50. Seeing this list has made me question whether my earlier evaluation was wrong.


Rankings

The Times Higher Education Supplement recently published an interesting set of University Rankings. These are always rather subjective – how do you assign a single number to a University? – but interesting nonetheless.

The top ten were Harvard, Berkeley, MIT, Caltech, Oxford, Cambridge, Stanford, Yale, Princeton and ETH Zurich.

The University of Queensland came in at 49.

The top-ranked Australian University was the Australian National University, at 16.


Posting referee reports

Update (Nov 6, 2004): Lance Fortnow comments on this issue. Doron Zeilberger has recently independently commented on some related issues, from a rather different point of view. Finally, Seb gives another update.

Seb Paquet posts about the public posting of referee reports on blogs.

I’ve contemplated doing this several times in the past, but have held off because I don’t have satisfactory answers to key questions: will this help make referees more accountable? What are the upsides? What are the downsides?

What do readers think?

On a related note, Cosma Shalizi has posted a whole bunch of his student evaluations online. He has my sincere admiration for posting the first set.


Kevin Drum rocks

For those liberals who (like me) are not feeling so hot, definitely go and look at Kevin Drum today. Besides being probably the best of the liberal bloggers, in my opinion, Kevin’s remaining relatively upbeat and focused on positive action.


Foolish predictions

Kerry to win by 6% over Bush in the popular vote. No prediction on the Electoral College, other than a Kerry win.

Probably wishful thinking on my part. We’ll see.

I hope to resume more substantive blogging soon. Too many other things happening.


Quantum computing, one step at a time

For anyone interested in when we’ll have large-scale quantum computers, there was a very striking paper released today, by Manny Knill.

To explain Knill’s paper, I need to give a little background.

Just a little over ten years ago, quantum computing suddenly took off when Peter Shor announced his fast algorithm for quantum factoring.

In 1994, large-scale quantum computing looked like a pipe dream. Several people said publicly (and far more said it privately): “you’ll never ever be able to build a device like that”.

The most common criticisms ran something like this: “to quantum compute, you need to do Y and Z. At the moment in the lab, you can’t even do A and B. Therefore, you’ll never be able to quantum compute.”

The tempting response is to say “well, we’re just about to do C and D, and E and F look pretty likely as well in a few years time. So maybe Y and Z aren’t so unrealistic.”

That’s a tempting response, and it’s how I used to respond to this sort of criticism. But it’s turned out that neither the criticism nor the response is accurate.

What’s wrong with the argument is not (or not only) that it doesn’t take account of technological progress from A to B to C and so on.

No, the main thing wrong with the argument has turned out to be that it doesn’t take enough account of theoretical progress.

Sure, over the years experimentalists have moved from A to B to C to D, and they’re getting on to E and F.

But in the meantime, theorists have shown that, actually, you only need to do J and K to quantum compute.

(Alphabet not to scale. At least, not as far as I know.)

In short, lots of discussions of the future of quantum computing are framed as though it’s a fixed target, like building a teraflop laptop.

But it’s not. It’s a fluid target, and pure theory can move us a lot closer to the target, without technology improving a whit.

What’s this all got to do with Knill’s paper?

To go back to 1994 again, one of the first responses to Shor’s paper was a wave of claims, some of them public, that quantum computing would never be possible because of the effects of noise.

Roughly speaking, the argument was that quantum states are analogue beasts, and it’s very difficult or impossible to protect analogue information against the effects of noise. Therefore, quantum computers will inevitably be overwhelmed by the effects of noise.

In late 1995 and early 1996, Peter Shor and Andrew Steane independently showed that quantum information, despite appearances to the contrary, actually behaves much more like digital information. In particular, it turns out that the analogue continuum of errors apparently afflicting quantum states can effectively be digitized, and this enables error-correction techniques to be applied to protect against the effects of noise.

I’d like to emphasize, by the by, that this ability to digitize is a deep result, not at all obvious, and depends critically on certain special properties of quantum mechanics. It’s difficult to give a short pat explanation, even to experts on quantum mechanics, and I won’t try to explain it here.
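That said, for readers who like to see something concrete, here is a toy numerical illustration of the digitization phenomenon, using the simplest possible example: the three-qubit bit-flip code, rather than the full Shor or Steane constructions. It’s an illustration only, not an explanation of why digitization works in general; the parameters are arbitrary and the code assumes numpy. A continuous rotation error on one qubit is projected, by the syndrome measurement, onto either “no error” or “a full bit flip on qubit 1”, and either outcome is something the code can handle.

```python
import numpy as np

# Single-qubit operators
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# An arbitrary logical qubit encoded in the three-qubit bit-flip code: a|000> + b|111>
a, b = 0.6, 0.8
psi = np.zeros(8)
psi[0b000] = a
psi[0b111] = b

# A *continuous* error: a small X-rotation on the first qubit
theta = 0.3
R = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X
psi_err = kron(R, I2, I2) @ psi

# Syndrome operators Z1*Z2 and Z2*Z3, and projectors onto the two relevant outcomes
S1, S2 = kron(Z, Z, I2), kron(I2, Z, Z)
P_ok = (np.eye(8) + S1) @ (np.eye(8) + S2) / 4      # syndrome (+1,+1): no error
P_flip = (np.eye(8) - S1) @ (np.eye(8) + S2) / 4    # syndrome (-1,+1): X on qubit 1

p_ok = np.vdot(psi_err, P_ok @ psi_err).real
p_flip = np.vdot(psi_err, P_flip @ psi_err).real
print(f"P(no error detected) = {p_ok:.4f}   (should equal cos^2(theta/2))")
print(f"P(flip on qubit 1)   = {p_flip:.4f}   (should equal sin^2(theta/2))")

# In the 'no error' branch the state snaps back to exactly the encoded state:
post = P_ok @ psi_err
post = post / np.linalg.norm(post)
print("Post-measurement state equals the original encoded state:", np.allclose(post, psi))
```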

The error-correction techniques were quickly extended to an entire theory of fault-tolerant quantum computation. I won’t try to name names here, as a very large number of people were involved. Indeed, the development of fault-tolerance meant that 1996 was, perhaps, the one year in the last ten in which progress toward quantum computing really did seem rapid!

Roughly speaking, the takeaway message from all this work went something like this, circa the end of 1996:

“Suppose I can build individual quantum computer elements so they work with an accuracy of 99.9999% – the so-called threshold value. Without error-correction, if I put 10 million together, there will probably be a few total failures that screw everything up. But I can use error-correction to reduce my error rate as low as I like, provided all my elements work better than that threshold. What’s more, this is true no matter how large the computation.”

This idea of a threshold for quantum computing is an incredibly important one, and there’s been quite a bit of work on trying to improve that number (99.9999%).

Knill’s paper is the latest in that line of work, improving the threshold.

What’s Knill’s value?

His threshold value is 97%. Or 99%, with more modest resource requirements in terms of number of gates, memory steps, etcetera.

To state the obvious, having to do things to an accuracy of 97% (or 99%) is a far more encouraging state of affairs than 99.9999%.
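To get a rough feel for why sitting below threshold matters so much, here is a back-of-the-envelope sketch of the generic concatenated-code scaling. This is the textbook heuristic, not Knill’s actual construction (which works rather differently), and the numbers are purely illustrative; the code needs nothing beyond plain Python.

```python
# Heuristic scaling for concatenated codes: if each element fails with
# probability p below the threshold p_th, then k levels of concatenation
# suppress the logical failure rate roughly as p_L ~ p_th * (p / p_th) ** (2 ** k).
# All numbers below are illustrative, not taken from Knill's paper.

def logical_error_rate(p, p_th, k):
    """Rough logical error rate after k levels of concatenation (valid for p < p_th)."""
    return p_th * (p / p_th) ** (2 ** k)

thresholds = {
    "old-style threshold (99.9999% accuracy)": 1e-6,
    "Knill-style, high overhead (97% accuracy)": 3e-2,
    "Knill-style, modest overhead (99% accuracy)": 1e-2,
}

for label, p_th in thresholds.items():
    p = p_th / 10  # suppose the hardware sits a factor of ten below threshold
    rates = ", ".join(f"{logical_error_rate(p, p_th, k):.1e}" for k in range(1, 5))
    print(f"{label}: levels 1-4 give p_L = {rates}")
```

The point of the 2^k in the exponent is that, once you are below threshold, each extra level of concatenation squares the suppression factor, so a modest overhead buys an enormous reduction in the logical error rate.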

There are a lot of caveats. The 97 / 99% number is for a rather artificial error model. Real physical systems don’t behave like Knill’s model. Indeed, it’s not really clear what that number translates into in terms of actual physical parameters – heating rates, dephasing times, and so on. And Knill uses a model satisfying some assumptions that may not be such a good approximation in real physical systems; indeed, they may even fail completely. (This is true of all the threshold results, although there’s been extremely encouraging recent progress on this front too.) Finally, Knill’s approach has a rather high resource overhead.

All the same, no matter how you look at it, moving from 99.9999% to 97 / 99% is pretty darn encouraging, despite the caveats. And I do wonder just how high the threshold can go.


Quantum games

Two interesting posts by Daniel Davies at Crooked Timber on a topic that is likely of some interest to many readers of this blog: quantum game theory.

In particular, Davies discussed whether protocols like those allowed in this paper by Cleve, Hoyer, Toner and Watrous can be said to involve communication or not. I’d comment at more length, but have to run.


Atiyah and Singer

Via Peter Woit, an excellent interview with Atiyah and Singer on the occasion of their receiving the Abel Prize. I’d post excerpts, but I’d probably end up posting the entire thing.

One thing I find fascinating is their belief in the fundamental importance of the connection between physics and mathematics. Most famously, in recent years, this has resulted in what is apparently an extremely fruitful cross-fertilization between the high-energy physics community (especially the string theorists) and mathematicians.

I’ve occasionally wondered to what extent similar connections may appear as a result of the ongoing work in quantum information science.

In that vein, let me mention two exciting recent papers, by Daftuar and Hayden, and by Klyachko.

These papers are ostensibly about a problem of great physical interest, namely, characterizing the possible relationships between the quantum state of a many-particle system, and the quantum state of the individual particles. This is certainly a problem of interest in quantum information, and the solution probably has implications in other areas of physics, like condensed matter physics.

Remarkably, these papers relate this physical problem to some quite sophisticated (and, occasionally, very recent) developments in mathematics. It seems pretty likely to me that the many problems of physical interest still open in this general area may in the future help stimulate the development of interesting new mathematics.
