Something I must remember

When giving a talk, if I realize 10-15 minutes before the end that I'm not going to cover everything, don't speed up. There's this oddball temptation to "cover everything", but it's a chimera – if I speed up, chances are very good that most of my audience won't follow, so what have I gained? Nothing.

The better strategy is to pause for a moment to gather my thoughts, decide what the most important remaining point is, and then spend 5-10 minutes making that point in the clearest way possible. Hard to remember in the heat of the moment, with the illusion of completeness beckoning.

Published
Categorized as General

Michael Atiyah

Excerpts from a fascinating (and lengthy!) interview with the great mathematician Michael Atiyah, which appeared in the 1984 Mathematical Intelligencer, reprinted in the book “Mathematical Conversations”, edited by Robin Wilson and Jeremy Gray. The interviewer is Roberto Minio.

Q: How do you select a problem to study?

A: I think that presupposes an answer. I don’t think that’s the way I work at all. Some people may sit back and say, “I want to solve this problem” and they sit down and say, “How do I solve this problem?” I don’t. I just move around in the mathematical waters, thinking about things, being curious, interested, talking to people, stirring up ideas; things emerge and I follow them up. Or I see something which connects up with something else I know about, and I try to put them together and things develop. I have practically never started off with any idea of what I’m going to be doing or where it’s going to go. I’m interested in mathematics; I talk, I learn, I discuss and then interesting questions simply emerge. I have never started off with a particular goal, except the goal of understanding mathematics.

[…]

You can’t develop completely new ideas or theories by predicting them in advance. Inherently, they have to emerge by intelligently looking at a collection of problems. But different people must work in different ways. Some people decide that there is a fundamental problem that they want to solve, such as the resolution of singularities or the classification of finite simple groups. They spend a large part of their life devoted to working towards this end. I’ve never done that, partly because that requires a single-minded devotion to one topic which is a tremendous gamble.

It also requires a single-minded approach, by direct onslaught, which means you have to be tremendously expert at using technical tools. Now some people are very good at that; I’m not really. My expertise is to skirt the problem, to go around the problem, behind the problem … and so the problem disappears.

[…]

Q: It’s clear that you have a strong feeling for the unity of mathematics. How much do you think that is a result of the way you work and your own personal involvement in mathematics?

A: It is very hard to separate your personality from what you think about mathematics. I believe that it is very important that mathematics should be thought of as a unity. And the way I work reflects that; which comes first is difficult to say. I find the interactions between different parts of mathematics interesting. The richness of the subject comes from this complexity, not from the pure strand and isolated specialization.

But there are philosophical and social arguments as well. Why do we do mathematics? We mainly do mathematics because we enjoy doing mathematics. But in a deeper sense, why should we be paid to do mathematics? If one asks for the justification for that, then I think one has to take the view that mathematics is part of the general scientific culture. We are contributing to a whole, organic collection of ideas, even if the part of mathematics which I’m doing now is not of direct relevance and usefulness to other people. If mathematics is an integrated body of thought, and every part is potentially useful to every other part, then we are all contributing to a common objective.

If mathematics is to be thought of as fragmented specializations, all going off independently and justifying themselves, then it is very hard to argue why people should be paid to do this. We are not entertainers, like tennis players. The only justification is that it is a real contribution to human thought. Even if I’m not directly working in applied mathematics, I feel that I’m contributing to the sort of mathematics that can and will be useful for people who are interested in applying mathematics to other things.

[…]

The more I’ve learned about physics, the more convinced I am that physics provides, in a sense, the deepest applications of mathematics. The mathematical problems that have been solved or techniques that have arisen out of physics in the past have been the lifeblood of mathematics. And it’s still true. The problems that physicists tackle are extremely interesting, difficult, challenging problems from a mathematical point of view. I think more mathematicians ought to be involved in and try to learn about some parts of physics; they should try to bring new mathematical techniques into conjunction with physical problems.

[…]

Q: Do you think the Fields Medals serve a useful function?

A: Well, I suppose in some minor way. I think it’s a good thing that Fields Medals are not like the Nobel Prizes. The Nobel Prizes distort science very badly, especially physics. The prestige that goes with the Nobel Prizes, and the hoopla that goes with them, and the way universities buy up Nobel prizemen — that is terribly discontinuous. The difference between someone getting a prize and not getting one is a toss-up — it is a very artificial distinction. Yet, if you get a Nobel Prize and I don’t, then you get twice the salary and your university builds you a big lab; I think that is very unfortunate.

But in mathematics the Fields Medals don’t have any effect at all, so they don’t have a negative effect. They are given to young people and are meant to be a form of encouragement to them and to the mathematical world as a whole.

[…]

Q: When you’re working do you know if a result is true even if you don’t have a proof?

A: To answer that question I should first point out that I don’t work by trying to solve problems. If I’m interested in some topic, then I just try to understand it; I just go on thinking about it and trying to dig down deeper and deeper. If I understand it, then I know what is right and what is not right.

[…]

Q: Where do you get your ideas for what you are doing? Do you just sit down and say, “All right, I’m going to do mathematics for two hours?”

A: […] There are occasions when you sit down in the morning and start to concentrate very hard on something. That kind of acute concentration is very difficult for a long period of time and not always very successful. Sometimes you will get past your problem with careful thought. But the really interesting ideas occur at times when you have a flash of inspiration. Those are more haphazard by their nature; they may occur just in casual conversation. You will be talking with somebody and he mentions something and you think, “Good God, yes, that is just what I need … it explains what I was thinking about last week.” And you put the two things together, you fuse them and something comes out of it. Putting two things together, like a jigsaw puzzle, is in some sense random. But you have to have these things constantly turning over in your mind so that you can maximize the possibilities for random interaction. I think Poincaré said something like that. It is a kind of probabilistic effect: ideas spin around in your mind and the fruitful interactions arise out of some random, fortunate mutation. The skill is to maximize this degree of randomness so that you increase the chances of a fruitful interaction.

From my point of view, the more I talk with different types of people, the more I think about different bits of mathematics, the greater the chance that I am going to get a fresh idea from someone else that is going to connect up with something I know.

Continuing positions in theoretical physics at UQ

The University of Queensland Department of Physics is advertising a position as either a Lecturer or Senior Lecturer in Theoretical Physics. We are encouraging applications in condensed matter and quantum information. See the description at the above link for more details.

(Just to translate the level of those positions: rough US equivalents are Assistant and Associate Professor.)

Note: To people who’ve seen this before, I posted this yesterday, but the link worked only with cookies set from my computer. My apologies to people who tried to click through, and got an error message.


The fundamental importance of emergence

Ben Powell, guest blogging at Illuminating Science, writes:

Recently I had a rather interesting discussion with Andrew White. […]

The discussion/argument/whatever started out about the physics curriculum at UQ but quickly moved on to a discussion about what were the truly original contributions to physics in the twentieth century. Andrew claimed that there were only two: the theory of quantum mechanics and the theory of relativity. For the record I should say that many (perhaps most) other physicists would agree with Andrew. I don’t. I think that the existence of emergent phenomena is equally fundamental and probably more important than either quantum mechanics or relativity.

Working with the criteria of being fundamental and important, rather than “truly original”, which I don’t understand, I’d still disagree, because (a) emergence wasn’t discovered in the 20th century; and (b) it’s not an empirical discovery, as such, but rather a property of many simple rule systems, not just the laws of physics. I’d place it more in the category of mathematics than of physics.

Furthermore, I can’t think of any specific example of an emergent phenomenon that rivals the discovery of quantum mechanics or relativity in importance.

None of which, of course, is to say that the fact of emergence is not fantastically important and interesting.

Let me illustrate emergence with a very old example – time’s arrow. The so-called fundamental laws of physics (i.e. quantum mechanics and relativity) do not care about which way you run time. That is, if you think of the world as a movie, then if I played the movie backwards everything should, according to these ‘fundamental’ laws, look the same. Clearly your everyday experience contradicts this prediction (you can’t make an omelet without breaking some eggs – but you certainly can’t make an egg by ‘un-breaking’ an omelet). So, if science is to be based on empirical evidence, shouldn’t we reject these ‘fundamental’ laws?

The answer is that when many particles act together they begin to behave in new ways that we could never expect from studying a single particle. Such new behaviours are called emergent behaviours. In this case the emergent property is called entropy. Entropy is a measure of disorder – the more disordered a system is, the higher its entropy. Something given the rather pompous name of ‘the second law of thermodynamics’ says that the entropy of the universe can never decrease. That is, the universe as a whole is always getting more disordered. This is easy to misunderstand. Small parts of the universe can decrease their entropy, but then the entropy of the rest of the universe has to increase, so that the total entropy of the universe does not decrease. Actually, as you’re sitting here reading this, your body is busy decreasing its entropy; however, all the body heat that is flowing out of you is disordering the rest of the universe and increasing the entropy of the rest of the universe.
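The statistical character of entropy is easy to see in a toy model (my illustration, not part of Powell's post): put some particles in a box with two halves, start them all on one side, let them wander at random, and track the Boltzmann entropy S = ln W, where W counts the microstates consistent with the observed left/right split.

```python
import math
import random

random.seed(1)
N = 100          # number of gas particles in a two-sided box
side = [0] * N   # 0 = left half, 1 = right half; all start on the left

def entropy(sides):
    # Boltzmann entropy S = ln W, where W = C(N, n_left) is the number of
    # microstates compatible with the observed left/right particle counts
    n_left = sides.count(0)
    return math.log(math.comb(len(sides), n_left))

print(f"initial entropy: {entropy(side):.2f}")  # ln C(100, 100) = 0: one microstate
for step in range(2000):
    i = random.randrange(N)          # a random particle wanders through the partition
    side[i] = random.randrange(2)
print(f"final entropy:   {entropy(side):.2f}")  # close to the maximum, ln C(100, 50)
```

Nothing in the update rule mentions entropy, yet it climbs toward its maximum and stays there: reversing any single particle's motion is as likely as not, but the overwhelming majority of microstates look "mixed", which is the statistical reading of time's arrow.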

However, it is important to realise that when physicists first discovered entropy they did not derive it from a ‘fundamental’ theory; instead they found that, in their theories of many particles, they had to include entropy to make the theory agree with nature. In the twentieth century we found that when classical (or Newtonian) mechanics was replaced by quantum mechanics we still needed to worry about the role entropy plays in large systems. In fact we can go further than that. We do not know how to derive the second law of thermodynamics from any ‘fundamental’ theory. And yet we believe it to be true. Einstein went so far as to say that it is the only physical theory of universal content which, he was convinced, would never be overthrown within the framework of applicability of its basic concepts. So what made him so sure of this?

According to Ed Jaynes’ derivation of thermodynamics, the general applicability of the second law is a trivial consequence of adopting a Bayesian view of probabilities, together with the reversibility of the fundamental dynamical laws. Carl Caves has a nice explanation of this point of view. (The original papers by Jaynes are in the 1957 Physical Review).
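To make the Jaynes viewpoint a little more concrete (this is my sketch of the standard maximum-entropy argument, not a summary of Caves' notes): one identifies thermodynamic entropy with the information entropy of the least-biased probability assignment consistent with the known macroscopic constraints,

```latex
S[\{p_i\}] = -k_B \sum_i p_i \ln p_i ,
\qquad \text{maximized subject to} \quad
\sum_i p_i = 1 , \quad \sum_i p_i E_i = \langle E \rangle .
```

Carrying out the constrained maximization with Lagrange multipliers yields the canonical distribution \( p_i = e^{-\beta E_i}/Z \). On this reading, the second law expresses the fact that an observer's information about a system can only degrade under reversible dynamics whose microscopic details they cannot track, which is why it survives changes in the underlying dynamical theory.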

Personally, I’m not entirely sure this approach gets at the whole physical content of the second law, but if you believe Einstein might have entertained similar thoughts, then it does give an appealing answer to the question “Why was Einstein so certain of the general applicability of the second law, even in new physical theories”. It’d be interesting to look through his other writings to see if there’s any evidence he did or did not hold these sorts of views.

The important thing to understand is that the second law of thermodynamics is true regardless of the details of the ‘fundamental’ theory – be that classical physics, quantum physics, or some future theory that we do not know about yet. For this reason Bob Laughlin (who won the 1998 Nobel Prize in physics) and David Pines have called principles such as the second law of thermodynamics ‘higher organising principles’.

The second law of thermodynamics is just the best known of these ‘higher organising principles’. We now know that many physical phenomena can only be described in terms of such ‘higher organising principles’. Examples include superconductivity, Bose-Einstein condensation, the quantum Hall effect, protein folding, most of chemistry, and all of biology and life, to name a few.

Finally we come to my last point. There is a general acceptance in science, which I must point out is not shared by many philosophers, of a reductionist world view. That is to say, the view that we can explain materials physics in terms of particle physics, chemistry in terms of materials physics, biology in terms of chemistry, psychology in terms of biology, and the humanities in terms of psychology. It seems to have become increasingly clear over the course of the twentieth century that, if this is true, then these ‘explanations’ can only be made in terms of higher organising principles, because all of the things being explained are emergent phenomena.

Remember more is different.


Early and late adopters in research

In this post I describe some tentative ideas about two different styles of doing research that I call “early adoption” and “late adoption”. These styles are, of course, caricatures, but I think they’re interesting and enlightening enough to be worth some thought and discussion.

Early adopters are the people who get into new research fields very early on, before those fields are well known or established. They adopt or create a new narrative, one that is initially acknowledged by only a few others.

It’s not difficult to come up with examples of early adopters – consider Einstein on quantum mechanics, Minkowski on special relativity, or Freeman Dyson on quantum electrodynamics.

Such examples are perhaps somewhat misleading, since they create the impression that early adopters are uniformly successful. Of course, almost by definition the great majority of early adopters work in fields that fail and die quickly, and so remain anonymous.

Other more recent examples are the people who worked on quantum information prior to the explosion of interest in 1994, or the people who worked on social networks prior to the big boom in interest starting in the late 1990s.

Other people – the late adopters – get into research fields when those fields have become established, with an agreed upon basic narrative and set of fundamental problems. It’s a lot easier to find examples of late adoption: if you work in a University, then the chances are that ninety plus percent of your colleagues are late adopters, working in well-established research fields.

I’ve been using the loose term “narrative”, without explaining precisely what I mean. I’m actually bundling up several different things into the term:

  • The set of big picture problems that motivate a field, like “how did the Universe begin”, or “how do cells turn into bodies?”
  • The norms that define what it means to have a result in a field, i.e., what it means to make progress, including standards of evidence and of argument. Some consideration shows that even apparently very closely related fields (e.g. quantum optics and quantum information) can have quite different norms.
  • The justification for considering the field important – how it relates to the rest of science, what it can contribute to other fields, how it relates to society as a whole, and so on.

In short, the narrative is the story practitioners tell themselves and others about a field. It’s the first chapter or two of a good PhD thesis. It’s the paragraph at the beginning of the paper that goes “We know that field blah is important because of blah and blah. A major problem in the field is blah, which is important because of blah. We’re going to look at little problem blah, which will enable progress on our major problem.”

In established fields, the narrative is well-known and largely agreed upon by people within the field, although not necessarily without. Most (though not all) papers make only very limited changes to the narrative. By contrast, in new fields, the narrative itself is up for grabs, and early papers are often not only concerned with solving a specific problem, but also (often as a subtext) with constructing the field’s narrative.

Caveats

A few caveats about the dichotomy between early and late adopters are in order.

First, early adoption does not mean bringing some new set of technical tools into a field to help solve some class of problems, except insofar as those tools help transform the narrative of the field, i.e., modify in some essential way the set of basic questions motivating the field, or the context in which the field is understood, or the norms defining what it means to “make progress” in the field.

A second caveat is that, of course, one person can be both an early and a late adopter. Feynman is a good example, working in established fields such as low-temperature and high-energy physics, and in new-born fields such as the physics of information and nanotechnology. So, while I’ve referred to people as being early or late adopters, these terms really refer to roles that people play, and the same person may play multiple roles.

A third caveat is that to some extent all researchers are early adopters – to write new papers, we need to accept and work on novel problems. But most papers work on problems which are technical variations on previously solved problems, or involve only an infinitesimal change to the narrative, and this is not really what I’m talking about as early adoption.

(Incidentally, I expect that the difference between people who are habitually early or late adopters shows up particularly starkly in the refereeing process, with late adopters being much more resistant to papers that change the narrative of a field, or suggest a new narrative.)

Advantages and disadvantages

There are, of course, advantages and disadvantages to both research roles.

As an early adopter, you can play around with fundamental questions, with a high likelihood of making progress. You often don’t need a lot of background before you can begin doing interesting work. Indeed, I suspect that a good way of finding new fields that are likely to flourish is to look for fields where PhD students are doing much of the best work. Finally, as an early adopter you don’t need to worry so much about being scooped.

Of course, there are also many disadvantages to being an early adopter. First, it’s obviously not trivial to pick a research direction that is truly novel and is likely to be of long-term interest. For every field that goes on to become important and well established, there are a dozen other “promising” nascent fields that fizzle out.

Perhaps a more significant disadvantage is that you lose some of the advantages of working in an established field: a large pool of colleagues, conferences, community, grants, jobs, recognition, and all that these entail.

Outside the bounds of the dichotomy

To finish off, I want to briefly mention two closely related roles that to some extent fall outside the bounds of the dichotomy I’ve set up.

Pioneers: Essentially an extreme version of early adopters. These are the people who lay down the foundations for the narrative of a new field. Think of Shannon’s papers on information theory, Turing’s paper on computation, or Deutsch’s paper on quantum computing. In each case, a very important function of the paper or papers was to outline, in a primitive form, a narrative for a new way of doing science. (All the papers also solved scientific problems, but I think only in Shannon’s case was that the most important function.) This new approach is then taken up and the narrative is fleshed out by other early adopters, until it matures into a full-fledged research subfield.

Solvers of major problems: The most immediate kudos in research usually go to people who solve longstanding problems acknowledged to be of importance in some established field. This sounds like an example of late adoption, and sometimes it is – Andrew Wiles’ proof of Fermat’s last theorem did not, so far as I know, change the basic reasons why people do number theory, or what they consider important in number theory. However, sometimes the solution of such a big problem requires that radically new ideas be employed, and this can change the entire field. For example, to solve Hilbert’s Entscheidungsproblem, Turing had to introduce a rigorous mathematical definition of the computer, paving the way for an entire new scientific discipline.


Twenty-first century science

Dave Bacon writes:

One often hears biologists say that biology is the “physics of the 21st century.” When they say this, I think the main motive is to indicate that great scientific advances will be coming out of biology in the next century.

I’ve never actually heard a biologist say this, perhaps because I know relatively few biologists. I have heard several physicists say it, presumably that class of physicist who wishes they went into molecular biology, or perhaps made billions in the .com boom.

My own opinion is that physics is going to be the physics of the twenty-first century.

I have two broad sets of reasons. First, there are a bundle of really important fundamental questions that we don’t know the answer to:

  • How can quantum mechanics and gravity be put into a single theory, preferably one integrating the usual standard model of particle physics?
  • What’s up with quantum mechanics and measurement? The fact that we don’t properly understand our most successful scientific theory always seems to me like something of an embarrassment.
  • How did our Universe start? How will it end? What is its structure?
  • There are many other puzzles – dark matter, the cosmological constant, the Pioneer anomaly, and others – which we don’t understand. It’s possible and maybe even probable that some of these are unimportant. Still, it seems pretty likely that one or more of these is the tip of a really big iceberg.

Progress on any of these is likely to come from within physics; it will certainly affect physics, and if past history is any guide, it will probably profoundly affect the rest of science and technology as well. Of course, it may take decades to make real progress on these problems, and I suspect this is where some of the attitude Dave refers to comes from – a feeling that the grass is greener on the biological side.

My second set of reasons are more applied, although I suspect they will greatly impact the fundamental questions as well:

  • Nanotech. Yes, there’s lots of hype. My guess is that in the short run, this will turn out to have been over-the-top, but in the long run, it’ll seem incredibly restrained. A self-replicating assembler, even one with extremely limited capabilities, is likely to have astonishing consequences.
  • Complex quantum systems. I think we’ll see a revolution as people assimilate the idea that whole new types of complexity can arise in quantum systems, going entirely beyond what is possible in conventional classical systems. My guess is that phenomena like superconductivity and the fractional quantum Hall effect are the tip of the iceberg.
  • Quantum nanoscience and quantum information. These are really two sides of the same coin: leveraging the power of complex quantum systems to accomplish tasks (either material tasks, or information processing) impossible or impractical in the classical world.

I could, of course, easily be wrong about any of these things, and there’s undoubtedly a lot that I’m missing. But these are all reasons why I’m very optimistic about the role physics will play in twenty-first century science.


Porting to WordPress

Thanks to Peter Rohde for porting my blog and webpage to WordPress. There will be some tinkering over the next few days, as I settle on a style I’m happy with. Comments are welcome.


The OTHER Millennium Prizes

As is well known, the Clay Mathematics Institute is offering seven million-dollar “Millennium Prizes” for the solution of some of the most important open problems in mathematics.

The Australian Mathematical Society is running an interesting series of articles in its Gazette (see here, and look in the March 2005 and Nov 2004 issues) proposing an eighth, ninth, etcetera, problem, all the way up to Hilbert’s number of 23 problems, presumably to be published about 6 years from now.

Would-be millionaires be warned, however, as the Gazette comments that “Due to the Gazette’s limited budget, we are unfortunately not able to back these up with seven-figure prize monies, and have decided on the more modest 10 Australian dollars instead.”


Research Fellowship

The Quantum Information Science group at the University of Queensland is looking to appoint an outstanding researcher to a Research Fellowship for between 3 and 5 years.

A detailed description of the position and application procedure is available here. I encourage strong applicants in quantum information science and related areas to consider applying. (Note that the level of the position is somewhat higher than the positions described in my previous post.)

The closing date for applications is April 15, 2005.

Please pass a link to this message on to any parties you believe may be interested. The URL is:

http://www.qinfo.org/people/nielsen/blog/archive/000184.html


Postdoctoral Fellowships available

Each year the University of Queensland offers a limited number of postdoctoral fellowships for qualified applicants. Several members of the quantum information science group have been past recipients.

These are nice fellowships. They are typically awarded for three years, have a small grant attached to allow the recipient to travel and host visitors, and afford a fair measure of independence, since they are awarded by the University, not by any individual Faculty member.

The 2006 call for applications is now available, and I encourage strong candidates [*] in quantum information science or a closely related area to consider applying. Applications close April 29, 2005.

Please contact me if you’re interested in applying.

Note that applications are to be made directly to the University, not to me.

Please pass a link to this message on to any parties you believe may be interested. The URL is:

http://www.qinfo.org/people/nielsen/blog/archive/000183.html

[*] In practice, this usually means having at least several published papers in refereed journals of high standing.
