Polymath wiki logo

Many people have contributed striking logos for the Polymath wiki. It seems to me that there are now enough suggestions to have a good conversation about which logo to use, and (perhaps) how the logos could be improved, if that’s what people want. I suggest having that conversation at the talk page for the logo.

Speaking about open science

Over the next few months, I’ll be giving talks to help raise awareness of open science in many cities in North America and Europe: what open science is, what the benefits are, what the obstacles are, and how we can overcome those obstacles.

If you’re interested in having me speak in your city, I’d like to hear from you. Please drop me an email at mn@michaelnielsen.org.

As a sampler of the kind of talk I can give, see my talk at TEDxWaterloo. That talk was for a general audience – I’m also interested in speaking to audiences of scientists in all disciplines, to librarians, to people in technology companies and organizations, to people in government. I’d also love to meet people everywhere who are working on open science projects!

My speaking is being supported through a generous grant from the Information Program of the Open Society Institute, with assistance from York University.

As a result of this support, there will be no speaker’s fee. Furthermore, if your organization does not have a budget to support travel, that should not be a barrier.

Open science

The following talk gives a short introduction to open science, and an explanation of why I believe it’s so important for our society. The talk is intended for a general audience, and was given at the very stimulating and enjoyable TEDxWaterloo event held in the Waterloo region, just outside Toronto, in March of 2011.

TEDxWaterloo

I’m speaking today about open science at the TEDxWaterloo event, just outside Toronto. I’m really looking forward to the event, and to all the talks – I’m especially excited to see Abby Sunderland and Roberta Bondar. There will be live streaming for the event. I’m speaking around 1:20 pm (Canadian EST), if you’re interested. I don’t seem to be able to find a schedule for the other speakers.

“A change of perspective is worth 80 IQ points”

I was recently asked to prepare a two minute talk on a topic of my choice, for a small audience of about 10 people. Here’s what I came up with.

This is a one dollar bill [holds one up].

[Picking out two people in the audience] Alice and Bob – later in this talk I’m going to use the word “points”.

Can I ask you to pay close attention to what I’m saying, and when you hear me say “points”, stand up from your seat?

Will you do that for me?

Thanks!

Just to make it a bit competitive, I’ll give the first of you to stand up the dollar bill.

Many of us, myself included, often think of a person’s intellectual capacity as something that’s fixed, a feature of their innate makeup.

Intellectually, we may know that this is not so, but we take it so much for granted that it’s built into our language. We say “she’s very clever” or “he’s a bright guy” to describe people who we believe measure up when it comes to intellectual capacity.

A very different point of view has been put forward by the computer scientist Alan Kay. Kay’s saying is this: “A change of perspective is worth 80 IQ points.”

[We have a winner! Gives out the dollar to Alice]

Hmm. “A change of perspective is worth 80 IQ points.”

This is a saying that repays thought.

I just showed you in a very small way that it’s true: by changing Alice and Bob’s perspective on my talk, I’m betting they paid much closer attention to what I was saying. It’s not an 80 IQ point boost, true, but it’s still magical: a tiny shift in perspective can help us focus better. [* – but see footnote below, added in response to feedback]

It tells us that intellectual capacities aren’t innate; they can be dramatically changed by shifts in our perspective. And we can consciously develop strategies to shift our perspective. I don’t have time to review strategies for doing this, but I can mention one meta-strategy, due to the musician Brian Eno and the artist Peter Schmidt. They made up a card deck of oblique strategies: a deck of cards on which they’ve written many different strategies for solving problems. Most of the strategies are ways of changing perspective: “What would your closest friend do?”; “work at a different speed”; and so on. When stuck on a problem you can draw out a card, and get a new perspective.

I think we should all make up our own decks of oblique strategies that we can use to get new perspectives, and to give our own intelligence an occasional boost.

[*] A commenter on Hacker News makes the good point that offering a dollar may cause some people to screen out everything except the word “points” – they may end up effectively stupider. Unfortunately, I can’t ask my audience members “Alice” and “Bob” whether this was the case, because after preparing the talk I was asked instead to give an extemporaneous talk. But the talk could be modified to take account of this observation. Suppose instead that I’d offered a dollar to whoever provided the best summary of the talk at the end. I’ve been in analogous situations in the past, and know that they made me focus a lot better.

The Third & The Seventh

I’ve watched the video below more than a dozen times over the past year. It’s a piece done almost entirely in computer graphics, entitled “The Third & The Seventh”, created by one person, Alex Roman (AKA Jorge Seva):

It is, I think, the most beautiful piece of art I’ve seen that was made during my lifetime. It’s not just the technical merit, although there’s plenty that’s jaw-dropping. But even if it were not CG, I would think it astonishing. He has an eye for beauty, and an ability to show things that I, for one, otherwise wouldn’t see. Amazing.

It’s worth watching in full screen, with headphones on. If you have a projector and a good sound system, I recommend watching it that way. Here’s the Vimeo link, where he has links to other videos which share some information about how he made it. There’s lots of commentary about the video on the web: I found this interview with Roman interesting.

Awesome grants

Beginning this month, the Toronto Awesome Foundation will give away $1,000 every month for someone to do something that’s, well, awesome.

Here’s how to apply.

There’s no reporting, no strings, no oversight. Just $1,000 to the winner to do whatever they think is worth doing.

Deadline for submissions is February 15.

Many questions are addressed in the links at the bottom of the announcement post.

There are other chapters of the Awesome Foundation around the world, and here are a few of the awesome things people have done: grown food to help keep their local food pantry stocked; helped get a Fab Lab started in Washington DC; and made invisible musical instruments. There are many more.

It’s going to happen every month, and you can keep track of announcements at the Toronto Awesome blog. I won’t post updates on my blog every month, but I will on Twitter.

The money comes from 10 Trustees (I’m one), who each put up $100 a month. I think it’ll be fun, and it’s a good way of supporting people doing interesting things.

(Yes, it’s the projects funded that are supposed to be Awesome, not the trustees. Hah hah.)

What should a reasonable person believe about the Singularity?

In 1993, the science fiction author Vernor Vinge wrote a short essay proposing what he called the Technological Singularity. Here’s the sequence of events Vinge outlines:

A: We will build computers of at least human intelligence at some time in the future, let’s say within 100 years.

B: Those computers will be able to rapidly and repeatedly increase their own intelligence, quickly resulting in computers that are far more intelligent than human beings.

C: This will cause an enormous transformation of the world, so much so that it will become utterly unrecognizable, a phase Vinge terms the “post-human era”. This event is the Singularity.

The basic idea is quite well known. Perhaps because the conclusion is so remarkable, almost outrageous, it’s an idea that evokes a strong emotional response in many people. I’ve had intelligent people tell me with utter certainty that the Singularity is complete tosh. I’ve had other intelligent people tell me with similar certainty that it should be one of the central concerns of humanity.

I think it’s possible to say something interesting about what range of views a reasonable person might have on the likelihood of the Singularity. To be definite, let me stipulate that it should occur in the not-too-distant future – let’s say within 100 years, as above. What we’ll do is figure out what probability someone might reasonably assign to the Singularity happening. To do this, observe that the probability [tex]p(C)[/tex] of the Singularity can be related to several other probabilities:

[tex] (*) \,\,\,\, p(C) = p(C|B) p(B|A) p(A). [/tex]

In this equation, [tex]p(A)[/tex] is the probability of event [tex]A[/tex], human-level artificial intelligence within 100 years. The probabilities denoted [tex]p(X|Y)[/tex] are conditional probabilities for event [tex]X[/tex] given event [tex]Y[/tex]. The truth of the equation is likely evident, and so I’ll omit the derivation – it’s a simple exercise in applying conditional probability, together with the observation that event [tex]C[/tex] can only happen if [tex]B[/tex] happens, and event [tex]B[/tex] can only happen if [tex]A[/tex] happens.
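(For anyone who wants that exercise worked through, here is one way it goes, using nothing beyond the chain rule for probabilities: since [tex]C[/tex] occurs only if [tex]B[/tex] occurs, and [tex]B[/tex] occurs only if [tex]A[/tex] occurs,

[tex] p(C) = p(C \wedge B \wedge A) = p(C|B \wedge A) \, p(B|A) \, p(A) = p(C|B) \, p(B|A) \, p(A), [/tex]

where the last step uses the fact that [tex]B[/tex] occurring guarantees [tex]A[/tex] has occurred, so conditioning on [tex]B \wedge A[/tex] is the same as conditioning on [tex]B[/tex].)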

I’m not going to argue for specific values for these probabilities. Instead, I’ll argue for ranges of probabilities that I believe a person might reasonably assert for each probability on the right-hand side. I’ll consider both a hypothetical skeptic, who is pessimistic about the possibility of the Singularity, and also a hypothetical enthusiast for the Singularity. In both cases I’ll assume the person is reasonable, i.e., a person who is willing to acknowledge limits to our present-day understanding of the human brain and computer intelligence, and who is therefore not overconfident in their own predictions. By combining these ranges, we’ll get a range of probabilities that a reasonable person might assert for the probability of the Singularity.

Now, before I get into estimating ranges, it’s worth keeping in mind a psychological effect that has been confirmed over many decades: the overconfidence bias. When asked to estimate the probability of their opinions being correct, people routinely overestimate the probability. For example, in a 1960 experiment subjects were asked to estimate the probability that they could correctly spell a word. Even when people said they were 100 percent certain they could correctly spell a word, they got it right only 80 percent of the time! Similar effects have been reported for many different problems and in different situations. It is, frankly, a sobering literature to read.

This is important for us, because when it comes to both artificial intelligence and how the brain works, even the world’s leading experts don’t yet have a good understanding of how things work. Any reasonable probability estimates should factor in this lack of understanding. Someone who asserts a very high or very low probability of some event happening is implicitly asserting that they understand quite a bit about why that event will or will not happen. If they don’t have a strong understanding of the event in question, then chances are that they’re simply expressing overconfidence.

Okay, with those warnings out of the way, let’s start by thinking about [tex]p(A)[/tex]. I believe a reasonable person would choose a value for [tex]p(A)[/tex] somewhere between [tex]0.1[/tex] and [tex]0.9[/tex]. I can, for example, imagine an artificial intelligence skeptic estimating [tex]p(A) = 0.2[/tex]. But I’d have a hard time taking seriously someone who estimated [tex]p(A) = 0.01[/tex]. It seems to me that estimating [tex]p(A) = 0.01[/tex] would require some deep insight into how human thought works, and how those workings compare to modern computers, the sort of insight I simply don’t think anyone yet has. In short, it seems to me that it would indicate a serious overconfidence in one’s own understanding of the problem.

Now, it should be said that there have, of course, been a variety of arguments made against artificial intelligence. But I believe that most of the proponents of those arguments would admit that there are steps in the argument where they are not sure they are correct, but merely believe or suspect they are correct. For instance, Roger Penrose has speculated that intelligence and consciousness may require effects from quantum mechanics or quantum gravity. But I believe Penrose would admit that his conclusions rely on reasoning that even the most sympathetic would regard as quite speculative. Similar remarks apply to the other arguments I know, both for and against artificial intelligence.

What about an upper bound on [tex]p(A)[/tex]? Well, for much the same reason as in the case of the lower bound, I’d have a hard time taking seriously someone who estimated [tex]p(A) = 0.99[/tex]. Again, that would seem to me to indicate an overconfidence that there would be no bottlenecks along the road to artificial intelligence. Sure, maybe it will only require a straightforward continuation of the road we’re currently on. But maybe some extraordinarily hard-to-engineer but as yet unknown physical effect is involved in creating artificial intelligence? I don’t think that’s likely – but, again, we don’t yet know all that much about how the brain works. Indeed, to pursue a different tack, it’s difficult to argue that there isn’t at least a few percent chance that our civilization will suffer a major regression over the next one hundred years. After all, historically nearly all civilizations have lasted no more than a few centuries.

What about [tex]p(B|A)[/tex]? Here, again, I think a reasonable person would choose a probability between [tex]0.1[/tex] and [tex]0.9[/tex]. A probability much above [tex]0.9[/tex] discounts the idea that there’s some bottleneck we don’t yet understand that makes it very hard to bootstrap as in step [tex]B[/tex]. And a probability much below [tex]0.1[/tex] again seems like overconfidence: to hold such an opinion would, in my opinion, require some deep insight into why the bootstrapping is impossible.

What about [tex]p(C|B)[/tex]? Here, I’d go for tighter bounds: I think a reasonable person would choose a probability between [tex]0.2[/tex] and [tex]0.9[/tex].
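To see how these ranges combine, here’s a quick sketch of the endpoint arithmetic as a toy Python snippet (the function name and structure are just for illustration):

def combined_range(p_a, p_b_given_a, p_c_given_b):
    # Each argument is a (low, high) pair of probabilities.
    # Multiplying the low endpoints gives the combined lower bound on p(C),
    # and multiplying the high endpoints gives the combined upper bound.
    low = p_a[0] * p_b_given_a[0] * p_c_given_b[0]
    high = p_a[1] * p_b_given_a[1] * p_c_given_b[1]
    return low, high

# Using the ranges argued for above:
print(combined_range((0.1, 0.9), (0.1, 0.9), (0.2, 0.9)))
# approximately (0.002, 0.729), i.e. 0.2 percent up to just over 70 percent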

If we put all those ranges together, we get a “reasonable” probability for the Singularity somewhere in the range of 0.2 percent – one in 500 – up to just over 70 percent. I regard both those as extreme positions, indicating a very strong commitment to the positions espoused. For more moderate probability ranges, I’d use (say) [tex]0.2 < p(A) < 0.8[/tex], [tex]0.2 < p(B|A) < 0.8[/tex], and [tex]0.3 < p(C|B) < 0.8[/tex]. So I believe a moderate person would estimate a probability roughly in the range of 1 to 50 percent.

These are interesting probability ranges. In particular, the 0.2 percent lower bound is striking. At that level, it’s true that the Singularity is pretty darned unlikely. But it’s still edging into the realm of a serious possibility. And to get this kind of probability estimate requires a person to hold quite an extreme set of positions, a range of positions that, in my opinion, while reasonable, requires considerable effort to defend. A less extreme person would end up with a probability estimate of a few percent or more. Given the remarkable nature of the Singularity, that’s quite high.

In my opinion, the main reason the Singularity has attracted some people’s scorn and derision is superficial: it seems at first glance like an outlandish, science-fictional proposition. The end of the human era! It’s hard to imagine, and easy to laugh at. But any thoughtful analysis either requires one to consider the Singularity as a serious possibility, or demands a deep and carefully argued insight into why it won’t happen.

My book “Reinventing Discovery” will be released in 2011. It’s about the way open online collaboration is revolutionizing science. Many of the themes in the book are described in this essay. If you’d like to be notified when the book is available, please send a blank email to the.future.of.science@gmail.com with the subject “subscribe book”. You can subscribe to my blog here, and to my Twitter account here.

10th Anniversary Edition of “Quantum Computation and Quantum Information”

I’m pleased to say that a 10th anniversary edition of my book with Ike Chuang on quantum computing has just been released by Cambridge University Press (Amazon link).

Apart from expressing some authorial pleasure, the point of this post is to let people who already have a copy of the book know that the book hasn’t been substantially revised. Please don’t buy another copy under the impression that it’s an all-new edition. If you actually see a copy in “real life”, the publisher has gone to some effort to make it clear that the changes to the book are largely cosmetic – we have a new foreword and afterword, several people kindly contributed new endorsements, and the cover has changed a bit. But I’d hate to think that someone who already owns a copy and who orders their books online will buy a copy under the impression that the book has changed a lot. It hasn’t.

Why release a 10th Anniversary edition? The suggestion came from the publisher. So far as I understand publishing (which is not well), CUP was keen because it lets them make a renewed push on the book with booksellers. Keep in mind that the publisher’s primary customer is the booksellers, not the reader, and it’s to buyers at the booksellers that they are actually selling. I don’t understand the dynamics of sales to booksellers, but it seems that releasing an edition like this, with new marketing materials and endorsements, does result in an uptick in sales. Rather pleasantly, it also drops the recommended retail price substantially (from US $100 to US $75), so if you’ve been put off by the price, now is a good time to buy. (Admittedly, the way Amazon discounts books, it ends up not making that much difference if you buy from Amazon.)