The Third & The Seventh

I’ve watched the video below more than a dozen times over the past year. It’s an almost entirely computer-generated piece, entitled “The Third & The Seventh”, made by one person, Alex Roman (AKA Jorge Seva):

It is, I think, the most beautiful piece of art I’ve seen that was made during my lifetime. It’s not just the technical merit, although there’s plenty that’s jaw-dropping. But even if it were not CG, I would think it astonishing. He has an eye for beauty, and an ability to show things that I, for one, otherwise wouldn’t see. Amazing.

It’s worth watching in full screen, with headphones on. If you have a projector and a good sound system, I recommend watching it that way. Here’s the Vimeo link, where he has links to other videos that share some information about how he made it. There’s lots of commentary about the video around on the web: I found this interview with Roman interesting.

Awesome grants

Beginning this month, the Toronto Awesome Foundation will give away $1,000 every month for someone to do something that’s, well, awesome.

Here’s how to apply.

There’s no reporting, no strings, no oversight. Just $1,000 to the winner to do whatever they think is worth doing.

Deadline for submissions is February 15.

Many questions are addressed in the links at the bottom of the announcement post.

There are other chapters of the Awesome Foundation around the world, and here are a few of the awesome things people have done: grown food to help keep their local food pantry stocked; helped get a Fab Lab started in Washington, DC; and made invisible musical instruments. There are many more.

It’s going to happen every month, and you can keep track of announcements at the Toronto Awesome blog. I won’t post updates on my blog every month, but I will on Twitter.

The money comes from 10 Trustees (I’m one), who each put up $100 a month. I think it’ll be fun, and it’s a good way of supporting people doing interesting things.

(Yes, it’s the projects funded that are supposed to be Awesome, not the trustees. Hah hah.)

What should a reasonable person believe about the Singularity?

In 1993, the science fiction author Vernor Vinge wrote a short essay proposing what he called the Technological Singularity. Here’s the sequence of events Vinge outlines:

A: We will build computers of at least human intelligence at some time in the future, let’s say within 100 years.

B: Those computers will be able to rapidly and repeatedly increase their own intelligence, quickly resulting in computers that are far more intelligent than human beings.

C: This will cause an enormous transformation of the world, so much so that it will become utterly unrecognizable, a phase Vinge terms the “post-human era”. This event is the Singularity.

The basic idea is quite well known. Perhaps because the conclusion is so remarkable, almost outrageous, it’s an idea that evokes a strong emotional response in many people. I’ve had intelligent people tell me with utter certainty that the Singularity is complete tosh. I’ve had other intelligent people tell me with similar certainty that it should be one of the central concerns of humanity.

I think it’s possible to say something interesting about what range of views a reasonable person might have on the likelihood of the Singularity. To be definite, let me stipulate that it should occur in the not-too-distant future – let’s say within 100 years, as above. What we’ll do is figure out what probability someone might reasonably assign to the Singularity happening. To do this, observe that the probability [tex]p(C)[/tex] of the Singularity can be related to several other probabilities:

[tex] (*) \,\,\,\, p(C) = p(C|B) p(B|A) p(A). [/tex]

In this equation, [tex]p(A)[/tex] is the probability of event [tex]A[/tex], human-level artificial intelligence within 100 years. The probabilities denoted [tex]p(X|Y)[/tex] are conditional probabilities for event [tex]X[/tex] given event [tex]Y[/tex]. The truth of the equation is likely evident – it’s a simple exercise in applying conditional probability, together with the observation that event [tex]C[/tex] can only happen if [tex]B[/tex] happens, and event [tex]B[/tex] can only happen if [tex]A[/tex] happens – but the derivation is sketched below for completeness.
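
Here’s one way to see it, using nothing beyond the definition of conditional probability and the fact that [tex]C[/tex] implies [tex]B[/tex], and [tex]B[/tex] implies [tex]A[/tex]:

[tex] p(C) = p(C, B) = p(C|B)\, p(B) = p(C|B)\, p(B, A) = p(C|B)\, p(B|A)\, p(A). [/tex]

The first and third equalities hold because [tex]C[/tex] can only occur together with [tex]B[/tex], and [tex]B[/tex] only together with [tex]A[/tex]; the second and fourth are just the definition of conditional probability.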

I’m not going to argue for specific values for these probabilities. Instead, I’ll argue for ranges of probabilities that I believe a person might reasonably assert for each probability on the right-hand side. I’ll consider both a hypothetical skeptic, who is pessimistic about the possibility of the Singularity, and also a hypothetical enthusiast for the Singularity. In both cases I’ll assume the person is reasonable, i.e., a person who is willing to acknowledge limits to our present-day understanding of the human brain and computer intelligence, and who is therefore not overconfident in their own predictions. By combining these ranges, we’ll get a range of probabilities that a reasonable person might assert for the probability of the Singularity.

Now, before I get into estimating ranges, it’s worth keeping in mind a psychological effect that has been confirmed over many decades: the overconfidence bias. When asked to estimate the probability of their opinions being correct, people routinely overestimate the probability. For example, in a 1960 experiment subjects were asked to estimate the probability that they could correctly spell a word. Even when people said they were 100 percent certain they could correctly spell a word, they got it right only 80 percent of the time! Similar effects have been reported for many different problems and in different situations. It is, frankly, a sobering literature to read.

This is important for us, because when it comes to both artificial intelligence and how the brain works, even the world’s leading experts don’t yet have a good understanding of how things work. Any reasonable probability estimates should factor in this lack of understanding. Someone who asserts a very high or very low probability of some event happening is implicitly asserting that they understand quite a bit about why that event will or will not happen. If they don’t have a strong understanding of the event in question, then chances are that they’re simply expressing overconfidence.

Okay, with those warnings out of the way, let’s start by thinking about [tex]p(A)[/tex]. I believe a reasonable person would choose a value for [tex]p(A)[/tex] somewhere between [tex]0.1[/tex] and [tex]0.9[/tex]. I can, for example, imagine an artificial intelligence skeptic estimating [tex]p(A) = 0.2[/tex]. But I’d have a hard time taking seriously someone who estimated [tex]p(A) = 0.01[/tex]. It seems to me that estimating [tex]p(A) = 0.01[/tex] would require some deep insight into how human thought works, and how those workings compare to modern computers, the sort of insight I simply don’t think anyone yet has. In short, it seems to me that it would indicate a serious overconfidence in one’s own understanding of the problem.

Now, it should be said that there have, of course, been a variety of arguments made against artificial intelligence. But I believe that most of the proponents of those arguments would admit that there are steps in the argument where they are not sure they are correct, but merely believe or suspect they are correct. For instance, Roger Penrose has speculated that intelligence and consciousness may require effects from quantum mechanics or quantum gravity. But I believe Penrose would admit that his conclusions rely on reasoning that even the most sympathetic would regard as quite speculative. Similar remarks apply to the other arguments I know, both for and against artificial intelligence.

What about an upper bound on [tex]p(A)[/tex]? Well, for much the same reason as in the case of the lower bound, I’d have a hard time taking seriously someone who estimated [tex]p(A) = 0.99[/tex]. Again, that would seem to me to indicate an overconfidence that there would be no bottlenecks along the road to artificial intelligence. Sure, maybe it will only require a straightforward continuation of the road we’re currently on. But maybe some extraordinarily hard-to-engineer but as yet unknown physical effect is involved in creating artificial intelligence? I don’t think that’s likely – but, again, we don’t yet know all that much about how the brain works. Indeed, to pursue a different tack, it’s difficult to argue that there isn’t at least a few percent chance that our civilization will suffer a major regression over the next one hundred years. After all, historically nearly all civilizations have lasted no more than a few centuries.

What about [tex]p(B|A)[/tex]? Here, again, I think a reasonable person would choose a probability between [tex]0.1[/tex] and [tex]0.9[/tex]. A probability much above [tex]0.9[/tex] discounts the idea that there’s some bottleneck we don’t yet understand that makes it very hard to bootstrap as in step [tex]B[/tex]. And a probability much below [tex]0.1[/tex] again seems like overconfidence: to hold such an opinion would, in my opinion, require some deep insight into why the bootstrapping is impossible.

What about [tex]p(C|B)[/tex]? Here, I’d go for tighter bounds: I think a reasonable person would choose a probability between [tex]0.2[/tex] and [tex]0.9[/tex].
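
To make the arithmetic below easy to check, here is a minimal Python sketch that simply multiplies the endpoints of the ranges above. The numbers are the illustrative bounds from this discussion, not canonical estimates:

```python
# Multiply the endpoints of the "reasonable" ranges discussed above.
# These are the illustrative bounds from the text, not canonical estimates.
ranges = {
    "p(A)": (0.1, 0.9),    # human-level AI within 100 years
    "p(B|A)": (0.1, 0.9),  # rapid, repeated self-improvement, given A
    "p(C|B)": (0.2, 0.9),  # radical transformation of the world, given B
}

lower, upper = 1.0, 1.0
for lo, hi in ranges.values():
    lower *= lo
    upper *= hi

print(f"p(C) lies between {lower:.3f} and {upper:.3f}")
# -> p(C) lies between 0.002 and 0.729, i.e. roughly 0.2% up to just over 70%
```

Swapping in the more moderate ranges discussed below gives roughly 0.012 to 0.512, i.e. about 1 to 50 percent.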

If we put all those ranges together, we get a “reasonable” probability for the Singularity somewhere in the range of 0.2 percent – one in 500 – up to just over 70 percent. I regard both those as extreme positions, indicating a very strong commitment to the positions espoused. For more moderate probability ranges, I’d use (say) [tex]0.2 < p(A) < 0.8[/tex], [tex]0.2 < p(B|A) < 0.8[/tex], and [tex]0.3 < p(C|B) < 0.8[/tex]. So I believe a moderate person would estimate a probability roughly in the range of 1 to 50 percent.

These are interesting probability ranges. In particular, the 0.2 percent lower bound is striking. At that level, it’s true that the Singularity is pretty darned unlikely. But it’s still edging into the realm of a serious possibility. And to get this kind of probability estimate requires a person to hold quite an extreme set of positions, a range of positions that, in my opinion, while reasonable, requires considerable effort to defend. A less extreme person would end up with a probability estimate of a few percent or more. Given the remarkable nature of the Singularity, that’s quite high.

In my opinion, the main reason the Singularity has attracted some people’s scorn and derision is superficial: it seems at first glance like an outlandish, science-fictional proposition. The end of the human era! It’s hard to imagine, and easy to laugh at. But any thoughtful analysis either requires one to consider the Singularity as a serious possibility, or demands a deep and carefully argued insight into why it won’t happen.

My book “Reinventing Discovery” will be released in 2011. It’s about the way open online collaboration is revolutionizing science. Many of the themes in the book are described in this essay. If you’d like to be notified when the book is available, please send a blank email to the.future.of.science@gmail.com with the subject “subscribe book”. You can subscribe to my blog here, and to my Twitter account here.