Communication can stunt innovation

Fascinating article earlier this year in “Current Directions in Psychological Science”, examining how the structure of communication in a group can affect the rate at which innovations occur. To quote the EurekAlert! story:

Good ideas can have drawbacks. When information is freely shared, good ideas can stunt innovation by distracting others from pursuing even better ideas, according to Indiana University cognitive scientist Robert Goldstone.

“How do you structure your community so you get the best solution out of the group?” Goldstone said. “It turns out not to be effective if different inventors and labs see exactly what everyone else is doing because of the human tendency to glom onto the current ‘best’ solution.”

[…]

This study used a virtual environment in which study participants worked in specifically designed groups to solve a problem. Participants guessed numbers between 1 and 100, with each number having a hidden value. The goal was for individuals to accumulate the highest score through several rounds of guessing. Across different conditions, the relationship between guesses and scores could either be simple or complex. The participants saw the results of their own guesses and some or all of the guesses of the others in their group.

In the “fully connected” group, everyone’s work was completely accessible to everyone else — much like a tight-knit family or small town. In the “locally connected” group, participants primarily were aware of what their neighbors, or the people on either side, were doing. In the “small world” group, participants also were primarily aware of what their neighbors were doing, but they also had a few distant connections that let them send or retrieve good ideas from outside of their neighborhood.

Goldstone found that the fully connected groups performed the best when solving simple problems. Small world groups, however, performed better on more difficult problems. For these problems, the truism “The more information, the better” is not valid.

“The small world network preserves diversity,” Goldstone said. “One clique could be coming up with one answer, another clique could be coming up with another. As a result, the group as a whole is searching the problem space more effectively. For hard problems, connecting people by small world networks offers a good compromise between having members explore a variety of innovations, while still quickly disseminating promising innovations throughout the group.”

The original article is behind a publisher paywall, but here’s a link for those with access.
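To make the experimental setup a little more concrete, here’s a minimal toy simulation in the spirit of the study (not the authors’ actual task or code): each agent repeatedly imitates the best guess it can see in its network, with a little random exploration, on a payoff landscape that has a decoy local peak and a higher global peak. The payoff function, group size, rewiring probability, and imitation rule are all my own illustrative assumptions.

```python
import random

def make_network(kind, n):
    """Map each agent to the agents whose guesses it can see."""
    if kind == "full":
        return {i: [j for j in range(n) if j != i] for i in range(n)}
    net = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring: one neighbour on either side
    if kind == "small-world":
        for i in range(n):
            if random.random() < 0.2:                # a few long-range shortcuts
                net[i].append(random.randrange(n))
    return net

def score(guess):
    """Toy 'complex' payoff: a local peak near 30, a higher peak near 80."""
    return max(50 - abs(guess - 30), 80 - 2 * abs(guess - 80), 0)

def simulate(kind, n=15, rounds=15):
    net = make_network(kind, n)
    guesses = [random.randint(1, 100) for _ in range(n)]
    for _ in range(rounds):
        new_guesses = []
        for i in range(n):
            visible = [guesses[i]] + [guesses[j] for j in net[i]]
            best = max(visible, key=score)
            # imitate the best visible guess, plus a little random exploration
            new_guesses.append(max(1, min(100, best + random.randint(-3, 3))))
        guesses = new_guesses
    return sum(score(g) for g in guesses) / n

random.seed(1)
for kind in ("full", "local", "small-world"):
    print(kind, round(simulate(kind), 1))
```

The intuition the toy model is meant to capture is the one Goldstone describes: with full visibility everyone quickly piles onto the first good peak anyone finds, while locally clustered networks preserve diversity for longer, and so are more likely to locate the higher peak on hard landscapes.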

Many thoughts immediately arise:

  • Mark Newman has shown that many scientific collaboration networks have a small world structure, at least as of 2000.
  • The “small world” classification is pretty coarse. It’d be helpful to say more precisely which network structures give rise to or inhibit innovation.
  • The problems posed by Goldstone and collaborators were pretty artificial. What happens for more realistic problems?
  • How does problem-solving effectiveness scale with the size of the group? I expect that the point of diminishing returns is hit pretty quickly even with very difficult problems. A good understanding of this would potentially have implications for science funding.
  • Certain periods of intellectual history were especially fertile. What does the pattern of collaborations look like for the founders of quantum mechanics (or insert your favourite topic)? Is it special?
  • More generally, what predictive power does the pattern of collaborations (or citations, or any other type of linkage) have?

My theoretical charity of choice: George W. Bush

For the last few weeks I’ve been experimenting with stickk.com. The site lets you pick a commitment – say, to exercise for 3 hours per week – and then sign a contract saying that if you don’t meet the commitment, a certain sum of money will be given to the anti-charity of your choice.

My anti-charity is the Bush Memorial Library. If I miss a commitment, I’ll automatically make a 50 dollar donation. It’s a good choice for me personally: even though I dislike Bush, I won’t feel too bad if I strike out (it’s just a library, not a re-election fund), but I find that thinking about Bush really motivates me to keep my commitments.

To keep you honest, you get to nominate an independent referee and supporters.

It works surprisingly well. When I get up in the morning, and just can’t stand the thought of going to the treadmill, I simply bring an image of Bush to mind. It gets me energized every time. So far I’m 5 from 5 – I’ve met every one of my commitments. I’m tempted to put a poster of the man up somewhere.

Psychologically, the effect is weird. In economic terms, I seem to apply a pretty steep discount rate to my future time. That is, the site exploits the fact that when I’m making the commitment, I put a much lower value on my future time than I actually place on it once the moment arrives. It’s a clever trick.

One other oddity is that the effect seems to last. I’ve found that even when I take a week off, it seems to be a lot easier to keep the commitments anyway.

Science and Wikipedia

Harvard’s amazing Berkman Center for Internet and Society had their tenth anniversary celebration last week. During one of the talks, the founder of the Berkman Center, Charles Nesson, asked the following question about the relationship between universities and Wikipedia:

Wikipedia is the instantiation of the building of the knowledge commons. Why didn’t it come out of a university?

I think it’s an important question. It bothers me a lot that Wikipedia didn’t come out of a university. Academics make a big song and dance about their role in advancing human knowledge, yet they’ve played only a bit part in developing one of the most important tools for the advancement of human knowledge built in my lifetime.

Anyway, here’s my response to the question, excerpted from a draft of the first chapter of my book:

Given that Wikipedia’s stated vision is to give “every single person in the world free access to the sum of all human knowledge”, you might guess it was started by scientists eager to collect all of human knowledge into a single source. In fact, in the early days very few professional scientists were involved. To contribute would arouse suspicions from your colleagues that you were wasting time that could be spent on more “useful” things, like teaching, or writing papers and grants. Even today, contributing to Wikipedia is regarded as a low-value activity by most professional scientists.

Some scientists reading this will object that contributing to Wikipedia isn’t really science. And that’s certainly true if you take a narrow view of what science is, if you believe it’s about doing research in the ivory tower, and publishing in specialized scientific journals. However, if you take a broader view of what science is, if you believe that it’s about discovering how the world works, and sharing that understanding with the rest of humanity, then the lack of early scientific support for Wikipedia looks like an opportunity lost.

[…] It’s not that scientists disapprove of Wikipedia; indeed, many find it an incredibly valuable resource, not as the final word on a topic, but rather as a starting point and reference work. It’s that within the culture of science there are no incentives to contribute to Wikipedia, and so contributing is a low-status and therefore low-priority activity.

It’s important to appreciate how astonishing this state of affairs is. Wikipedia is one of the most important intellectual innovations of our time. It or its descendants may one day rank alongside innovations such as the Great Library of Alexandria or the US Library of Congress. Yet scientists, supposedly the fount of innovation in our society, not only played virtually no role in setting up Wikipedia; contributing was actively discouraged within the scientific community. The early stages of development were instead due to an ad hoc group of people, most from outside science; founder Jimmy Wales had a background in finance and as a web developer for an “erotic search engine”, not science. Nowadays, Wikipedia’s success has to a limited extent legitimized contribution within the scientific community, but the lack of early involvement by scientists is still remarkable.

I don’t see a complete solution to this problem, or how to prevent a repeat as more tools of the same order of importance as Wikipedia are created. A partial solution is to build credible tools for measuring contributions that fall outside the conventional journal-citation system, and so are presently considered unconventional and undervalued.

Social software and simplicity

Great interview with Clay Shirky in the Wall Street Journal. Shirky makes a particularly interesting comment about social software:

It’s almost universally the case with social software that the software that launches with the fewest features is the stuff that takes off. The shift is from thinking about the computer as a box to thinking of the computer as a door, and nobody wants a door with 37 handles. Twitter has six features, and it launched with only one. A brutally simple mental model of the software that’s shared by all users turns out to be a better predictor of adoption and value than a completely crazy collection of features that ends up being slightly different for every user.

Empirically, Shirky seems to be right. Email, Facebook, Usenet, Twitter, wikis, Blogger, Flickr, Friendster, del.icio.us – all were incredibly simple when they launched. They certainly had a “brutally simple mental model of the software that’s shared by all users”.

But I’m not sure I believe this is true, and I certainly don’t know why it’s true, if it is.

Maybe a partial explanation is that having a simple shared mental model makes network effects much more powerful. When we think about social software as users, we don’t just think about the software, we also think about the network of other users, and it’s important to be confident that we have a shared understanding with those other users. If we’re not confident of that shared understanding, we won’t connect, and the value of the software will diminish.

Update: In comments, Clay Shirky replies:

“If we’re not confident of that shared understanding, we won’t connect, and the value of the software will diminish.”

I don’t think this is a partial explanation. I think this is *the* explanation. Given the competition all social software has, a simple and shared mental model is essential to elevating the eventual leaders over the competition.

Request for comments on “The Future of Science”

Update: Thank you to everyone who has replied. I’m no longer looking for test readers.

I completed a first draft of my book “The Future of Science” last year. I was happy with much of it, but the draft was inadequate in important ways. I put it aside to let the ideas gestate. I took it up again a couple of months back and have since been hard at work on a second draft. This time around I’m much happier, and I’m now looking for a few people willing to comment on the second draft of chapter 1.

The book is aimed at a wide audience, and so I’m interested in feedback from a wide cross-section of people. I already know many people in the hard sciences, but there are many other groups I’d also like to reach. For that reason I’m particularly interested in getting feedback from programmers, entrepreneurs, biologists, social scientists, students (both undergrad and grad), and non-scientists.

If you’re in one or more of these groups, and interested in reading and providing comments, please let me know (mnielsen at perimeterinstitute dot ca). Of course, if you’re in the hard sciences and keen to read, I’d also like to hear from you! Unfortunately, this is not a paying gig, unless you count my thanks in the acknowledgements.

A few bits and pieces about what sorts of commentary would be especially helpful:

  • Which are the boring parts? Elmore Leonard has said the secret of good writing is to leave out the boring parts. Unfortunately, I find it hard to spot the boring parts in my own work, so comments from sympathetic and perceptive readers help. One trick I find useful is to score all my sentences or paragraphs: 1 = boring, 2 = okay, 3 = interesting. Eliminating, compressing or changing the 1’s and 2’s inevitably strengthens the piece.
  • Where is the book unclear? Where do I write like a specialist – a physicist, a geek, or an academic?
  • What important ideas are missing? Is anything flat-out wrong? What’s unconvincing?
  • How can I improve the impact of the writing? Simple comments – “this paragraph is flat”, “you could use a more active verb here” – can be incredibly helpful.

(You might ask why I don’t just blog the drafts. Certainly, this is becoming pretty common, and it’s something I’m keen to do. However, I’m yet to sign a contract with a publisher, and I’d like to get my future publisher’s endorsement before blogging huge swathes of the book.)

US Presidential Candidates on FriendFeed

All three of the remaining candidates have accounts on FriendFeed – Obama, Clinton, McCain. They’re an interesting contrast. Obama comes across as the most engaged, making effective use of things like Twitter and Flickr. John McCain is making surprisingly aggressive use of YouTube, but otherwise is mostly quiet online.

Wiki set points

Imagine putting the Feynman Lectures on Physics up for public editing on a wiki (Feynmanpedia). Would they get better or worse?

My immediate gut instinct is “worse”. However, when I posed this question to a colleague I greatly respect, he asked me pointedly whether I’d actually tried it. It’s a good question. There’s no doubt that with some wiki communities, perhaps most, the Feynman Lectures would rapidly deteriorate in quality. But maybe with the right community they’d improve.

For many wiki communities there’s a useful notion of a “set point”, a quality level that an article written by that community will converge to over time. For a poorly written article, most edits will tend to improve the article, and only a few will make it worse; thus, the article will improve over time. However, for a superb article, many of the edits, even well-intended ones, will make the article worse, and so the article will get worse over time. The set point is the quality level at which edits improving and worsening the article balance each other out.
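To make the set point idea a little more concrete, here’s a minimal sketch as a toy model (my own illustrative numbers, not data about any real wiki). Suppose the probability that a random edit improves an article falls as the article gets better; the set point is then the quality at which improving and worsening edits balance, and articles starting above or below it drift towards it.

```python
import random

def p_improve(quality):
    """Toy assumption: the better the article, the less likely a random edit helps."""
    return max(0.0, 0.9 - 0.01 * quality)   # balance point (p = 0.5) at quality 40

def simulate_article(start_quality, edits=2000):
    quality = start_quality
    for _ in range(edits):
        quality += 1 if random.random() < p_improve(quality) else -1
    return quality

random.seed(2)
# Articles starting far below and far above the balance point both drift towards it.
for start in (0, 100):
    print(start, "->", simulate_article(start))
```

Under these made-up numbers the set point sits at a quality of about 40. A more skilled or more careful editing community corresponds to a p_improve curve that stays high for longer, and hence to a higher set point.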

For Wikipedia the set point is moderately high. I’m pretty sure that if one took a section out of the Feynman Lectures and put it up on Wikipedia, it would get worse. On the other hand, if the community started with a blank page on physics, it’d demonstrably get a lot better.

For other wiki communities the set point is different. I’m a fan of the TV show Lost, and there is an amazing fan-created wiki about the show called Lostpedia. The set point of Lostpedia is quite a bit higher than the Wikipedia set point.

The idea of a wiki set point is obviously imprecise. Indeed, any idea that deals with quality judgements and community action necessarily will be. The caveats that need to be applied include: the set point will be different for different articles; even for a given article it will vary over time as contributors change; what does it mean to speak about the quality of an article, anyway; surely it makes more sense to talk about a set quality range, rather than a single point; and so on.

Despite these caveats, I think the set point is a useful way of thinking about wikis, and stimulates many useful questions. What types of wiki community or wiki design increase the set point? What types decrease it? How high can the set point go? How could we design the wiki software and community so that the set point is above the level achievable by any single human being?
