The dangers of deliberation: suppressing rather than amplifying hidden knowledge

Continuing yesterday’s theme, Cass Sunstein’s “Infotopia” provides a remarkable example of what can go wrong when groups collaborate. From an experiment in simulated elections:

Information was parceled out to group members about three candidates for political office; if the information had been properly pooled, the group would have selected Candidate A, who was clearly the best choice. The experiment involved two conditions. In the first condition, each member of the four-person group was given most of the relevant information (66 percent of the information about each candidate). In that condition, 67 percent of the group members favored Candidate A before discussion, and 85 percent after discussion…

In the second condition, by contrast, the information that favored Candidate A was parceled out to various members of the group, rather than shared by all. As this condition was designed, the shared information favored the two unambiguously inferior candidates, B and C. If the unshared information emerged through discussion and was taken seriously, Candidate A would be chosen.

In that condition, fewer than 25 percent of group members favored Candidate A before discussions, a natural product of the initial distribution of information. But (and this is the key result) the number of people favoring Candidate A actually fell after discussion, simply because the shared information had disproportionate influence on group members. In other words, groups did worse, not better, than individuals when the key information was not initially shared by group members. The commonly held information was far more influential than the unshared information, to the detriment of the group’s ultimate decision.
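To see the mechanism at work, here’s a toy sketch (my own illustration, not Sunstein’s actual protocol; the attribute counts and the repetition-weighting rule are invented for the example) of how information held by everyone can swamp better information held by only one member each:

```python
# Toy illustration (NOT Sunstein's protocol): why repeated, shared
# information can swamp better information held by a single member.
# All numbers below are invented for the example.

GROUP_SIZE = 4

# Count of positive attributes per candidate; A is objectively best.
POSITIVES = {"A": 8, "B": 4, "C": 4}

# Hidden-profile setup: B's and C's positives are known to everyone,
# while A's 8 positives are split up so each is held by one person.
HOLDERS = {"A": 1, "B": GROUP_SIZE, "C": GROUP_SIZE}

def pooled(candidate):
    # Proper pooling: every distinct item counts once.
    return POSITIVES[candidate]

def repetition_weighted(candidate):
    # Discussion bias: an item's influence scales with how many
    # members hold it and can repeat it.
    return POSITIVES[candidate] * HOLDERS[candidate]

for score in (pooled, repetition_weighted):
    print(score.__name__, {c: score(c) for c in POSITIVES})
# pooled              -> A: 8, B: 4,  C: 4   (A wins)
# repetition_weighted -> A: 8, B: 16, C: 16  (B and C swamp A)
```

Under proper pooling, A’s eight positives beat B’s and C’s four each; but once an item’s influence scales with how many people can repeat it, the shared items dominate, just as in the experiment.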

Sunstein gives many similar examples where groups perform worse than individuals, all of them failures of collective cognition. Of course, the examples are of limited import, because he’s talking about groups using rather unstructured processes. With good facilitation, good process, or simply good tools, many of these collective cognitive biases can likely be avoided.

Why augmenting collective intelligence is easier than augmenting individual intelligence

When I first heard about intelligence augmentation, I thought the idea was amazing – you could outsource cognitive tasks to your computer, effectively making you smarter.

At first, it’d be mundane stuff, multiplying numbers on a calculator, things like that. But as computers got more powerful, it’d be possible to outsource progressively more complex and interesting tasks. You’d be getting smarter, along with the progress of technology.

I heard about this in the early 1990s, before the web had taken off. At the time, the way I (and, I suspect, many other people who’d heard of it) looked at intelligence augmentation was primarily as a way of augmenting individual intelligence.

The way things have turned out, though, it seems to be a lot easier to augment collective intelligence than it is to augment individual intelligence. At the least, progress on augmenting collective intelligence has been spectacular over the past 15 years, while progress on augmenting individual intelligence has been slow. If I have to choose between giving up my calculator (or any other individual tool) and giving up Google, the calculator will be in the trash.

Perhaps part of the reason for my mistake was familiarity. For most of us, especially circa 1990, the intelligence of individuals was an everyday concept, but collective intelligence was, and to some extent remains, exotic.

Of course, with hindsight it’s not so strange that augmenting collective intelligence is easier than augmenting individual intelligence.

Collective intelligence requires us to externalize our thoughts, expressing them in symbols, so they can be communicated to others. This has the coincidental effect of making those thoughts (or, at least, their expression) accessible to computers in a way that our internal brain state is not. The more communication is taking place, the more opportunity there is for software to contribute.

Google demonstrates this vividly, extracting valuable information from the links between webpages, information that can then be fed back to make us smarter. I’ve long thought it’d be fun to do a controlled experiment in which two groups of people are given an IQ test, with the only difference between the groups being that one has access to the web and the other does not.
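For concreteness, here’s a minimal power-iteration sketch of the PageRank idea behind Google’s original link analysis (the toy graph and the function are my own illustration; real systems handle dangling links, convergence, and scale far more carefully):

```python
# Minimal PageRank sketch: a page is important if important pages
# link to it. Repeatedly redistribute each page's rank along its
# outgoing links until the scores settle.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            targets = outlinks or pages  # dangling page: spread evenly
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(web))  # "c", the most-linked-to page, ranks highest
```

The point of the sketch is that the algorithm consumes nothing but the link choices individual authors made for their own reasons, and distills them into a collective judgment of importance.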

You may object that I’m using the term “augmentation of collective intelligence” in a funny way. After all, Google is used by just a single person at a time. Of course, I’m using the term broadly, to mean tools for intelligence augmentation that build in an essential way upon collective intelligence. Maybe a more literal description would be “collective augmentation of intelligence”, or something similar. But the argument I’ve made holds equally in the narrow sense of literally augmenting collective intelligence, as shown by examples such as Kasparov versus the World, the Matlab programming competition, open source biology, Linux, and Wikipedia.

The unexpected cognitive benefits of linklogging

Since starting my twice-weekly linklog, I’ve noticed an unexpected benefit: quite a bit of what I post gets retained in my long-term memory. Much more, I suspect, than I would have retained otherwise. Arguably, this isn’t always a good thing, but on the whole it’s pleasing.