The Future of Science

by Michael Nielsen on July 17, 2008

Building a better collective memory

In your high school science classes you may have learnt Hooke’s law, the law of physics which relates a spring’s extension to how hard you pull on it. What your high school science teacher probably didn’t tell you is that when Robert Hooke discovered his law in 1676, he published it as an anagram, “ceiiinosssttuv”, which he revealed two years later as the Latin “ut tensio, sic vis”, meaning “as the extension, so the force”. This ensured that if someone else made the same discovery, Hooke could reveal the anagram and claim priority, thus buying time in which he alone could build upon the discovery.

Hooke was not unusual. Many great scientists of the age, including Leonardo, Galileo and Huygens, used anagrams or ciphers for similar purposes. The Newton-Leibniz controversy over who invented calculus occurred because Newton claimed to have invented calculus in the 1660s and 1670s, but didn’t publish until 1693. In the meantime, Leibniz developed and published his own version of calculus. Imagine modern biology if the human genome had been announced as an anagram, or if publication had been delayed thirty years.

Why were Hooke, Newton, and their contemporaries so secretive? In fact, up until this time discoveries were routinely kept secret. Alchemists intent on converting lead into gold or finding the secret of eternal youth would often take their discoveries with them to their graves. A secretive culture of discovery was a natural consequence of a society in which there was often little personal gain in sharing discoveries.

The great scientific advances in the time of Hooke and Newton motivated wealthy patrons such as the government to begin subsidizing science as a profession. Much of the motivation came from the public benefit delivered by scientific discovery, and that benefit was strongest if discoveries were shared. The result was a scientific culture which to this day rewards the sharing of discoveries with jobs and prestige for the discoverer.

This cultural transition was just beginning in the time of Hooke and Newton, but a little over a century later the great physicist Michael Faraday could advise a younger colleague to “Work. Finish. Publish.” The culture of science had changed so that a discovery not published in a scientific journal was not truly complete. Today, when a scientist applies for a job, the most important part of the application is their published scientific papers. But in 1662, when Hooke applied for the job of Curator of Experiments at the Royal Society, he certainly was not asked for such a record, because the first scientific journals weren’t created until three years later, in 1665.

The adoption and growth of the scientific journal system has created a body of shared knowledge for our civilization, a collective long-term memory which is the basis for much of human progress. This system has changed surprisingly little in the last 300 years. The internet offers us the first major opportunity to improve this collective long-term memory, and to create a collective short-term working memory, a conversational commons for the rapid collaborative development of ideas. The process of scientific discovery – how we do science – will change more over the next 20 years than in the past 300 years.

This change will not be achieved without great effort. From the outside, scientists currently appear puzzlingly slow to adopt many online tools. We’ll see that this is a consequence of some major barriers deeply embedded within the culture of science. The first part of this essay is about these barriers, and how to overcome them. The second part of the essay illustrates these ideas, with a proposal for an online collaboration market where scientists can rapidly outsource scientific problems.

Part I: Toward a more open scientific culture

How can the internet benefit science?

How can the internet improve the way we do science? There are two useful ways to answer this question. The first is to view online tools as a way of expanding the range of scientific knowledge that can be shared with the world:

[Figure omitted]

Many online tools do just this, and some have had a major impact on how scientists work. Two successful examples are the physics preprint arXiv, which lets physicists share preprints of their papers without the months-long delay typical of a conventional journal, and GenBank, an online database where biologists can deposit and search for DNA sequences. But most online tools of this type remain niche applications, often despite the fact that many scientists believe broad adoption would be valuable. Two examples are the Journal of Visualized Experiments, which lets scientists upload videos which show how their experiments work, and open notebook science, as practiced by scientists like Jean-Claude Bradley and Garrett Lisi, who expose their working notes to the world. In the coming years we’ll see a proliferation of tools of this type, each geared to sharing different types of knowledge:

[Figure omitted]

There is a second and more radical way of thinking about how the internet can change science, and that is through a change to the process and scale of creative collaboration itself, a change enabled by social software such as wikis, online forums, and their descendants.

There are already many well-known but still striking instances of this change in parts of culture outside of science [1]. For example, in 1991 an unknown Finnish student named Linus Torvalds posted a short note in an online forum, asking for help extending a toy operating system he’d programmed in his spare time; a volunteer army responded by assembling Linux, one of the most complex engineering artifacts ever constructed. In 2001 another young unknown named Larry Sanger posted a short note asking for help building an online encyclopedia; a volunteer army responded by assembling the world’s most comprehensive encyclopedia. In 1999, Garry Kasparov, the greatest chess player of all time, played and eventually won a game of chess against a “World Team” which decided its moves by the votes of thousands of chess players, many of them rank amateurs; instead of the easy victory he expected, he got the most challenging game of his career, a game he called “the greatest game in the history of chess”.

These examples are not curiosities, or special cases; they are just the leading edge of the greatest change in the creative process since the invention of writing.

Science is an example par excellence of creative collaboration, yet scientific collaboration still takes place mainly via face-to-face meetings. With the exception of email, few of the new social tools have been broadly adopted by scientists, even though it is these tools which have the greatest potential to improve how science is done.

Why have scientists been so slow to adopt these remarkable tools? Is it simply that they are too conservative in their habits, or that the new tools are no better than what we already have? Both these glib answers are wrong. We’ll resolve this puzzle by looking in detail at two examples where excellent online tools have failed to be adopted by scientists. What we’ll find is that there are major cultural barriers which are preventing scientists from getting involved, and so slowing down the progress of science.

A failure of science online: online comment sites

Like many people, when I’m considering buying a book or electronic gadget, I often first browse the reviews at amazon.com. Inspired by the success of amazon.com and similar sites, several organizations have created comment sites where scientists can share their opinions of scientific papers. Perhaps the best-known was Nature’s 2006 trial of open commentary on papers undergoing peer review at Nature. The trial was not a success. Nature’s final report terminating the trial explained:

There was a significant level of expressed interest in open peer review… A small majority of those authors who did participate received comments, but typically very few, despite significant web traffic. Most comments were not technically substantive. Feedback suggests that there is a marked reluctance among researchers to offer open comments.

The Nature trial is just one of many attempts at comment sites for scientists. The earliest example I’m aware of is the Quick Reviews site, built in 1997, and discontinued in 1998. Physics Comments was built a few years later, and discontinued in 2006. A more recent site, Science Advisor, is still active, but has more members (1139) than reviews (1008). It seems that people want to read reviews of scientific papers, but not write them [2].

The problem all these sites have is that while thoughtful commentary on scientific papers is certainly useful for other scientists, there are few incentives for people to write such comments. Why write a comment when you could be doing something more “useful”, like writing a paper or a grant? Furthermore, if you publicly criticize someone’s paper, there’s a chance that that person may be an anonymous referee in a position to scuttle your next paper or grant application.

To grasp the mindset here, you need to understand the monklike intensity that ambitious young scientists bring to the pursuit of scientific publications and grants. To get a position at a major university, the most important thing is an impressive record of scientific papers. These papers bring in the research grants and letters of recommendation necessary to be hired. Competition for positions is so fierce that work weeks of 80 hours or more are common. The pace relaxes after tenure, but continued grant support still requires a strong work ethic. It’s no wonder people have little inclination to contribute to online comment sites.

The contrast between the science comment sites and the success of the amazon.com reviews is stark. To pick just one example, you’ll find approximately 1500 reviews of Pokemon products at amazon.com, more than the total number of reviews on all the scientific comment sites I described above. The disincentives facing scientists have led to a ludicrous situation where popular culture is open enough that people feel comfortable writing Pokemon reviews, yet scientific culture is so closed that people will not publicly share their opinions of scientific papers. Some people find this contrast curious or amusing; I believe it signifies something seriously amiss with science, something we need to understand and change.

A failure of science online: Wikipedia

Wikipedia is a second example where scientists have missed an opportunity to innovate online. Wikipedia has a vision statement to warm a scientist’s heart: “Imagine a world in which every single human being can freely share in the sum of all knowledge. That’s our commitment.” You might guess Wikipedia was started by scientists eager to collect all of human knowledge into a single source. In fact, Wikipedia’s founder, Jimmy Wales, had a background in finance and as a web developer for an “erotic search engine”, not in science. In the early days few established scientists were involved. Just as with the scientific comment sites, contributing aroused suspicion from colleagues that you were wasting time which could have been spent writing papers and grants.

Some scientists will object that contributing to Wikipedia isn’t really science. And, of course, it’s not if you take a narrow view of what science is, if you’ve bought into the current game, and take it for granted that science is only about publishing in specialized scientific journals. But if you take a broader view, if you believe science is about discovering how the world works, and sharing that understanding with the rest of humanity, then the lack of early scientific support for Wikipedia looks like an opportunity lost. Nowadays, Wikipedia’s success has to some extent legitimized contribution within the scientific community. But how strange that the modern day Library of Alexandria had to come from outside academia.

The challenge: achieving extreme openness in science

These failures of science online are all examples where scientists show a surprising reluctance to share knowledge that could be useful to others. This is ironic, for the value of cultural openness was understood centuries ago by many of the founders of modern science; indeed, the journal system is perhaps the most open system for the transmission of knowledge that could be built with 17th century media. The adoption of the journal system was achieved by subsidizing scientists who published their discoveries in journals. This same subsidy now inhibits the adoption of more effective technologies, because it continues to incentivize scientists to share their work in conventional journals, and not in more modern media.

The situation is analogous to the government subsidies for corn-based ethanol in the United States. In the early days these subsidies seemed to many people a good idea, encouraging the use of what they hoped would be a more efficient fuel. But we now understand that there are more energy-efficient alternatives, such as cellulosic ethanol made from grasses. Unfortunately, the subsidies for corn-based ethanol remain in place, and now inhibit the adoption of the more efficient technologies.

We should aim to create an open scientific culture where as much information as possible is moved out of people’s heads and labs, onto the network, and into tools which can help us structure and filter the information. This means everything – data, scientific opinions, questions, ideas, folk knowledge, workflows, and everything else – the works. Information not on the network can’t do any good.

Ideally, we’ll achieve a kind of extreme openness. This means: making many more types of content available than just scientific papers; allowing creative reuse and modification of existing work through more open licensing and community norms; making all information not just human readable but also machine readable; providing open APIs to enable the building of additional services on top of the scientific literature, and possibly even multiple layers of increasingly powerful services. Such extreme openness is the ultimate expression of the idea that others may build upon and extend the work of individual scientists in ways they themselves would never have conceived.
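To make “open APIs on top of the scientific literature” a little more concrete, here is a small Python sketch that builds a query against arXiv’s public Atom API, one of the few such open interfaces that already exists. The search terms and result count below are arbitrary illustrative choices:

```python
# Build a query URL for arXiv's public API, an existing example of an
# open, machine-readable interface to the scientific literature.
from urllib.parse import urlencode

def arxiv_query_url(terms, max_results=5):
    """Return a URL asking arXiv's Atom API for papers matching `terms`."""
    params = {
        "search_query": f"all:{terms}",  # search across all metadata fields
        "start": 0,
        "max_results": max_results,
    }
    return "http://export.arxiv.org/api/query?" + urlencode(params)

print(arxiv_query_url("open science"))
```

Fetching that URL returns machine-readable Atom XML describing matching papers, exactly the kind of raw material on which layered third-party services can be built.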

The challenge of achieving a more open culture is also being confronted in popular culture. People such as Richard Stallman, Lawrence Lessig, Yochai Benkler, Cory Doctorow, and many others have described the benefits openness brings in a networked world, and developed tools such as Creative Commons licensing and free and open source software to help promote a more open culture, and fight the forces inhibiting it. As we have seen, however, science faces a unique set of forces that inhibit open culture – the centuries-old subsidy of old ways of sharing knowledge – and this requires a new understanding of how to overcome those forces.

How can we open up scientific culture?

To create an open scientific culture that embraces new online tools, two challenging tasks must be achieved: (1) build superb online tools; and (2) cause the cultural changes necessary for those tools to be accepted. The necessity of accomplishing both these tasks is obvious, yet projects in online science often focus mostly on building tools, with cultural change an afterthought. This is a mistake, for the tools are only part of the overall picture. It took just a few years for the first scientific journals (a tool) to be developed, but many decades of cultural change before journal publication was accepted as the gold standard for judging scientific contributions.

None of this is to discount the challenge of building superb online tools. To develop such tools requires a rare combination of strong design and technical skills, and a deep understanding of how science works. The difficulty is compounded because the people who best understand how science works are scientists themselves, yet building such tools is not something scientists are typically encouraged or well suited to do. Scientific institutions reward scientists for making discoveries within the existing system of discovery; there is little place for people working to change that system. A technologically-challenged Head of Department is unlikely to look kindly on a scientist who suggests that instead of writing papers they’d like to spend their research time developing general-purpose tools to improve how science is done.

What about the second task, achieving cultural change? As any revolutionary can attest, that’s a tough order. Let me describe two strategies that have been successful in the past, and that offer a template for future success.

The first is a top-down strategy that has been successfully used by the open access (OA) movement [3]. The goal of the OA movement is to make scientific research freely available online to everyone in the world. It’s an inspiring goal, and the OA movement has achieved some amazing successes. Perhaps most notably, in April 2008 the US National Institutes of Health (NIH) mandated that every paper written with the support of their grants must eventually be made open access. The NIH is the world’s largest grant agency; this decision is the scientific equivalent of successfully storming the Bastille.

The second strategy is bottom-up. It is for the people building the new online tools to also develop and boldly evangelize ways of measuring the contributions made with those tools. To understand what this means, imagine you’re a scientist sitting on a hiring committee that’s deciding whether to hire a particular candidate. The candidate’s curriculum vitae reports that they’ve helped build an open science wiki, and that they write a blog. Unfortunately, the committee has no easy way of understanding the significance of these contributions, since as yet there are no broadly accepted metrics for assessing them. The natural consequence is that such contributions are typically undervalued.

To make the challenge concrete, ask yourself what it would take for a description of the contribution made through blogging to be reported by a scientist on their curriculum vitae. How could you measure the different sorts of contributions a scientist can make on a blog – outreach, education, and research? These are not easy questions to answer. Yet they must be answered before scientific blogging will be accepted as a valuable professional scientific contribution.

A success story: the arXiv and SPIRES

Let’s look at an example illustrating the bottom-up strategy in action. The example is the well-known physics preprint arXiv. Since 1991 physicists have been uploading their papers to the arXiv, often at about the same time as they submit to a journal. The papers are made available within hours for anyone to read. The arXiv is not refereed, although a quick check is done by arXiv moderators to remove crank submissions. The arXiv is an excellent and widely-used tool, with more than half of all new papers in physics appearing there first. Many physicists start their day by seeing what’s appeared on the arXiv overnight. Thus, the arXiv exemplifies the first step for achieving a more open culture: it is a superb tool.

Not long after the arXiv began, a citation tracking service called SPIRES decided to extend its service to include both arXiv papers and conventional journal articles. SPIRES specializes in particle physics, and as a result it’s now possible to search on a particle physicist’s name and see how frequently all their papers, including arXiv preprints, have been cited by other physicists.

SPIRES has been run since 1974 by one of the most respected and highly visible institutions in particle physics, the Stanford Linear Accelerator Center (SLAC). The effort SLAC has put into developing SPIRES means that their metrics of citation impact are both credible and widely used by the particle physics community. It’s now possible for a particle physicist to convincingly demonstrate that their work is having a high impact, even if it has only been submitted to the arXiv, and has not been published in a conventional scientific journal. When physics hiring committees meet to evaluate candidates in particle physics, people often have their laptops out, examining and comparing the SPIRES citation records of candidates.

The arXiv and SPIRES have not stopped particle physicists from publishing in peer-reviewed journals. When you’re applying for jobs, or up for tenure, every ounce of ammunition helps, especially when the evaluating committee may contain someone from another field who is reluctant to take the SPIRES citation data seriously. Still, particle physicists have become noticeably more relaxed about publication, and it’s not uncommon to see a CV which includes preprints that haven’t been published in conventional journals. This is an example of the sort of cultural change that can be achieved using the bottom-up strategy. In the next part, we’ll see how far these ideas can be pushed in pursuit of new tools for collaboration.

Part II: Collaboration Markets: building a collective working memory for science

The problem of collaboration

Even Albert Einstein needed help occasionally. Einstein’s greatest contribution to science was his theory of gravity, often called the general theory of relativity. He worked on and off on this theory between 1907 and 1915, often running into great difficulties. By 1912, he had come to the astonishing conclusion that our ordinary conception of geometry, in which the angles of a triangle add up to 180 degrees, is only approximately correct, and a new kind of geometry is needed to correctly describe space and time. This was a great surprise to Einstein, and also a great challenge, since such geometric ideas were outside his expertise. Fortunately for Einstein and for posterity, he described his difficulties to a mathematician friend, Marcel Grossmann. Grossmann said that many of the ideas Einstein needed had already been developed by the mathematician Bernhard Riemann. It took Einstein three more years of work, but Grossmann was right, and this was a critical point in the development of general relativity.

Einstein’s conundrum is familiar to any scientist. When doing research, subproblems constantly arise in unexpected areas. No-one can be expert in all those areas. Most of us instead stumble along, picking up the skills necessary to make progress towards our larger goals, grateful when the zeitgeist of our research occasionally throws up a subproblem in which we are already truly expert. Like Einstein, we have a small group of trusted collaborators with whom we exchange questions and ideas when we are stuck. Unfortunately, most of the time even our collaborators aren’t that much help. They may point us in the right direction, but rarely do they have exactly the expertise we need. Is it possible to scale up this conversational model, and build an online collaboration market [4] to exchange questions and ideas, a sort of collective working memory for the scientific community?

It is natural to be skeptical of this idea, but an extremely demanding creative culture already exists which shows that such a collaboration market is feasible – the culture of free and open source software. Scientists browsing for the first time through the development forums of open source programming projects are often shocked at the high level of the discussion. They expect amateur hour at the local karaoke bar; instead, they find professional programmers routinely sharing their questions and ideas, helping solve each other’s problems, often exerting great intellectual effort and ingenuity. Rather than hoarding their questions and ideas, as scientists do for fear of being scooped, the programmers revel in swapping them. Some of the world’s best programmers hang out in these forums, swapping tips, answering questions, and participating in the conversation.

Innocentive

I’ll now describe two embryonic examples which suggest that collaboration markets for science may be valuable. The first is Innocentive, a service that allows companies like Eli Lilly and Procter & Gamble to pose Challenges over the internet, scientific research problems with associated prizes for their solution, often many thousands of dollars. For example, one of the Challenges currently on Innocentive asks participants to find a biomarker for motor neuron disease, with a one million dollar prize. If you register for the site, it’s possible to obtain a detailed description of the Challenge requirements, and attempt to win the prize. More than 140,000 people from 175 countries have registered, and prizes for more than 100 Challenges have been awarded.

Innocentive is an example of how a market in scientific problems and solutions can be established. Of course, it has shortcomings as a model for collaboration in basic research. Only a small number of companies are able to pose Challenges, and they may do so only after a lengthy vetting process. Innocentive’s business model is aimed firmly at industrial rather than basic research, and so the incentives revolve around money and intellectual property, rather than reputation and citation. It’s certainly not a rapid-fire conversational tool like the programming forums; one does not wake up in the morning with a problem in mind, and post it to Innocentive, hoping for help with a quick solution.

FriendFeed

FriendFeed is a much more fluid tool which is being used by scientists as a conversational medium to discuss scientific research problems. What FriendFeed allows users to do is set up what’s called a lifestream. As an example, my lifestream is set up to automatically aggregate pretty much everything I put on the web, including my blog posts, del.icio.us links, YouTube videos, and several other types of content:


[Screenshot omitted]

I also subscribe to a list of about one hundred “friends” (a few are listed on the right in the screenshot above) whose lifestreams I can see aggregated into one giant river of information – all their Flickr photos, blog posts, and so on. These people aren’t necessarily real friends – I’m not personally acquainted with my “friend” Barack Obama – but it’s a fantastic way of tracking a high volume of activity from a large number of people.

As part of the lifestream, FriendFeed allows messages to be passed back and forth in a lightweight way, so communities can form around common interests and shared friendships. In April 2008, Cameron Neylon, a chemist from the University of Southampton, used FriendFeed messaging to post a request for assistance in building molecular models. Pretty quickly Pawel Szczesny replied, and said he could help out. A scientific collaboration was now underway. The original request and discussion is shown here:

[Screenshot omitted]

FriendFeed is a great service, but it suffers from many of the same problems that afflict the comment sites and Wikipedia. Lacking widely accepted metrics to measure contribution, scientists are unlikely to adopt FriendFeed en masse as a medium for scientific collaboration. And without widespread adoption, the utility of FriendFeed for scientific collaboration will remain relatively low.

The economics of collaboration

How much is lost due to inefficiencies in the current system of collaboration? To answer this question, imagine a scientist named Alice. Like most scientists, many of Alice’s research projects spontaneously give rise to problems in areas in which she isn’t expert. She juggles hundreds or thousands of such problems, re-examining each occasionally, and looking to make progress, but knowing that only rarely is she the person best suited to solve any given problem.

Suppose that for a particular problem, Alice estimates that it would take her 4-5 weeks to acquire the required expertise and solve the problem. That’s a long time, and so the problem is on the backburner. Unbeknownst to Alice, though, there is another scientist in another part of the world, Bob, who has just the skills to solve the problem in less than a day. This is not at all uncommon. Quite the contrary; my experience is that this is the usual situation. Consider the example of Grossmann, who saved Einstein what might otherwise have been years of extra work.

Do Alice and Bob exchange questions and ideas, and start working towards a solution to Alice’s problem? Unfortunately, nine times out of ten they never even meet, or if they meet, they just exchange small talk. It’s an opportunity lost for a mutually beneficial trade, a loss that may cost weeks of work for Alice. It’s also a great loss for the society that bears the cost of doing science, a loss that must run to billions of dollars each year in total. Expert attention, the ultimate scarce resource in science, is very inefficiently allocated under existing practices for collaboration.
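To see why “billions of dollars” is plausible, here is a back-of-envelope calculation. Every input below is a deliberately conservative assumption, not a measured figure:

```python
# Rough estimate of the yearly cost of misallocated expert attention.
# All four inputs are assumptions, chosen to be on the low side.
researchers = 1_000_000      # full-time researchers worldwide (a low estimate)
stuck_problems_per_year = 1  # problems per researcher that someone else
                             # could have solved far faster
weeks_lost = 3               # extra weeks spent without the right expert
cost_per_week = 2_000        # salary plus overhead per researcher-week, USD

annual_loss = researchers * stuck_problems_per_year * weeks_lost * cost_per_week
print(f"Estimated loss: ${annual_loss / 1e9:.0f} billion per year")
```

Even with these deliberately low numbers the total runs to billions of dollars; more realistic inputs only push it higher.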

An efficient collaboration market would enable Alice and Bob to find this common interest, and exchange their know-how, in much the same way eBay and craigslist enable people to exchange goods and services. However, in order for this to be possible, a great deal of mutual trust is required. Without such trust, there’s no way Alice will be willing to advertise her questions to the entire community. The danger of free riders who will take advantage for their own benefit (and to Alice’s detriment) is just too high.

In science, we’re so used to this situation that we take it for granted. But let’s compare to the apparently very different problem of buying shoes. Alice walks into a shoestore, with some money. Alice wants shoes more than she wants to keep her money, but Bob the shoestore owner wants the money more than he wants the shoes. As a result, Bob hands over the shoes, Alice hands over the money, and everyone walks away happier after just ten minutes. This rapid transaction takes place because there is a trust infrastructure of laws and enforcement in place that ensures that if either party cheats, they are likely to be caught and punished.

If shoestores operated like scientists trading ideas, first Alice and Bob would need to get to know one another, maybe go for a few beers in a nearby bar. Only then would Alice finally say “you know, I’m looking for some shoes”. After a pause, and a few more beers, Bob would say “You know what, I just happen to have some shoes I’m looking to sell”. Every working scientist recognizes this dance; I know scientists who worry less about selling their house than they do about exchanging scientific information.

In economics, it’s been understood for hundreds of years that wealth is created when we lower barriers to trade, provided there is a trust infrastructure of laws and enforcement to prevent cheating and ensure trade is uncoerced. The basic idea, which goes back to David Ricardo in 1817, is to concentrate on areas where we have a comparative advantage, and to avoid areas where we have a comparative disadvantage.

Although Ricardo’s work was in economics, his analysis works equally well for trade in ideas. Indeed, even if Alice were far more competent than Bob in every area, Ricardo’s analysis shows that both Alice and Bob benefit if Alice concentrates on the areas where she has the greatest comparative advantage, and Bob on the areas where he has the least comparative disadvantage. Unfortunately, science currently lacks the trust infrastructure and incentives necessary for such free, unrestricted trade of questions and ideas.
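Ricardo’s point can be made concrete with a toy calculation. The numbers below are invented for illustration: Alice is faster than Bob at both kinds of subproblem, yet both still save time by specializing and trading:

```python
# Days each scientist needs per subproblem type (invented numbers;
# note that Alice has an absolute advantage at both).
days = {
    "Alice": {"theory": 1, "coding": 2},
    "Bob":   {"theory": 6, "coding": 3},
}

# Each scientist's project needs one theory and one coding subproblem solved.
# Without trade, each solves both alone:
solo = {name: d["theory"] + d["coding"] for name, d in days.items()}

# With trade, Alice (comparative advantage in theory) solves both theory
# subproblems, Bob solves both coding subproblems, and they swap solutions:
trade = {
    "Alice": 2 * days["Alice"]["theory"],
    "Bob":   2 * days["Bob"]["coding"],
}

for name in days:
    print(f"{name}: {solo[name]} days alone, {trade[name]} days with trade")
```

Alice drops from 3 days to 2, and Bob from 9 days to 6: both are better off, even though Alice could outdo Bob at everything.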

An ideal collaboration market will enable just such an exchange of questions and ideas. It will bake in metrics of contribution so participants can demonstrate the impact their work is having. Contributions will be archived, timestamped, and signed, so it’s clear who said what, and when. Combined with high quality filtering and search tools, the result will be an open culture of trust which gives scientists a real incentive to outsource problems, and contribute in areas where they have a great comparative advantage. This will change science.
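To make the “archived, timestamped, and signed” requirement concrete, here is a minimal sketch of such a contribution log in Python. It is a toy: entries are chained together with hashes so that earlier contributions can’t be silently rewritten, but a real system would use public-key signatures and distributed archiving rather than bare digests:

```python
import hashlib
import json
import time

def add_contribution(log, author, text):
    """Append a timestamped entry chained to its predecessor's digest."""
    prev_digest = log[-1]["digest"] if log else ""
    entry = {
        "author": author,
        "text": text,
        "timestamp": time.time(),
        "prev": prev_digest,
    }
    # The digest covers the whole entry, including the previous entry's
    # digest, so altering any earlier entry invalidates every later one.
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["digest"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

log = []
add_contribution(log, "alice", "Does anyone know a closed form for this sum?")
add_contribution(log, "bob", "Yes - it telescopes; sketch to follow.")
print(len(log), "contributions, latest by", log[-1]["author"])
```

With records like these, it is unambiguous who contributed what, and when, which is exactly the raw material that metrics of contribution need.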

Further reading

The ideas explored in this essay are developed at much greater length in my book Reinventing Discovery: The New Era of Networked Science.

Subscribe to my blog here.

Acknowledgments

Based on a keynote talk by Michael Nielsen at the New Communication Channels for Biology workshop, San Diego, June 26 and 27, 2008. Thanks to Krishna Subramanian and John Wooley for organizing the workshop, and all the participants for an enjoyable event. Thanks to Eva Amsen, Jen Dodd, Danielle Fong, Peter Rohde, Ben Toner, and Christian Weedbrook for providing feedback that greatly improved early drafts of this essay.

Footnotes

[1] Clay Shirky’s “Here Comes Everybody” is an excellent book that contains much of interest on new ways of collaborating.

[2] An ongoing experiment which incorporates online commentary and many other innovative features is PLoS ONE. It’s too early to tell how successful its commentary will be.

[3] I strongly recommend Peter Suber’s Open Access News as a superb resource on all things open access.

[4] Shirley Wu and Cameron Neylon have stimulating blog posts where they propose ideas closely related to collaboration markets.

From → Social software

159 Comments
  1. Michael,

    You missed one way of effectively collaborating in a competing environment: The reproducible research “movement”: A definition of which is here:
    http://lcavwww.epfl.ch/reproducible_research/

    In short, at least in some advanced engineering and science, papers are OK but they just show scholarship on the part of the authors whereas sharing codes and data that allowed the production of the graphs in the papers is much more valuable to accelerate understanding of the concepts and their subsequent re-utilization.

    In the subject I am currently interested in (Compressed Sensing) and blogging about, most authors are following this route, and this is why I can make real-time reviews of what is happening in the field (i.e. I can test their algorithms and results):

    http://igorcarron.googlepages.com/cs

    the attendant blog is here:

    http://nuit-blanche.blogspot.com/search/label/CS

    For additional data that may be of interest:

    http://nuit-blanche.blogspot.com/2008/06/cs-community-top-ten-questions-you.html

    http://nuit-blanche.blogspot.com/2008/04/compressed-sensing-community-part-deux.html

    http://nuit-blanche.blogspot.com/2008/03/compressed-sensing-who-are-you-and-poll.html

    http://nuit-blanche.blogspot.com/2008/06/cs-nuit-blanche-effect-more-videos.html

    Cheers,

    Igor.

  2. Interesting post about openness and collaboration. Regarding other aspects of the future of science, I wrote an entry on the future of scientific methods – http://amundblog.blogspot.com/2008/07/rebirth-of-confounding-and-theory-in.html

  3. Dear Michael, This is a very interesting and thought-provoking post that I enjoyed reading very much. I find the question, for example, of whether blog discussions can be useful for scientific progress (in the narrow sense of leading to new discoveries) very interesting. Are you aware of substantial progress (say in TCS/math) that came from blog discussions?

    (I also think that even with these new horizons we should keep the cool, objective and skeptical nature of academia and science. E.g., the superlatives regarding Kasparov's game against the crowd seem too good to be true.)

  4. Hi Gil – Thanks! In answer to your question, I’m certainly aware of many instances of minor progress coming from blogging. E.g., I recall that one of Scott Aaronson’s posts led someone else to solve an open problem, which was subsequently published. But I don’t know of any examples of really major progress made in this medium.

    In my opinion, mathematics / theoretical CS is perhaps the field most likely to have this happen. With people like Terry Tao, Luca Trevisan, yourself, and others blogging in a serious research-oriented fashion, it seems like a matter of time.

    Let me turn the question around: are you aware of any substantial progress that came from blog discussions?

    Regarding the Kasparov game – the superlatives I used were just quotes of Kasparov himself. Not only did he say it was the greatest game in history, he also said that he expended more energy on that game than on any other in his career. I gather that other chess experts don’t rate the game quite that highly, but obviously Kasparov has a lot of credibility.

  5. Thank you for this highly interesting essay! I completely agree with your assessment of the current situation, and you hint at potentially promising ways of changing the pretty idiosyncratic status quo.
    However, what makes the scientific community different from the other communities you mentioned (Amazon, open source, etc.) is that we as scientists have high stakes in our ideas and plans. In principle, our ideas are what puts food on the table. And whenever the stakes are that high, secrecy pays off – and the science community is not alone in this. The shoe-salesman Bob may have high stakes in his business, but for Alice it is probably just one of many pairs of shoes. Analogies only go so far, and the debate about, for instance, music sharing, or intellectual property in general, appears to me to be equally pertinent.
    This means that right now there are incentives NOT to share ideas and criticism. These incentives, as you rightly point out, are overcome by social means: small talk over beers in a bar, etc. You also point to some potentially promising technologies to facilitate sharing and bring about the cultural change that clearly needs to happen. But I think you only alluded to a potential way of providing stronger incentives to share than the current incentives not to share. At PLoS One we have often discussed how to get scientists to comment on papers. We have only been able to come up with very few ideas so far. One is journal clubs, where individuals can hide behind a group moniker, which is basically the same as anonymity. Another is to have a public profile where comments and ideas, blog posts and peer reviews are listed. In this way, any scientist can build a reputation which can be quantified. However, for this, the technology exists only partially and is spread over thousands of journals and services. Maybe most importantly, a definitive internet ID standard, while currently being considered, is still lacking.
    What other incentives to openly share and critique could one think of? How can the clear incentives not to share be overcome?

  6. Hi Igor,

    Thanks for your comment.

    I wanted to keep the essay short, so wasn’t all that explicit about the many types of knowledge that scientists can usefully share. I certainly agree wholeheartedly that sharing enough information (code, data, etc) to make research truly reproducible is very important.

    The question “What tacit knowledge do scientists currently have that they don’t explicitly share?” is one of the best meta-questions I know for generating ideas for online tools for scientists. Almost every aspect of a scientist’s workflow suggests new possibilities for online tools.

  7. An excellent essay, thanks for the good read. I’ve written about similar things on my own blog and reached similar conclusions:
    http://www.cshblogs.org/cshprotocols/2008/02/14/why-web-20-is-failing-in-biology/

    http://www.cshblogs.org/cshprotocols/2008/04/03/web-20-for-biologists-are-any-of-the-current-tools-worth-using/

    The big problems are a lack of time and incentive.

    Some responses to points made in your essay:

    On Wikipedia–You comment that the reason more scientists aren’t writing Wikipedia articles is, “that contributing to Wikipedia isn’t really science.” In some ways you’re right–being part of an anonymously authored group-posting isn’t something you’re likely to get career credit for. But there are other reasons as well, as anyone who has tried to contribute to Wikipedia can tell you, number one being the near impenetrable web of rules surrounding participation, and the overly zealous guardians of those rules. From my own experience, when trying to add a few facts or correct a few errors in a Wiki entry, every single attempt I made was immediately deleted, citing some obscure and inscrutable rule. Rather than try to challenge the gatekeepers, I simply gave up on trying to help out. As pointed out elsewhere, experts are not welcome on Wikipedia (noted here too), and scientists are likely to be experts in the field in which they’re writing. So, not only is there no incentive, they’re also likely to run into hostile opposition to their participation.

    On Extreme Openness–one of the problems with this concept is that it will inevitably lead to even more information overload than we currently face. Right now, we’ve at least got a layer of editorial oversight, vetting of articles before they’re published. If you instead just dump everything into a bin for the user to sort out, that’s a huge timesink. I know, I know, “the wisdom of the crowds” and all that, but still, someone has to read all of those articles. How much of your week are you willing to commit to reading the bottom 5% of the dregs of that bin? I’m not talking about the bottom 5% of Nature articles, I’m talking about the bottom 5% of articles so weak that they are unpublishable by even the lamest of journals. Someone has to read those papers and tag them as the dregs. Is it you? Is this a good way to spend your valuable time? Also, as you note, we need much better tools to sort through all this as well, and extreme openness certainly can’t work without them. As recently noted, a system like this leads to conformity and consensus (at least so far) rather than a widespread range of papers and opinions. And that’s a problem.

    On the OA movement–yes, they’ve certainly made great strides and are having a strong influence on the culture of science. That said, as recent reports point out, the economics of the OA movement have so far failed to show that they can be sustainable across a wide variety of publications. We may just be in early times of a long term movement, but I’d be hesitant to throw out the system we currently have for an unproven one.

    On quantifying one’s contribution–that’s the million dollar question here, how do we incentivize participation? Right now, all that matters is what influences funding committees, hiring committees and tenure committees. Can you really get these groups to give weight to a Slashdot style karma rating? If so, isn’t such a system open to gaming? A well-known blogger with lots of online friends who all link to his blog and papers would end up with a higher rating than someone without the social networking skills that does better, more groundbreaking science. Politics already plays way too big of a role in scientific success these days, and I fear a system based on one’s social networking skills would only exacerbate the problems (although you’d shift the power to a different group–perhaps that’s why those already using science social networks are so enthusiastic about them, as mass uptake would make them the real power brokers of science).

    Citations from arXiv–you mention the ranking of paper quality in arXiv by the number of citations received as a success story–but isn’t this the same metric that’s currently under fire from so many people? How is this any better than Thomson/ISI’s Impact Factors?

    On collaboration–first, regarding the level of discourse on open source programming forums–wouldn’t one argue that open source programmers are under different economic and career pressures than academic or industrial scientists? The very nature of an open source program is quite different from scientific results in a competitive environment with limited funding and limited job space. If you create a program that serves my needs, I can use it. If you get scientific results that are the same as those I’m working on, I can’t use those to further my career. Hence, I’m under pressure not to participate with helpful advice.

    On FriendFeed–this, like Twitter, is a tool that many are extremely enthusiastic about, but that I can’t see being adopted by many scientists. Simply put, who has the time? It’s one thing to ask for the time required to participate, to write blog posts, tag papers for del.icio.us, to make YouTube videos and to constantly alert people as to which Starbucks you’re sitting in and which type of latte you’re drinking. It’s quite another to ask people to follow hordes of others as they do the same thing. How many hours a day am I supposed to spend seeing what others are blogging about, tagging and drinking at Starbucks? I can barely keep up with my e-mail, now I’m supposed to read micro-blogs?

    On the Economics of collaboration–there’s a big difference between buying a pair of shoes and devoting your extremely valuable time and thousands, if not hundreds of thousands of dollars in reagents and equipment to a collaboration. Trust is established not by chatting with a new friend, but by following the publication record of your potential collaborator. What have they accomplished? Can they really do what they claim they can do? If so, they will have published it, or at least have a respectable record established. I don’t think you’ll ever see people meeting strangers in chat rooms and working together, at least not on a large scale. The stakes are too high. Would you be willing to roll the dice on a year of one of your graduate student’s careers on some stranger you met online?

  8. Hi David – Thanks for your thoughtful reply. A few responses:

    On Wikipedia: to clarify, my point was not that I personally believe contributing to Wikipedia isn’t science. It’s that in the early days of the Wikipedia project, the scientific community regarded contributions by scientists to Wikipedia as scientifically worthless. Personally, I disagree with this point of view, and believe high-quality contributions should be valued, essentially as an outreach or teaching activity. I hope this was clear from what I wrote.

    (I do to some extent agree with the rest of your comments about Wikipedia. But there are other wiki projects which are less rule-bound and more expert friendly, and they still suffer the same problems that contribution isn’t regarded as “real science”.)

    On information overload: I strongly agree that this has the potential to be a huge problem, and whether we’ll rise to meet the challenge is an open question. I find it hard to believe that we won’t at least match the signal to noise ratio in the current system – sorting by journal is a terrible way of filtering information.

    I disagree that someone has to read the bottom 5% of papers and tag them as the dregs. To point to the analogous case of the open web, nobody needs to read all the spam and zero-content pages on the web for Google to realize that they are worthless. Lack of affirmation from credible information sources is enough for those pages to disappear to the bottom of Google’s rankings.

    My (speculative) belief is that openness will provide an opportunity to improve the signal to noise ratio, and thus reduce the problem of information overload. The current relatively closed nature of the research literature means that it’s very difficult for third parties to find innovative new ways to filter research content. If the web had been closed in the same way, Google and all the rest would never have been created, and we’d be stuck with some kind of government-issued telephone directory that made the web unusable.

    Of course, this is just speculation. Whether it’s correct will only be determined if we actually do the test. It looks increasingly likely that that chance will come in the not-too-distant future.

    On OA: There are two types of OA – one based on self-archiving, and the other based on OA journals. The business models of the two are quite separate issues. Your comments only pertain to OA journals. The success of the physics arXiv shows that the self-archiving model can work exceptionally well. It is this self-archiving model that the NIH mandate I discuss in my essay addresses.

    OA for journals is a separate issue, with separate business concerns. I agree that we need to watch the sustainability of these business models very carefully. Timo Hannay and John Wilbanks offer thoughtful complementary perspectives here.

    On quantifying contributions: you’re absolutely right, there’s all kinds of questions here. It really needs a separate essay. I hope to come back to it at length later.

    On citations to the arXiv: you’ve misconstrued my point. I wasn’t trying to say SPIRES is a better system for measuring impact than citation services which only include traditional journals. I was simply saying that by building a reliable citation tracking service SPIRES has caused a substantial cultural shift in how particle physics is done. That’s certainly true. Whether it’s an improvement as a quality measure is debatable; my own opinion is that it’s a wash. The benefit, though, is that there are now important papers available which otherwise would never have seen the light of day. (I can provide many examples on request.) And so the net effect is positive.

    On FriendFeed, I didn’t say I believe mass adoption by scientists is likely. In fact, I explicitly say the reverse, and explain why. The argument I give is an abbreviated form of yours. (The argument only holds for purposes of collaboration; scientists may well adopt FriendFeed for other reasons, if it goes mainstream.)

    On the economics of collaboration: you write “I don’t think you’ll ever see people meeting strangers in chat rooms and working together, at least not on a large scale. The stakes are too high.” Linux kernel development works exactly this way. And the stakes are huge: I recently read that approximately 70 percent of Linux kernel development is done by people who are paid to do it.

  9. Michael, thanks for the further thoughts. A few quick responses.

    —I hope this was clear from what I wrote.—

    Yes, very much so, you were stating that many scientists don’t see it as an important career-advancing activity, which at this point, it’s not. The problem with having lots of Wikis is that it creates a different sort of overload, where you don’t know which one to contribute to, and each one ends up with some good material and lots of holes. If any one picks up steam and becomes dominant, it seems like the same sort of politics will spring up around it that have already sprung up around Wikipedia. Nature seems to abhor a vacuum and human beings seem to need to create hierarchies and rules where there are none. To me this is just replacing the current publishing hierarchy with a self-elected hierarchy, one that seems more interested in enforcing rules than in creating quality content.

    —sorting by journal is a terrible way of filtering information—

    In some ways yes, but I think (and I’m an editor, so I’m biased) that professional filtering is worth paying for. As the founders of Slashdot have noted, without oversight, mediocrity rises to the top of crowd rated systems (they use “Everybody Loves Raymond” on TV and “man gets hit in the crotch” on YouTube as their examples). If you ever get a chance to see the kinds of papers that are submitted to top journals, you’d see that this first level of filtering creates a huge difference.

    —I disagree that someone has to read the bottom 5% of papers and tag them as the dregs. To point to the analogous case of the open web, nobody needs to read all the spam and zero-content pages on the web for Google to realize that they are worthless. Lack of affirmation from credible information sources is enough for those pages to disappear to the bottom of Google’s rankings.—

    I don’t know, it’s fairly easy to filter a lot of those pages out along the same lines that we use to filter spam e-mails. It’s not so easy to do the same with scientific papers. Can you automatically filter for significance or originality? And also, Google rankings are a popularity-based system (who links to whom, as one example). Again, I worry that we should be striving for quality papers, not for papers written by people who network well or have lots of friends.

    —My (speculative) belief is that openness will provide an opportunity to improve the signal to noise ratio, and thus reduce the problem of information overload.—

    This is really the hope, that semantic tools will help solve this problem. So far, we’re not there yet.

    —There are two types of OA – one based on self-archiving, and the other based on OA journals. The business models of the two are quite separate issues.—

    True, but self-archiving needs a business model as well. There are costs involved, and right now, most institutions are forcing those costs on their already burdened libraries.

    —On citations to the arXiv: you’ve misconstrued my point. I wasn’t trying to say SPIRES is a better system for measuring impact than citation services which only include traditional journals.—

    Okay, fair enough. Citation services are increasingly being seen as really poor ways to judge quality of papers, journals and researchers though. Either way, we need better measurements, both for arXiv and for journals.

    —On FriendFeed, I didn’t say I believe mass adoption by scientists is likely. In fact, I explicitly say the reverse, and explain why.—

    I guess I was hoping that someone would explain to me why people are so excited about it (and Twitter). Are they just folks with way too much time on their hands?

    —Linux kernel development works exactly this way. And the stakes are huge: I recently read that approximately 70 percent of Linux kernel development is done by people who are paid to do it.—

    But as I said, if you add something to Linux, I can use it for my own commercial purposes. It doesn’t really matter who does the work. If you discover something scientifically and publish/patent it, I can’t use it to further my career/business. So it does matter who does the work.

  10. Hi David,

    - On Wikipedia again: you wrote in your comment and on your blog: “You [Michael] comment that the reason more scientists aren’t writing Wikipedia articles is, “that contributing to Wikipedia isn’t really science.”” This will give your readers the wrong impression of what I think. I certainly think high-quality contributions to Wikipedia are valuable scientific contributions, and should be viewed as such.

    (I hope that’s clear).

    Moving on, your argument against Wikipedia is summed up in your statement: “To me this is just replacing the current publishing hierarchy with a self-elected hierarchy, one that seems more interested in enforcing rules than in creating quality content.”

    It seems you find very little value in Wikipedia. Obviously, it’s fine by me if that’s your opinion. But it’s clear a huge number of people find Wikipedia a useful, if sometimes flawed resource. In my opinion, the Wikipedians have built something extremely valuable, and that something was not achieved by the existing journal system. For the majority of people, it’s not just business-as-usual, as your comment implies.

    On journals and filtering: as a physicist, my best information source is the preprint arXiv, where moderation for the great majority of papers is done in seconds or minutes. The additional filtering provided by top-tier journals like Physical Review Letters adds only a little additional value, and that value is small compared with the value subtracted because of the time-delay before publication. Physicists have voted with their feet on this – everyone reads the arXiv, but few people pay as much attention even to the top-tier journals.

    On filtering in general: you write “I don’t know, it’s fairly easy to filter a lot of those pages out along the same lines that we use to filter spam e-mails. It’s not so easy to do the same with scientific papers. Can you automatically filter for significance or originality? And also, Google rankings are a popularity-based system (who links to who as one example). Again, I worry that we should be striving for quality papers not for papers written by people who network well or have lots of friends.”

    This doesn’t address my point: Google does a very good job of filtering out low quality webpages, without the need to tag those pages as bad. It does this based on link analysis, not the Bayesian filters used by the spam programs. The formal structure PageRank exploits is virtually identical to the structure available in a citation network of scientific papers. I think there’s every reason to believe that this approach will work well for the scientific literature, and that your original argument (no-one wants to tag bad papers) is a red herring.
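    To make the structural analogy concrete, here is a toy power-iteration PageRank run over a hypothetical citation graph (the papers and the edges are made up for illustration). Well-cited papers rise to the top; the uncited low-quality paper sinks to the bottom without anyone having to read or tag it.

```python
# Toy PageRank over a made-up citation graph, illustrating the point
# that Google-style link analysis carries over to "paper A cites B":
# bad papers sink simply for lack of citations from credible papers.

citations = {            # paper -> papers it cites
    "A": ["B", "C"],
    "B": ["C"],
    "C": [],             # a foundational, much-cited paper
    "spam": ["A"],       # a weak paper that nobody cites
}

def pagerank(graph, damping=0.85, iters=100):
    papers = list(graph)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in papers}
        for p, cited in graph.items():
            if cited:
                share = damping * rank[p] / len(cited)
                for q in cited:
                    new[q] += share
            else:  # dangling node: spread its rank evenly
                for q in papers:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

ranks = pagerank(citations)
print(sorted(ranks, key=ranks.get, reverse=True))
```

    The uncited paper ends up last purely from the link structure – no human tagging required, which is the red-herring point above.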

    - On the self-archiving of OA, and the need for business models: Andrew Odlyzko has studied the economics of this, and concluded that the self-archiving model costs about two orders of magnitude less per page than a standard journal publication. I suspect that cancelling the bottom one percent or so of journal subscriptions and putting the rest towards repositories would be a no-brainer for many libraries.

    - On FriendFeed: That’s pretty dismissive language. When someone is doing something whose value I don’t understand, my first assumption is not that they are layabouts with too much time on their hands. I find FriendFeed useful, and I value my time greatly.

    - On Linux: I’m not following you here. When someone else makes a scientific discovery, of course I can reuse it and build upon it. Isn’t that the point?

  11. Hi Michael,

    This is a great essay! I’m looking forward to the book.

    Just some comments. You write

    “two challenging tasks must be achieved: (1) build superb online tools; and (2) cause the cultural changes necessary for those tools to be accepted. The necessity of accomplishing both these tasks is obvious”.

    I don’t think it is obvious to everybody. Science has worked very well without any of that and the inertia among academics is huge. People think we can go on doing research like we’ve always done. You know that I keep repeating we can’t, because the environment we live in has changed whether we like that or not, and we need to adapt. But I don’t think many people actually get that message. So science is falling behind.

    Further, regarding the question of incentives. I’ve recently written a brief post where I mentioned a suggestion that I’ve been thinking about for a while. Since it addresses the points you are discussing here, I’d be interested in hearing your opinion: The workload of scientists is too high. They are doing too many things, are under too much pressure, and are constantly short on time. One result of that is that they look for shortcuts when fulfilling tasks or constantly forget their postdocs. Another result is increasing specialization. That specialization can turn into a problem if it leads to fragmentation and hinders communication.

    So, my suggestion was to make possible a specialization in task, not in field, such that one promotes structural diversity instead of specialization in topic. There are different tasks researchers can do besides their research. Public outreach, for example, is one of them. Teaching could be another. Refereeing is another task that I think is completely underappreciated today. It takes a lot of time, but one doesn’t get any credit for it. Why not make a certain percentage of people in a department ‘referees’ – those to be preferably (not exclusively) addressed with referee requests (papers/proposals)? They would have more time, and the opportunity to actually figure out how to do this best.

    Now to come to what you write about: one other task that is completely underappreciated today is community building. I will call it that for lack of a better word; what I mean is that there’s a kind of person who you can go to and say: I’m working on something like this, and he or she will come up with a list of names of people that might be interesting to talk to. It’s that kind of person who knows something about everything and everybody and who will make a connection from your work to others’ work. One needs such people to some extent – better online tools can help, but I don’t think they can ever replace personal connections. Similarly, there are the kinds of people who ‘open up’ science, and so on, whom you mention in your post. A group leader should of course, when hiring people, try to put them together such that these tasks are all covered. But this very often doesn’t work because a) people don’t pay attention to it, b) it is sometimes hard to know, and c) the incentives are on producing output, and many assume the straightest way to get there is to hire people who produce much output (which completely neglects the importance of their work environment). Therefore my suggestion that researchers could in some way point out more clearly what their skills are (besides research, I mean).

    Best,

    B.

  12. —On Wikipedia again: (I hope that’s clear)–

    Absolutely, you’re describing the prevailing opinion, not your own.

    —It seems you find very little value in Wikipedia.—

    I think Wikipedia is a brilliant concept, I’m just frustrated and disappointed with what it’s turned into. It’s pretty much become the petty fiefdom of around 500 people who decide what goes in and what doesn’t. I don’t see that as any great improvement over a slightly smaller editorial board. In fact, it’s a step down in some ways. A professional editor has strong motivation for striving for quality–if the publication’s quality drops, they lose their job and their family goes hungry. As far as I can tell the small group that controls Wikipedia does so because 1) they have the time to do so, 2) they enjoy doing it and 3) they enjoy making and enforcing rules. As such, the end result is one that strives for consensus and conformity to the rules, rather than the highest quality information. Expertise is frowned upon. That’s not what I want in a reference work.

    —On journals and filtering: as a physicist, my best information source is the preprint arXiv, where moderation for the great majority of papers is done in seconds or minutes. The additional filtering provided by top-tier journals like Physical Review Letters adds only a little additional value, and that value is small compared with the value subtracted because of the time-delay before publication. Physicists have voted with their feet on this – everyone reads the arXiv, but few people pay as much attention even to the top-tier journals.—

    Perhaps biology (my background) is just different. I would never expect to understand a paper in seconds or minutes. I would never peer review or cite a paper that I hadn’t spent hours working through. Sure, there are some that are obviously quickly dismissed, but the vast majority fall into the murky middle ground. As an editor, I usually have to go through 5 or 6 rejected requests for each peer reviewer who agrees to look at a paper, every rejecter stating that they don’t have the time to give the paper a fair shake. I worry that asking people to vastly increase their reviewing levels is not going to work, or you’re going to end up back at Wikipedia, with a smaller group of people with time to spare doing the lion’s share of the reviewing and in essence just replacing the current editorial hierarchy (with the motivational worries mentioned above).

    —This doesn’t address my point: Google does a very good job of filtering out low quality webpages, without the need to tag those pages as bad. It does this based on link analysis, not the Bayesian filters used by the spam programs. The formal structure PageRank exploits is virtually identical to the structure available in a citation network of scientific papers. I think there’s every reason to believe that this approach will work well for the scientific literature, and that your original argument (no-one wants to tag bad papers) is a red herring.—

    But Google is a flawed index at best. As you note, it ranks based on popularity, not quality. With popularity as a measure, one would assume Britney Spears (104,000,000 results) is a better musician than Beethoven (36,700,000 results). As an example, do a Google search for “Mark Twain”. What you get is a page of results about Mark Twain (the Wikipedia entry being the top ranked result), and you have to go way down the list to actually find something written by Twain himself. Which is more valuable to me, as a scholar researching Twain, the original works or commentary on the works? Also again, I worry that well-networked scientists will thrive over people doing better work with fewer friends, fewer links, fewer reviewers. A system like this selects for consensus, not for quality. A recent study shows that fewer and fewer papers are being cited despite access to more and more literature online. Either this means that only the best papers are being selected for citation or more likely, people just do what everyone else is doing. There’s a strong trend toward citing reviews of reviews of reviews, rather than the original literature, and that’s worrisome.
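    The link analysis being debated above can be made concrete with a toy example. The sketch below runs a PageRank-style iteration over a small, invented citation network; the paper names and link structure are hypothetical, purely for illustration:

```python
# Toy PageRank over a tiny, invented citation network.
# Each key "cites" the papers in its list; names are hypothetical.
def pagerank(links, damping=0.85, iterations=50):
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new = {node: (1.0 - damping) / n for node in nodes}
        for node, cited in links.items():
            if cited:
                # A paper passes its rank evenly to the papers it cites.
                share = damping * rank[node] / len(cited)
                for target in cited:
                    new[target] += share
            else:
                # A paper citing nothing spreads its rank evenly to all.
                for target in nodes:
                    new[target] += damping * rank[node] / n
        rank = new
    return rank

citations = {
    "A": ["C"],
    "B": ["C"],
    "C": [],
    "D": ["A", "C"],
}
ranks = pagerank(citations)
print(max(ranks, key=ranks.get))  # "C", the most-cited paper
```

    Highly cited papers accumulate rank, and citations from highly ranked papers count for more; that recursive weighting, rather than raw popularity counts, is the structural analogy to the scientific citation network under discussion.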

    — On the self-archiving of OA, and the need for business models: Andrew Odlyzko has studied the economics of this, and concluded that the self-archiving model costs about two orders of magnitude less per page than a standard journal publication. I suspect that cancelling the bottom one percent or so of journal subscriptions and putting the rest towards repositories would be a no-brainer for many libraries.—

    Here’s an essay discussing the economics and the drain on libraries, Open Access 2.0: Access to Scholarly Publications Moves to a New Phase.

    — On FriendFeed: That’s pretty dismissive language. When someone is doing something whose value I don’t understand, my first assumption is not that they are layabouts with too much time on their hands. I find FriendFeed useful, and I value my time greatly.—

    Sorry, my bad for the lame attempt to inject a little humor into the discussion. It does point out one of the real worries about science on the web. Language is imprecise, most scientists are not great writers, and readers are eager to take umbrage. I don’t want to read the literature of my field if it consists of flamewars and researchers calling one another Hitler, as most internet arguments eventually do. But to restate my point–99% of the scientists I’ve spoken to complain about not having enough time. I had lunch with a leading Drosophila neurobiologist last week who told me that if he spends the whole day on the computer, he feels incredibly guilty, that he wasted a day where he should have been working with his students getting research done. How useful is a tool like Twitter or FriendFeed for someone like that, who doesn’t want to spend a lot of time online?

    —On Linux: I’m not following you here. When someone else makes a scientific discovery, of course I can reuse it and build upon it. Isn’t that the point?—

    The difference being that if you’re a commercial Linux developer, you can use someone else’s work in your product; the originator of the code doesn’t matter, and you can still advance your career using it. For a scientist, the originator of the work does matter: they get career credit for it, and their discovery doesn’t directly boost your career. Sure, you can use it as a starting point for new work, but unlike Linux, you can’t repackage it and sell it as your own product.

  13. “Let me turn the question around: are you aware of any substantial progress that came from blog discussions?”

    I am not sure. In any case, it looks like a nice experiment which is why I enjoy being part of it.

  14. gregorylent permalink

    your thinking is too linear!! too much more of the same only more.

    science needs a whack on the head! the coffin of its beliefs is making it almost a joke when looked at from the edge. right up there with business and politics in the ossification sweepstakes.

    fortunately the internet and its ephemeralization is going to do the job. objectivity is going away, get used to it. unity is happening, and science (a lot like scythe) has nothing to offer, except for a bit of tech expertise.

    your intentions are good, but NOT radical enough

    next time shout and scream, it is necessary now, to bring about the change i know you know is needed

  15. Mike, if you are writing a book on the future of science, I hope you will include a section on the federative aspects of science … federation being comparably important (in practice) to the traditional virtues of truth and beauty.

    For example, from a federative point of view, what is particularly striking about quantum information theory is the very strong hierarchy of promises it makes about physical systems.

    These quantum promises are much stronger than the promises that nature makes about classical systems … this makes it likely that humanity’s ability to simulate and control quantum systems will soon exceed our ability to simulate and control classical systems.

    In consequence, the same simulation-based confidence that companies like Boeing enjoy in building the 787 is now becoming feasible for classical systems.

    The mathematical strength of these QIT promises, and the confidence engendered by them, is already (IMHO) creating substantial new federative opportunities in science and technology.

    Federative enterprises like the Digital Sky Survey, the Human Genome Project, the 787, Intel’s Silicon Cadence … these are just the beginning.

  16. Hi Bee – You’re right, not everyone sees science as something that might be improved through the right combination of cultural and technological changes. Science is a pretty insular culture, and scientists often seem to me to be surprisingly behind the times.

    On the workload of scientists: I completely agree with you that this has many, many negative consequences. I think the high workload is a consequence of two things: the competitive nature of science, and the architecture of scientific employment.

    As regards the competition, the basic problem is that there are too many people seeking not enough positions. So long as this is true, competition will be ferocious. The obvious thing to do is either to reduce the number of people, or to increase the number of positions.

    As regards the architecture of scientific employment, what I’m referring to is the way universities want their researchers to bring in grants, lead a group, do research, spend time with postdocs and students, teach undergrad classes, sit on governance committees, review grants and papers, do outreach, etc., preferably spending 10-20 hours per week on each task, resulting in a desired average working day of approximately 20 hours.

    As you say, a more realistic division of labour might produce great results.

    There is a widespread theory that a scientist’s best work is done when they are young, because their brains work best at that age. I don’t think it’s true. I think it’s because that’s the only time in their career most of them actually get to concentrate on science.

    On community building: Not much to add, beyond my agreement.

  17. Hi Michael,

    This is just to mention that, as a result of being what Bee describes as one of her resident Kaons, I was led to this recent post of yours. I just wanted to say how much I enjoyed, and for the most part agreed with, what you had to say. I also hope that this will form the basis of what you will bring to the table at the upcoming conference that you and she will be hosting at PI.

    Best,

    Phil

    Just to name a few books/articles on this topic that I have found especially interesting … MIT’s Kenneth Oye (of SynBERC) for Cooperation Under Anarchy … UCB’s Paul Rabinow (also of SynBERC) for Anthropos Today: Reflections on Modern Equipment … David Mindell’s recent (excellent) Digital Apollo: Human and Machine in Spaceflight … Booton and Ramo’s The Development of System Engineering (article) … the Army/Marine manual FM3-24 Counterinsurgency (especially the sections on nation-building, justice, and morality) … Burgelman’s case study of Intel Strategy is Destiny … Frans de Waal’s Chimpanzee Politics … Jane Goodall’s Reason for Hope … the IAS’s Jonathan Israel for Radical Enlightenment: Philosophy and the Making of Modernity, 1650-1750 and Enlightenment Contested: Philosophy, Modernity, and the Emancipation of Man 1670-1752.

    These authors don’t cite one another much (except for Goodall and de Waal), and yet they are all grappling with a common theme … the roles of natural history, philosophy, mathematics, science, and engineering in coordinating human endeavors.

    No doubt, the “Future of Science” will be conditioned by the confluence of these disciplines.

    I would greatly welcome learning of other folks’ favorite authors.

  19. Gosh, I accidentally omitted from my list a reference that is too good to skip … John von Neumann’s 1955 essay Can We Survive Technology (which can be found by searching Google Books).

    Mike, this essay was von Neumann’s 1955 attempt to foresee the future of science and its implications for humanity. He succeeds remarkably well … which is fortunate for us, because if von Neumann couldn’t do it, who could? :)

  20. Hi Michael,

    I agree with what you say. My husband just mentioned this post

    Who comments on scientific papers – and why?

    Sorry if somebody already mentioned it, I thought you’d be interested.

    Hi Phil,

    Your attention span is at least the lifetime of the proton ;-) Best,

    B.

  21. Hi Bee – There’s a discussion of that blog post that you might find interesting going on at:

    http://friendfeed.com/e/d2d42c45-ad2b-8903-f229-1594560e4394/Euan-Adie-Who-comments-on-scientific-papers-and/

  22. Hi Michael. This is all true, including the economics of trade.

    As long as people will bear the cost of the current system, things will not change. The problem with what you propose (and I often make related proposals on my blog) is that many people have a strong incentive to keep the current system working.

    One issue is money. Researchers are able to grow in power because they have more grant money which they use to build larger teams. Open collaboration would wipe out their edge. Indeed, funding a large team gives them an advantage precisely because the members of this team cannot easily collaborate with outsiders. (This, in turn, leads to the overproduction of Ph.D.s, but that’s another issue.)

  23. John – Thanks for the references, I will check some of them out.

  24. Daniel – if things won’t change, how do you explain the success of the OA mandates, or the success of the arXiv-SPIRES and now arXiv-citebase in gradually (still going slowly, but visible) changing the culture of physics? I think these are very viable strategies for change, that avoid the pitfalls you describe.

    eye-opening piece…. Useful knowledge + shareable knowledge interchanged in the second circle.

  26. Anand – I’m not sure what you mean. It looks okay to me.

  27. Dear Michael

    You compiled an impressive and thought-provoking essay. You will hear more from my side, but just let me challenge a notion of yours. You write:

    “two challenging tasks must be achieved: (1) build superb online tools; and (2) cause the cultural changes necessary for those tools to be accepted. The necessity of accomplishing both these tasks is obvious”.

    Isn’t it like the cultural change that would be required to convince people that driving e.g. heavily fuel-consuming and polluting SUVs (sport utility vehicles) is bad, while driving more ecological cars is better? A significant shift in thinking (and behavior) would be necessary, and society, or at least a considerable number of people, could live longer and/or in a cleaner environment. Hence, the advantages would be obvious. Analogously:

    The hampering of the science community and the stumbling of social interaction/know-how-sharing platforms have, in my opinion, much to do with differing interests between a single scientist and the science community as such: if I do not share my knowledge with the community, it hurts the community insignificantly, while I can profit significantly from the achieved information asymmetry. Of course, if all scientists behaved like this, science would have a much bigger problem.

    As much as the world would if all Chinese drove SUVs.

    Hence, the SUV driver and the scientist unwilling to share his information or knowledge are somehow related, I think. You get my point –

    This rationale is actually an instantiation of the famous “moral hazard” issue. See also here:

    http://en.wikipedia.org/wiki/Moral_hazard – you may consider that in your book.

    What you suggest is to change that attitude via the necessary cultural shift you mention. I would doubt that such a shift is realistic, at least not in useful time spans. If the world were completely polluted, millions of people died in front of your door, and barrel prices doubled and doubled again, then SUV buyers might reconsider and change their behavior. I cannot think at the moment of similarly obvious means by which scientists’ behavior could be changed. Changing SUV behavior is already challenging, even though there the disadvantages are extremely obvious.

    There will be opportunists benefiting from more altruistic acting people as long as there are different people.

    Perhaps drawing on some personal experience in neuroscience/biology, I would strongly suggest taking into consideration human nature, its key drivers, and the core motivating factors driving scientists. With that in mind, we may come up with realistic and powerful strategies for shaping the future of science.

    I am looking forward to your book and maybe even a reply of yours. Again, thank you for your great essay!
    Faithfully,

    Pascal

    Pascal – yes, it’s closely related to information asymmetry, although I don’t see much connection to moral hazard. In economic terms, it’s closer to the problems described by Mancur Olson in The Logic of Collective Action.

    I’m curious, Michael, how long did it take you to write all this? Very nice read. :D

  30. James – Thanks for the compliment. It took me almost a month of full-time work, including conference attendance and preparation of slides. I usually write far more quickly, but this was a particularly difficult piece to write.

  31. Michael — Kudos for the effort. I have subscribed. :)

  32. Thanks for this insightful essay Michael. I’ve been thinking about distributed co-creation within the cultural organisation context for some time now. Your essay points to the ways in which scientists and science communicators in museums might consider their practice to develop more participatory approaches to knowledge sharing. Do you have any thoughts on scientists in the cultural sector?
    Cheers

  33. Dear Michael,

    Thank you for your extensive research on this topic. We’d like to bring to your attention a relatively new (launched December 2007) website designed specifically to address the concepts of online scientific collaboration and communication for a special niche group of scientific, policy, and advocacy stakeholders – http://www.AltTox.org. This website provides content, events, news, and discussion forums for professionals working in the field of toxicity testing, specifically to promote communication and collaboration that will support the development and implementation of new, non-animal based methods for toxicity testing. The static content section and invited commentaries from experts in the various fields of toxicity testing are provided as background information and as material to stimulate forum discussions. There are a number of ways that stakeholders can participate in the website: forum discussions, providing articles or commentaries, nominating experts, submitting informational listings or events, completing the user feedback survey, etc.

    As you have observed in your posting, scientists and other professionals are somewhat reluctant to participate in the forum discussions. Hundreds of readings of a post will occur without any written response. We are remaining optimistic that the comfort level of participants will increase over time. We are also providing the forum moderators with all of the suggestions we can think of to stimulate their support and participation in developing relevant discussions within their particular forum.

    You said: “To create an open scientific culture that embraces new online tools, two challenging tasks must be achieved: (1) build superb online tools; and (2) cause the cultural changes necessary for those tools to be accepted.” We have built what we think is a superb online tool, and we are working on and waiting patiently for the cultural changes. One challenge is that our stakeholders are mid- to senior-level professionals. The success of the site thus far is therefore quite remarkable, considering that our generation did not grow up with the Internet. We remain optimistic that AltTox.org will prove to be useful and add value to the work of the stakeholders, which is usually the prime motivational force in driving collaboration.

    We welcome your comments and insight.

    Sherry Ward, AltTox Contributing Editor
    Martin Stephens, AltTox Management Team
    George Daston, AltTox Management Team
    Loree Tally, WebMaster

  34. This was an excellent article that generated a lot of thoughtful comments. I am a scientist who specializes in teaching “How to Facilitate Successful Research Collaborations” to other scientists in both industry and academia. My experience is based in the real world. I headed up Immunex Corporation’s Extramural Research Program for four years. During that time my group oversaw some 2500 collaborations with over 1000 different research groups worldwide. In addition to these, I also set up some 150 collaborations during my bench research days. In the past year I have written a number of articles on a variety of collaboration subjects. I offer free reprints of these on my website; please go to http://www.lymanbiopharma.com/FreeReprints.php to request reprints of any of the following articles:
    “Must You Collaborate With Another Scientist?” published in Drug Discovery News Vol. 3 #12, 12 (2007).
    “Partnerships Expedite Product Advancement” published in Genetic Engineering and Biotechnology News Vol. 28 #4, 56-57 (2008).
    “Six Questions to Ask Before Setting Up a Scientific Collaboration” published in PharmaVoice March (2008).
    “When Collaborations Compete” published in The Scientist 22 #5 28 (2008).
    “Preventing Discord in Research Collaborations” published in Genetic Engineering and Biotechnology News Vol. 28 #13, 66-67 (2008).

    I certainly agree with the basic premise that collaborations can be a powerful tool in moving science forward. However, there are a number of potential problems that can crop up in a collaboration that will spoil even the best intentions of working together. I look forward to seeing whatever tools are developed on the Web to facilitate future research collaborations worldwide. Stewart Lyman

  35. Angelina – In a cultural context, the incentive structure as I understand it is usually rather different, so the whole story seems to play out in a different way, and the remedies I suggest wouldn’t necessarily directly apply. Analogous remarks may also be made about research in an industrial context. Still, many of the broader themes remain true: the mere fact that these tools would be in everyone’s collective interest to adopt does not mean that individual scientists will necessarily find it in their own self-interest.

    If you have time to outline your own thoughts on the subject of scientists working in a cultural context and using web 2.0 tools (either in comments or on your own blog), I’d be very interested to read them. This is something I’d really like to know more about!

  36. Sherry – best of luck with the site. I’ll be interested to hear how it goes.

    I was thinking about the role people like John Baez played on the Usenet newsgroup sci.physics. They started doing something I would call Science 1.0. Maybe it would be interesting for you to mention.

    Great essay

  38. Hi Michael
    I take your point about incentive – a difficult issue in cultural organisations at the best of times!
    As we develop our next conference (http://nlablog.wordpress.com/conference-2009/) some of the key questions we are considering include:
    - the role cultural institutions play in the development of general understandings of science
    - how is scientific knowledge framed, problematised, created and disseminated in the Web 2.0 environment.

    We ask: if the public knew more about scientific knowledge, would they contribute to controversial debates, and if so, how might this change or challenge science policy in the cultural sector?

    We also ask how search engines, social networks, and Web 2.0 will affect science in knowledge institutions: will they extend or submerge them?

    I will be speaking with a number of people over the next little while to try and develop a broader understanding of these issues. I’d be interested in your thoughts.

  39. Sonali Bankal permalink

    Michael,

    Thank you for your interesting analysis on the impact of the Internet on scientific collaboration. My agency (PJA) and Bioinformatics LLC have been tracking how scientists use social media. Our latest report highlights how scientists are taking up the open access publishing model. The Ebook is available for download at http://www.lifesciencesocialmedia.com.

    Please feel free to visit our site, check out this report and leave us a comment.

    Thanks!

    Sonali

  40. Lin permalink

    Hello, Michael

    Open science is an excellent idea! And it is my dream to build a system to make scientific collaborations more efficient. I think I have an idea which can solve many of the problems you mentioned above and I’d really like to share this idea with you.

    Could you give me your email address so that I can send it to you by next weekend? (I need some time to gather my thoughts together.)

    My email address is t4gl.mail@gmail.com

    Lin

  41. Lin – My email address is in the “Contact” section of my “About” page, http://michaelnielsen.org/blog/?page_id=181

  42. Lin permalink

    Hi Michael

    I think the Internet can not only give us a collective short-term working memory, it can also offer us a kind of long-term memory.

    The basic thing we need is a new, well-designed internet system, just like the scientific journal system was for its time (300 years ago).

  43. Michael wrote:

    “Is it possible to scale up this conversational model, and build an online collaboration market [4] to exchange questions and ideas, a sort of collective working memory for the scientific community?”

    In 2001 I started out building the Research Cooperative website with a customised php bulletin board (http://www.researchco-op.co.nz) (still online but closed), created with the help of a friendly researcher with coding skills. This was a good experience in trying to stimulate online collaboration (between authors, editors and translators) in the preparation of scientific papers, but it never gained steam. It was not as interactive as I wanted, and members who joined could not easily learn about each other.

    Then along came a new breed of social networking system – ning.com, in 2007-08, with its generic platform for creating social networks for anything.

    I have used this to create a new platform for the Research Cooperative, and we currently have more than 600 members. BUT – there is still a distinct lack of steam. There are many personal and cultural barriers to open collaboration that are more significant than the technical barriers.

    I now see myself as trying to change the culture around research authors seeking (or not seeking) help from their peers, as well as from professional editors and translators.

    It is an uphill battle, and sponsorship for effective promotion could do wonders, but at the moment, I am happy to see how The Research Cooperative develops in slow motion before really attempting to speed up development. Running an experimental website is a bit like trying to fix a bicycle while riding it over the top of a hill in the dark. I would like to get the headlights fixed very soon!

    Best regards, Peter (National Museum of Ethnology, Japan)

  44. Dear Michael,

    good essay!

    I think in many ways Neil Sloane’s Online Encyclopedia of Integer Sequences (OEIS),
    ( http://www.research.att.com/~njas/sequences/ )
    has been a great example of what you are driving at, even though it started with anything but “superb online tools”: the original version was stored on punched cards and was published in book format.

    I think a great part of its success is the very liberal approach Sloane took as to what to accept into his database. You don’t need academic credentials, nor does the sequence you want to submit need to be already known and published in some peer-reviewed journal, or to conform to some preconceived notion of what a combinatorially/mathematically important sequence should be (e.g. having a known generating function).

    Of course, the downside of this is that the OEIS has been periodically plagued by obsessive-compulsive submitters with lots of sequences of very questionable value. Apparently you cannot have openness and a very good signal/noise ratio at the same time. However, with a more closed system you would probably lose much of the interesting signal as well. And fortunately, good search techniques help to separate the wheat from the chaff.

    The people who submit sequences, comments and corrections to OEIS range from Princeton professors and Fields medalists to enthusiastic students and amateurs (the latter often working in some area related to mathematics, e.g. computing or chemistry), and of course it also attracts those people about whom it is very hard to tell whether they are crackpots or geniuses.

    But so far so good. If someone submits a very questionable sequence Axxxxxx to OEIS, it doesn’t take any value away from an established and important sequence like the Jacobsthal numbers:
    http://www.research.att.com/~njas/sequences/A001045
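    (As an aside, the Jacobsthal numbers just mentioned are defined by the recurrence a(n) = a(n-1) + 2*a(n-2), with a(0) = 0 and a(1) = 1, so they are easy to generate; a minimal sketch:)

```python
def jacobsthal(count):
    """First `count` Jacobsthal numbers (OEIS A001045):
    a(0) = 0, a(1) = 1, a(n) = a(n-1) + 2*a(n-2)."""
    seq = []
    a, b = 0, 1
    for _ in range(count):
        seq.append(a)
        a, b = b, b + 2 * a  # advance the recurrence by one step
    return seq

print(jacobsthal(8))  # [0, 1, 1, 3, 5, 11, 21, 43]
```

    They also satisfy the closed form a(n) = (2^n - (-1)^n) / 3, which is one of the many formulas recorded in the entry.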

    Now, the greatest value of OEIS is probably not that anybody can suggest new sequences, but that anybody can send *new comments* and *connections* between existing sequences.

    Consider that Jacobsthal sequence mentioned above. From the Wayback Machine snapshot taken on December 26, 2001:
    http://web.archive.org/web/20011226130934/http://www.research.att.com/~njas/sequences/eisBTfry00027.txt

    I count that the entry A001045 had at that time 26 rows (in its internal format), of which one was a %F line (formula), 3 were comment lines (%C), 7 were references to printed publications (%D), and 4 were links to other internet sites (%H).

    Now, on July 15, 2009, seven and a half years later, the entry
    http://www.research.att.com/~njas/sequences/?q=id%3aA001045&p=1&n=10&fmt=3
    contains 162 rows, with 39 formula lines, 49 comment lines, 16 references to printed publications (%D), and 18 references to net publications (%H).
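    The row counting described above can be mechanised: each row of an OEIS entry in the internal format begins with a two-character code (%F formula, %C comment, %D printed reference, %H link, and so on), so tallying an entry is just a matter of counting prefixes. A quick sketch; the miniature entry below is hypothetical and heavily condensed, for illustration only:

```python
from collections import Counter

def count_row_types(entry_text):
    """Tally the two-character %-codes of an OEIS internal-format entry."""
    counts = Counter()
    for line in entry_text.splitlines():
        # Internal-format rows start with a percent sign and a letter code.
        if line.startswith("%") and len(line) >= 2:
            counts[line[:2]] += 1
    return counts

# Hypothetical miniature entry (internal format, condensed):
sample = """\
%S A001045 0,1,1,3,5,11,21,43,85,171
%N A001045 Jacobsthal sequence.
%F A001045 a(n) = a(n-1) + 2*a(n-2).
%C A001045 A comment would go here.
%C A001045 Another comment.
%H A001045 A link to a net publication.
"""
print(count_row_types(sample))  # two %C rows, one each of %S, %N, %F, %H
```

    Applied to the real A001045 entry, a tally like this is how one tracks the growth in formulas, comments, and references over the years.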

    So, in essence, the OEIS is a collection of *answers*, as each sequence can be viewed as a solution to some combinatorial, number-theoretic, or other mathematical problem. However, we don’t yet know all the *questions* to which any particular sequence is the answer!

    Especially in combinatorics, the same sequence can pop up in widely differing contexts (“widely differing” at least from our own limited perspective!), and that is certainly one of the things that fascinates me most as I follow the knowledge accumulating in OEIS.

    Yours,

    Antti Karttunen,
    student of mathematics,
    and a “part-time associate editor” of OEIS.

  45. Jack Miller permalink

    A new tool for collaboration, Google Wave, is just getting started. Its many features include a playback mode, which allows one to play back a project from before or after one joined it. Check it out at Wave.Google.

  46. Jason permalink

    A couple of years ago, the Usenet group sci.physics.research was very active, with a lot of high-level discussions, but now it’s dominated by amateurs and crackpots. Do you have any idea why?

    Similarly, many years back there were a lot of physics blogs with a high ratio of deep scientific content to social commentary, politics, and other related stuff, but gradually there was a trend away from blogging on physics content.

  47. Jason – on sci.physics.research, I’ve never really followed it, and don’t have any insight to offer. It’s a general phenomenon, of course, that communities (online and offline) tend to deteriorate without continual creation by dedicated members. When people who are putting a lot of effort into making a community function reduce that effort, the community can crash. I’d guess that’s what happened on spr; certainly I’ve seen it in other communities.

    Regarding your second point, I’m not sure which blogs you’re referring to (“many years back”). Maybe you’re thinking of the blogs created by the high-energy physics community for the world year of physics (2005)? Those blogs were only intended to last for a year. For people interested in high-energy physics I imagine that the fact most of those blogs shut down at the end of 2005 probably seemed like a precipitous drop in quality in the physics blogosphere. From my point of view, the quality has actually improved a lot since then, but perhaps that’s because my interests are different, and I never followed the HEP blogs.

    What I find missing here is any hint of realism about the social side of the socio-technical architectures that we’ve been building over the last half-century, just within my own memory.

  49. Dear Michael and All:

    I share your concerns about the apparent failure of most scientific publishers and communities to do better than what you called “A failure of science online: online comment sites”.

    At the same time, however, I would like to point out that some large scientific publishers and communities have already made a lot of progress over the past decade and at large scales.

    Over the past decade, many thousands of scientific comments have been posted in the interactive open access journal Atmospheric Chemistry and Physics (ACP, http://www.atmospheric-chemistry-and-physics.net) and in over a dozen sister journals published by the European Geosciences Union (EGU, http://www.egu.eu).

    Many of the interactive comments published by the authors, referees, and readers of scientific papers published in interactive open access journals have not only been read and considered but also formally cited by fellow scientists.

    The reasons why the interactive open access publishing approach of ACP and EGU works much better than most other trials of “peer commentary” in science are fairly straightforward and outlined in an excerpt from the article referenced at the end of this comment:

    “To summarise, the key features of the ACP interactive open access peer review system that help ensure maximum efficiency of scientific exchange and quality assurance are:
    1. Publication of discussion papers before full peer review and revision: free speech, rapid publication, and public accountability of authors for their original manuscript foster innovation and deter careless submissions.
    2. Integration of public peer review and interactive discussion prior to final publication: attract more comments than post-peer-review commenting, enhance efficiency and transparency of quality assurance, maximise information density of final papers.
    3. Optional anonymity for designated referees: enables critical comments and questions by referees who might be reluctant to risk appearing ignorant or disrespectful.
    4. Archiving, public accessibility and citability of every discussion paper and interactive comment: ensure documentation of controversial scientific innovations or flaws, public recognition of commentators’ contributions, and deterrence of careless submissions.
    Combining all of the above features and effects is the basis for the great success of ACP and its sister journals. Missing out on one or more of these features is the main reason why most if not all alternative forms of peer review practised in other initiatives for improving scientific communication and quality assurance have been less successful (less commenting, lower impact/visibility, higher rejection rates, larger waste of refereeing capacities, etc.). For example, features 2 and 3 are not captured in most of the initiatives mentioned at the end of Sect. 3.

    For several reasons also the ‘open peer review trial’ of the Nature magazine in
    2006 was not a good example and measure for the engagement of scientists in
    interactive commenting and public peer review on the internet. In that experiment, neither the authors of an article nor their colleagues and readers had much of an incentive to participate in the public discussion. The authors had
    to accept that their article was exposed in parallel to public scrutiny as well
    as to a closed peer review process where the referee comments remain nonpublic
    and where most of submitted manuscripts are rejected not because
    of a lack of scientific quality but because they are not deemed sufficiently
    exciting for the interdisciplinary audience of the magazine (ca. 93% rejection
    rate)2. For the likely outcome that a manuscript would not pass the closed
    peer review, it was not clear whether and in which form the rejected manuscript
    and the public comments would remain publicly accessible. As one
    might have imagined beforehand, this is not a very attractive perspective for
    scientists trying to get recognition for their most exciting results. Similarly,
    colleagues and readers had little incentive to formulate and post substantial
    comments, because their contributions would just have been an addendum
    to the closed peer review proceeding in parallel and would likely disappear
    afterwards. Fortunately, the publishers of Nature seem to have realised that
    permanent archiving and citability are key features of scientific exchange,
    and they have launched a more promising initiative titled Nature Precedings.
    There manuscripts can be published, openly discussed and archived in a similar
    way as in the discussion forums of interactive open access journals3.

    Unfortunately, however, it seems that the paramount importance of archiving
    and citability of manuscripts and comments has not yet been fully recognised
    by scientific publishers and societies. Following up on the success and leadership
    of the EGU in interactive open access publishing and peer review, the
    American Geophysical Union (AGU) has recently also started an experiment
    with ‘open peer review’. Instead of building on the very positive experience
    and success of the European sister society, however, AGU seems to follow the
    tracks of the unsuccessful earlier trial of Nature. Specifically, AGU announced
    that the discussion paper and all interactive comments shall be deleted after
    completion of the peer review process and final acceptance or rejection of the
    revised manuscript (Albarede, 2009). If AGU were to continue this approach,
    they would largely miss out on the effects detailed under point 4 above, and
    it appears questionable that the perspective of deletion after a couple of
    months will attract substantial commenting from the scientific community.
    Hopefully, the proponents of the AGU experiment will realise that the deletion
    of scientific comments is not only a discouragement for potential commentators
    but also a regrettable underestimation of the value of scientific discussion
    and discourse in the history and progress of science.

    Experience and rational thinking suggest that interactive open access peer
    review should be applicable and beneficial for journal publications in most
    if not all disciplines of scientific research (STM as well as social sciences, economics
    and humanities). For consistency and traceability, discussion papers
    and interactive comments should generally remain archived and citable as
    published, and they should be regarded as proceedings-type publications.
    Due to the proceedings character of discussion papers, the authors of revised
    manuscripts that may not have been accepted for final publication in the
    interactive open access journal to which they had originally been submitted
    can still pursue review and publication in alternative journals. As indicated
    above, such aspects are particularly important with regard to highlight magazines
    or journals in which the review process is not only aimed at ensuring
    scientific quality but also at high selectivity with regard to interdisciplinary
    relevance and visibility, which entails low probability of acceptance even for
    manuscripts of high quality (see Nature trial).”

    For more information see the web pages and references listed below.

    With best regards,
    Uli Pöschl

    http://www.atmospheric-chemistry-and-physics.net/index.html

    http://www.atmospheric-chemistry-and-physics.net/general_information/public_relations.html

    http://www.atmospheric-chemistry-and-physics.net/pr_acp_poschl_liber_quarterly_2010_interactive_open_access_publishing.pdf

  50. I was thinking about the role people like John Baez played on the USENET newsgroup sci.physics. They started to do something I would call Science 1.0. It might be interesting for you to mention.

Trackbacks and Pingbacks

  1. A Blog Around The Clock
  2. Bench Marks » Blog Archive » A Response to “The Future of Science”
  3. Plausible Accuracy » Blog Archive » A bit of an Open Access roundup
  4. What scientists are we talking about? « A Man With A Ph.D.
  5. What scientists are we talking about?
  6. me/dium » The Future of Science
  7. Michael Nielsen » Shirky’s Law and why (most) social software fails
  8. Michael Nielsens’ essay on the future of science « CogiDDo ergo sum
  9. FriendFeed: where the conversation happens « Freelancing science
  10. We Change Europe
  11. What’s on the web? (2 August 2008) « ScienceRoll
  12. Perspectives on Innovation » Blog Archive » Creating a Trust-based Collaboration Market
  13. A response to a repsonse « A Man With A Ph.D.
  14. elearnspace
  15. Robin Good's Latest News
  16. Aggregator of RSS feeds concerning web accessibility — Media Literacy: Making Sense Of New Technologies And Media by George Siemens - Aug 16 08
  17. Science in the Web 2.0 world « Social Media and Cultural Communication
  18. Media Literacy: Making Sense Of New Technologies And Media by George Siemens - Aug 16 08 | CNN-News
  19. The Future of IT and Science (1) « The Wobbling Mind
  20. Media Literacy: Making Sense Of New Technologies And Media by George Siemens - Aug 16 08 | Digg-it.info
  21. The future of science, gradical change, and tools for the people « I was lost but now I live here
  22. The future of science « Quantum Communications
  23. The Future of IT and Science (2): Accreditation and Validation Issues « The Wobbling Mind
  24. Academic Productivity » The failure of open science
  25. CoreEcon » Blog Archive » Getting with open science
  26. Some great posts and my opinions « Quang Phuc’s Weblog
  27. Science in the open » Thinking about peer review of online material: The Peer Reviewed Journal of Open Science Online
  28. Michael Nielsen » Science beyond individual understanding
  29. September aan de hand van de Tweets « Dee’tjes: over internet, zoeken, bibliotheken, research en nog zo wat
  30. davinci’s notebook » Blog Archive » Why my own website?
  31. Michael Nielsen » Five problems with doing research in the open
  32. Curious Cat Science and Engineering Blog » Toward a More Open Scientific Culture
  33. Post Your Purpose » Expert attention, the ultimate scarce resource in science, is very inefficiently allocated under existing practices for collaboration.
  34. Michael Nielsen » The role of open licensing in open science
  35. Michael Nielsen » When can the long tail be leveraged?
  36. Mathematics, Science, and Blogs « Combinatorics and more
  37. Michael Nielsen » Connecting scientists to scientists
  38. The Future of Science | Real Life of a PhD Student | jobs.ac.uk
  39. Reading Dr. Michael A. Nielsen « ripero’s blog
  40. On new modes of mathematical collaboration « What Is Research?
  41. Janet Street-Porter on the Internet Revolution « O’Really?
  42. P2P Foundation » Blog Archive » “Best” Science Blogs, Open Laboratory 08 available
  43. The Future of (Life) Scientists « Freelancing science
  44. 科学的未来 (The Future of Science) | azalea says
  45. My depression in Waterloo, part 6: meeting people at davinci’s notebook
  46. Michael Nielsen » Is scientific publishing about to be disrupted?
  47. science2.0 - 42 Degrees North Latitude
  48. The OpenScience Project » What, exactly, is Open Science?
  49. hyfen.net » Doing it in the open: Michael Nielsen at Science 2.0
  50. Collaborative mathematics, etc. « What Is Research?

Comments are closed.