
Is scientific publishing about to be disrupted?

by Michael Nielsen on June 29, 2009

Part I: How Industries Fail

Until three years ago, the oldest company in the world was the construction company Kongo Gumi, headquartered in Osaka, Japan. Kongo Gumi was founded in 578 CE when the then-regent of Japan, Prince Shotoku, brought a member of the Kongo family from Korea to Japan to help construct the first Buddhist temple in Japan, the Shitenno-ji. The Kongo Gumi continued in the construction trade for almost one and a half thousand years. In 2005, they were headed by Masakazu Kongo, the 40th of his family to head Kongo Gumi. The company had more than 100 employees, and 70 million dollars in revenue. But in 2006, Kongo Gumi went into liquidation, and its assets were purchased by Takamatsu Corporation. Kongo Gumi as an independent entity no longer exists.

How is it that large, powerful organizations, with access to vast sums of money, and many talented, hardworking people, can simply disappear? Examples abound – consider General Motors, Lehman Brothers and MCI Worldcom – but the question is most fascinating when it is not just a single company that goes bankrupt, but rather an entire industry that is disrupted. In the 1970s, for example, some of the world’s fastest-growing companies were companies like Digital Equipment Corporation, Data General and Prime. They made minicomputers like the legendary PDP-11. None of these companies exist today. A similar disruption is happening now in many media industries. CD sales peaked in 2000, shortly after Napster started, and have declined almost 30 percent since. Newspaper advertising revenue in the United States has declined 30 percent in the last 3 years, and the decline is accelerating: one third of that fall came in the last quarter.

There are two common explanations for the disruption of industries like minicomputers, music, and newspapers. The first explanation is essentially that the people in charge of the failing industries are stupid. How else could it be, the argument goes, that those enormous companies, with all that money and expertise, failed to see that services like iTunes and Last.fm are the wave of the future? Why did they not pre-empt those services by creating similar products of their own? Polite critics phrase their explanations less bluntly, but nonetheless many explanations boil down to a presumption of stupidity. The second common explanation for the failure of an entire industry is that the people in charge are malevolent. In that explanation, evil record company and newspaper executives have been screwing over their customers for years, simply to preserve a status quo that they personally find comfortable.

It’s true that stupidity and malevolence do sometimes play a role in the disruption of industries. But in the first part of this essay I’ll argue that even smart and good organizations can fail in the face of disruptive change, and that there are common underlying structural reasons why that’s the case. That’s a much scarier story. If you think the newspapers and record companies are stupid or malevolent, then you can reassure yourself that provided you’re smart and good, you don’t have anything to worry about. But if disruption can destroy even the smart and the good, then it can destroy anybody. In the second part of the essay, I’ll argue that scientific publishing is in the early days of a major disruption, with similar underlying causes, and will change radically over the next few years.

Why online news is killing the newspapers

To make our discussion of disruption concrete, let’s think about why many blogs are thriving financially, while the newspapers are dying. This subject has been discussed extensively in many recent articles, but my discussion is different because it focuses on identifying general structural features that don’t just explain the disruption of newspapers, but can also help explain other disruptions, like the collapse of the minicomputer and music industries, and the impending disruption of scientific publishing.

Some people explain the slow death of newspapers by saying that blogs and other online sources [1] are news parasites, feeding off the original reporting done by the newspapers. That’s false. While it’s true that many blogs don’t do original reporting, it’s equally true that many of the top blogs do excellent original reporting. A good example is the popular technology blog TechCrunch, by most measures one of the top 100 blogs in the world. Started by Michael Arrington in 2005, TechCrunch has rapidly grown, and now employs a large staff. Part of the reason it’s grown is because TechCrunch’s reporting is some of the best in the technology industry, comparable to, say, the technology reporting in the New York Times. Yet whereas the New York Times is wilting financially [2], TechCrunch is thriving, because TechCrunch’s operating costs are far lower, per word, than the New York Times. The result is that not only is the audience for technology news moving away from the technology section of newspapers and toward blogs like TechCrunch, the blogs can undercut the newspaper’s advertising rates. This depresses the price of advertising and causes the advertisers to move away from the newspapers.

Unfortunately for the newspapers, there’s little they can do to make themselves cheaper to run. To see why that is, let’s zoom in on just one aspect of newspapers: photography. If you’ve ever been interviewed for a story in the newspaper, chances are a photographer accompanied the reporter. You get interviewed, the photographer takes some snaps, and the photo may or may not show up in the paper. Between the money paid to the photographer and all the other costs, that photo probably costs the newspaper on the order of a few hundred dollars [3]. When TechCrunch or a similar blog needs a photo for a post, they’ll use a stock photo, or ask their subject to send them a snap, or whatever. The average cost is probably tens of dollars. Voila! An order of magnitude or more decrease in costs for the photo.

Here’s the kicker. TechCrunch isn’t being any smarter than the newspapers. It’s not as though no one at the newspapers ever thought “Hey, why don’t we ask interviewees to send us a polaroid, and save some money?” Newspapers employ photographers for an excellent business reason: good quality photography is a distinguishing feature that can help establish a superior newspaper brand. For a high-end paper, it’s probably historically been worth millions of dollars to get stunning, Pulitzer Prize-winning photography. It makes complete business sense to spend a few hundred dollars per photo.

What can you do, as a newspaper editor? You could fire your staff photographers. But if you do that, you’ll destroy the morale not just of the photographers, but of all your staff. You’ll stir up the unions. You’ll give a competitive advantage to your newspaper competitors. And, at the end of the day, you’ll still be paying far more per word for news than TechCrunch, and the quality of your product will be no more competitive.

The problem is that your newspaper has an organizational architecture which is, to use the physicists’ phrase, a local optimum. Relatively small changes to that architecture – like firing your photographers – don’t make your situation better, they make it worse. So you’re stuck gazing over at TechCrunch, who is at an even better local optimum, a local optimum that could not have existed twenty years ago:


[Figure: local_optimum.jpg – a curve with two local optima: the newspapers sit at one peak, TechCrunch at a higher peak, separated by a deep valley]

Unfortunately for you, there’s no way you can get to that new optimum without attempting passage through a deep and unfriendly valley. The incremental actions needed to get there would be hell on the newspaper. There’s a good chance they’d lead the Board to fire you.

The result is that the newspapers are locked into producing a product that’s of comparable quality (from an advertiser’s point of view) to the top blogs, but at far greater cost. And yet all their decisions – like the decision to spend a lot on photography – are entirely sensible business decisions. Even if they’re smart and good, they’re caught on the horns of a cruel dilemma.

The same basic story can be told about the disruption of the music industry, the minicomputer industry, and many other disruptions. Each industry has (or had) a standard organizational architecture. That organizational architecture is close to optimal, in the sense that small changes mostly make things worse, not better. Everyone in the industry uses some close variant of that architecture. Then a new technology emerges and creates the possibility for a radically different organizational architecture, using an entirely different combination of skills and relationships. The only way to get from one organizational architecture to the other is to make drastic, painful changes. The money and power that come from commitment to an existing organizational architecture actually place incumbents at a disadvantage, locking them in. It’s easier and more effective to start over, from scratch.

Organizational immune systems

I’ve described why it’s hard for incumbent organizations in a disrupted industry to change to a new model. The situation is even worse than I’ve described so far, though, because some of the forces preventing change are strongest in the best run organizations. The reason is that those organizations are large, complex structures, and to survive and prosper they must contain a sort of organizational immune system dedicated to preserving that structure. If they didn’t have such an immune system, they’d fall apart in the ordinary course of events. Most of the time the immune system is a good thing, a way of preserving what’s good about an organization, and at the same time allowing healthy gradual change. But when an organization needs catastrophic gut-wrenching change to stay alive, the immune system becomes a liability.

To see how such an immune system expresses itself, imagine someone at the New York Times had tried to start a service like Google News, prior to Google News. Even before the product launched they would have been constantly attacked from within the organization for promoting competitors’ products. They would likely have been forced to water down and distort the service, probably to the point where it was nearly useless for potential customers. And even if they’d managed to win the internal fight and launched a product that wasn’t watered down, they would then have been attacked viciously by the New York Times’ competitors, who would suspect a ploy to steal business. Only someone outside the industry could have launched a service like Google News.

Another example of the immune response is the recent spate of pieces responding to claims that newspapers are dying. Here’s one such piece, from the Editor of the New York Times’ editorial page, Andrew Rosenthal:

There’s a great deal of good commentary out there on the Web, as you say. Frankly, I think it is the task of bloggers to catch up to us, not the other way around… Our board is staffed with people with a wide and deep range of knowledge on many subjects. Phil Boffey, for example, has decades of science and medical writing under his belt and often writes on those issues for us… Here’s one way to look at it: If the Times editorial board were a single person, he or she would have six Pulitzer prizes…

This is a classic immune response. It demonstrates a deep commitment to high-quality journalism, and the other values that have made the New York Times great. In ordinary times this kind of commitment to values would be a sign of strength. The problem is that as good as Phil Boffey might be, I prefer the combined talents of Fields medallist Terry Tao, Nobel prize winner Carl Wieman, MacArthur Fellow Luis von Ahn, acclaimed science writer Carl Zimmer, and thousands of others. The blogosphere has at least four Fields medallists (the Nobel of math), three Nobelists, and many more luminaries. The New York Times can keep its Pulitzer Prizes. Other lamentations about the death of newspapers show similar signs of being an immune response. These people aren’t stupid or malevolent. They’re the best people in the business, people who are smart, good at their jobs, and well-intentioned. They are, in short, the people who have most strongly internalized the values, norms and collective knowledge of their industry, and thus have the strongest immune response. That’s why the last people to know an industry is dead are the people in it. I wonder if Andrew Rosenthal and his colleagues understand that someone equipped with an RSS reader can assemble a set of news feeds that renders the New York Times virtually irrelevant? If a person inside an industry needs to frequently explain why it’s not dead, they’re almost certainly wrong.

What are the signs of impending disruption?

Five years ago, most newspaper editors would have laughed at the idea that blogs might one day offer serious competition. The minicomputer companies laughed at the early personal computers. New technologies often don’t look very good in their early stages, and that means a straight-up comparison of new to old is little help in recognizing impending disruption. That’s a problem, though, because the best time to recognize disruption is in its early stages. The journalists and newspaper editors who’ve only recognized their problems in the last three to four years are sunk. They needed to recognize the impending disruption back before blogs looked like serious competitors, when evaluated in conventional terms.

An early sign of impending disruption is when there’s a sudden flourishing of startup organizations serving an overlapping customer need (say, news), but whose organizational architecture is radically different to the conventional approach. That means many people outside the old industry (and thus not suffering from the blinders of an immune response) are willing to bet large sums of their own money on a new way of doing things. That’s exactly what we saw in the period 2000-2005, with organizations like Slashdot, Digg, Fark, Reddit, Talking Points Memo, and many others. Most such startups die. That’s okay: it’s how the new industry learns which organizational architectures work, and which don’t. But if even a few of the startups do okay, then the old players are in trouble, because the startups have far more room for improvement.

Part II: Is scientific publishing about to be disrupted?

What’s all this got to do with scientific publishing? Today, scientific publishers are production companies, specializing in services like editorial, copyediting, and, in some cases, sales and marketing. My claim is that in ten to twenty years, scientific publishers will be technology companies [4]. By this, I don’t just mean that they’ll be heavy users of technology, or employ a large IT staff. I mean they’ll be technology-driven companies in a similar way to, say, Google or Apple. That is, their foundation will be technological innovation, and most key decision-makers will be people with deep technological expertise. Those publishers that don’t become technology driven will die off.

Predictions that scientific publishing is about to be disrupted are not new. In the late 1990s, many people speculated that the publishers might be in trouble, as free online preprint servers became increasingly popular in parts of science like physics. Surely, the argument went, the widespread use of preprints meant that the need for journals would diminish. But so far, that hasn’t happened. Why it hasn’t happened is a fascinating story, which I’ve discussed in part elsewhere, and I won’t repeat that discussion here.

What I will do instead is draw your attention to a striking difference between today’s scientific publishing landscape, and the landscape of ten years ago. What’s new today is the flourishing of an ecosystem of startups that are experimenting with new ways of communicating research, some radically different to conventional journals. Consider Chemspider, the excellent online database of more than 20 million molecules, recently acquired by the Royal Society of Chemistry. Consider Mendeley, a platform for managing, filtering and searching scientific papers, with backing from some of the people involved in Last.fm and Skype. Or consider startups like SciVee (YouTube for scientists), the Public Library of Science, the Journal of Visualized Experiments, vibrant community sites like OpenWetWare and the Alzheimer Research Forum, and dozens more. And then there are companies like WordPress, Friendfeed, and Wikimedia, that weren’t started with science in mind, but which are increasingly helping scientists communicate their research. This flourishing ecosystem is not too dissimilar from the sudden flourishing of online news services we saw over the period 2000 to 2005.

Let’s look up close at one element of this flourishing ecosystem: the gradual rise of science blogs as a serious medium for research. It’s easy to miss the impact of blogs on research, because most science blogs focus on outreach. But more and more blogs contain high quality research content. Look at Terry Tao’s wonderful series of posts explaining one of the biggest breakthroughs in recent mathematical history, the proof of the Poincaré conjecture. Or Tim Gowers’ recent experiment in “massively collaborative mathematics”, using open source principles to successfully attack a significant mathematical problem. Or Richard Lipton’s excellent series of posts exploring his ideas for solving a major problem in computer science, namely, finding a fast algorithm for factoring large numbers. Scientific publishers should be terrified that some of the world’s best scientists, people at or near their research peak, people whose time is at a premium, are spending hundreds of hours each year creating original research content for their blogs, content that in many cases would be difficult or impossible to publish in a conventional journal. What we’re seeing here is a spectacular expansion in the range of the blog medium. By comparison, the journals are standing still.

This flourishing ecosystem of startups is just one sign that scientific publishing is moving from being a production industry to a technology industry. A second sign of this move is that the nature of information is changing. Until the late 20th century, information was a static entity. The natural way for publishers in all media to add value was through production and distribution, and so they employed people skilled in those tasks, and in supporting tasks like sales and marketing. But the cost of distributing information has now dropped almost to zero, and production and content costs have also dropped radically [5]. At the same time, the world’s information is now rapidly being put into a single, active network, where it can wake up and come alive. The result is that the people who add the most value to information are no longer the people who do production and distribution. Instead, it’s the technology people, the programmers.

If you doubt this, look at where the profits are migrating in other media industries. In music, they’re migrating to organizations like Apple. In books, they’re migrating to organizations like Amazon, with the Kindle. In many other areas of media, they’re migrating to Google: Google is becoming the world’s largest media company. They don’t describe themselves that way, but the media industry’s profits are certainly moving to Google. All these organizations are run by people with deep technical expertise. How many scientific publishers are run by people who know the difference between an INNER JOIN and an OUTER JOIN? Or who know what an A/B test is? Or who know how to set up a Hadoop cluster? Without technical knowledge of this type it’s impossible to run a technology-driven organization. How many scientific publishers are as knowledgeable about technology as Steve Jobs, Sergey Brin, or Larry Page?
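
For readers who don’t know the JOIN distinction referred to above, here’s a minimal, self-contained sketch using Python’s built-in sqlite3 module. The tables and data are invented for illustration: an INNER JOIN returns only the rows that match in both tables, while a LEFT OUTER JOIN keeps every row from the left table, padding missing matches with NULL.

```python
import sqlite3

# Hypothetical tables: papers, and download counts for some of them.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE papers (id INTEGER, title TEXT)")
cur.execute("CREATE TABLE downloads (paper_id INTEGER, count INTEGER)")
cur.executemany("INSERT INTO papers VALUES (?, ?)",
                [(1, "On Quantum Channels"), (2, "Graph Minors")])
cur.execute("INSERT INTO downloads VALUES (1, 500)")  # no row for paper 2

# INNER JOIN: only papers that have a matching downloads row.
inner = cur.execute(
    "SELECT p.title, d.count FROM papers p "
    "JOIN downloads d ON p.id = d.paper_id ORDER BY p.id").fetchall()

# LEFT OUTER JOIN: every paper, with NULL (None) where downloads are missing.
outer = cur.execute(
    "SELECT p.title, d.count FROM papers p "
    "LEFT OUTER JOIN downloads d ON p.id = d.paper_id ORDER BY p.id").fetchall()

print(inner)  # [('On Quantum Channels', 500)]
print(outer)  # [('On Quantum Channels', 500), ('Graph Minors', None)]
```

The point isn’t the syntax; it’s that this level of fluency with data is table stakes for running a technology-driven organization.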

I expect few scientific publishers will believe and act on predictions of disruption. One common response to such predictions is the appealing game of comparison: “but we’re better than blogs / wikis / PLoS One / …!” These statements are currently true, at least when judged according to the conventional values of scientific publishing. But they’re as irrelevant as the equally true analogous statements were for newspapers. It’s also easy to vent standard immune responses: “but what about peer review”, “what about quality control”, “how will scientists know what to read”. These questions express important values, but to get hung up on them suggests a lack of imagination much like Andrew Rosenthal’s defense of the New York Times editorial page. (I sometimes wonder how many journal editors still use Yahoo!’s human curated topic directory instead of Google?) In conversations with editors I repeatedly encounter the same pattern: “But idea X won’t work / shouldn’t be allowed / is bad because of Y.” Well, okay. So what? If you’re right, you’ll be intellectually vindicated, and can take a bow. If you’re wrong, your company may not exist in ten years. Whether you’re right or not is not the point. When new technologies are being developed, the organizations that win are those that aggressively take risks, put visionary technologists in key decision-making positions, attain a deep organizational mastery of the relevant technologies, and, in most cases, make a lot of mistakes. Being wrong is a feature, not a bug, if it helps you evolve a model that works: you start out with an idea that’s just plain wrong, but that contains the seed of a better idea. You improve it, and you’re only somewhat wrong. You improve it again, and you end up the only game in town. Unfortunately, few scientific publishers are attempting to become technology-driven in this way. The only major examples I know of are Nature Publishing Group (with Nature.com) and the Public Library of Science. 
Many other publishers are experimenting with technology, but those experiments remain under the control of people whose core expertise is in other areas.

Opportunities

So far this essay has focused on the existing scientific publishers, and it’s been rather pessimistic. But of course that pessimism is just a tiny part of an exciting story about the opportunities we have to develop new ways of structuring and communicating scientific information. These opportunities can still be grasped by scientific publishers who are willing to let go and become technology-driven, even when that threatens to extinguish their old way of doing things. And, as we’ve seen, these opportunities are and will be grasped by bold entrepreneurs. Here’s a list of services I expect to see developed over the next few years. A few of these ideas are already under development, mostly by startups, but have yet to reach the quality level needed to become ubiquitous. The list could easily be continued ad nauseam – these are just a few of the more obvious things to do.

Personalized paper recommendations: Amazon.com has had this for books since the late 1990s. You go to the site and rate your favourite books. The system identifies people with similar taste, and automatically constructs a list of recommendations for you. This is not difficult to do: Amazon has published an early variant of its algorithm, and there’s an entire ecosystem of work, much of it public, stimulated by the Netflix Prize for movie recommendations. If you look in the original Google PageRank paper, you’ll discover that the paper describes a personalized version of PageRank, which can be used to build a personalized search and recommendation system. Google doesn’t actually use the personalized algorithm, because it’s far more computationally intensive than ordinary PageRank, and even for Google it’s hard to scale to tens of billions of webpages. But if all you’re trying to rank is (say) the physics literature – a few million papers – then it turns out that with a little ingenuity you can implement personalized PageRank on a small cluster of computers. It’s possible this can be used to build a system even better than Amazon or Netflix.
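
To make the idea concrete, here is a toy sketch of personalized PageRank, not the production-scale system the paragraph above envisions. The key change from ordinary PageRank is that the random surfer “teleports” back to the papers a particular reader has rated highly, rather than to a uniformly random page. The citation graph, the reader’s ratings, and the parameter values below are all invented for illustration.

```python
def personalized_pagerank(links, preferences, damping=0.85, iters=50):
    """links[p] = list of papers that p cites; preferences = teleport weights."""
    papers = list(links)
    total = sum(preferences.get(p, 0.0) for p in papers)
    teleport = {p: preferences.get(p, 0.0) / total for p in papers}
    rank = {p: 1.0 / len(papers) for p in papers}
    for _ in range(iters):
        # Teleportation step: biased toward the reader's preferred papers.
        new = {p: (1 - damping) * teleport[p] for p in papers}
        for p, cited in links.items():
            if cited:
                # Each paper passes its rank evenly to the papers it cites.
                share = damping * rank[p] / len(cited)
                for q in cited:
                    new[q] += share
            else:
                # Dangling paper: redistribute its rank via the teleport vector.
                for q in papers:
                    new[q] += damping * rank[p] * teleport[q]
        rank = new
    return rank

# A four-paper citation graph, with the reader's interest centred on paper A.
links = {
    "A": ["B", "C"],  # A cites B and C
    "B": ["C"],
    "C": [],
    "D": ["A"],
}
ranks = personalized_pagerank(links, preferences={"A": 1.0})
```

In this toy graph the ranking concentrates around paper A and the papers it cites, while paper D, which nothing cites and which the reader hasn’t rated, falls toward zero – exactly the personalization effect described above, just at miniature scale.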

A great search engine for science: ISI’s Web of Knowledge, Elsevier’s Scopus and Google Scholar are remarkable tools, but there’s still huge scope to extend and improve scientific search engines [6]. With a few exceptions, they don’t do even basic things like automatic spelling correction, good relevancy ranking of papers (preferably personalized), automated translation, or decent alerting services. They certainly don’t do more advanced things, like providing social features, or strong automated tools for data mining. Why not have a public API [7] so people can build their own applications to extract value out of the scientific literature? Imagine using techniques from machine learning to automatically identify underappreciated papers, or to identify emerging areas of study.

High-quality tools for real-time collaboration by scientists: Look at services like the collaborative editor Etherpad, which lets multiple people edit a document, in real time, through the browser. They’re even developing a feature allowing you to play back the editing process. Or the similar service from Google, Google Docs, which also offers shared spreadsheets and presentations. Look at social version control systems like Git and GitHub. Or visualization tools which let you track different people’s contributions. These are just a few of hundreds of general purpose collaborative tools that are light-years beyond what scientists use. They’re not widely adopted by scientists yet, in part for superficial reasons: they don’t integrate with things like LaTeX and standard bibliographical tools. Yet achieving that kind of integration is trivial compared with the problems these tools do solve. Looking beyond, services like Google Wave may be a platform for startups to build a suite of collaboration clients that every scientist in the world will eventually use.

Scientific blogging and wiki platforms: With the exception of Nature Publishing Group, why aren’t the scientific publishers developing high-quality scientific blogging and wiki platforms? It would be easy to build upon the open source WordPress platform, for example, setting up a hosting service that makes it easy for scientists to set up a blog, and adds important features not present in a standard WordPress installation, like reliable signing of posts, timestamping, human-readable URLs, and support for multiple post versions, with the ability to see (and cite) a full revision history. A commenter-identity system could be created that enabled filtering and aggregation of comments. Perhaps most importantly, blog posts could be made fully citable.

On a related note, publishers could also help preserve some of the important work now being done on scientific blogs and wikis. Projects like Tim Gowers’ Polymath Project are an important part of the scientific record, but where is the record of that work going to be stored in 10 or 20 years’ time? The US Library of Congress has taken the initiative in preserving law blogs. Someone needs to step up and do the same for science blogs.

The data web: Where are the services making it as simple and easy for scientists to publish data as it is to publish a journal paper or start a blog? A few scientific publishers are taking steps in this direction. But it’s not enough to just dump data on the web. It needs to be organized and searchable, so people can find and use it. The data needs to be linked, as the utility of data sets grows in proportion to the connections between them. It needs to be citable. And there needs to be simple, easy-to-use infrastructure and expertise to extract value from that data. On every single one of these issues, publishers are at risk of being leapfrogged by companies like Metaweb, who are building platforms for the data web.

Why many services will fail: Many unsuccessful attempts at implementing services like those I’ve just described have been made. I’ve had journal editors explain to me that this shows there is no need for such services. I think in many cases there’s a much simpler explanation: poor execution [8]. Development projects are often led by senior editors or senior scientists whose hands-on technical knowledge is minimal, and whose day-to-day involvement is sporadic. Implementation is instead delegated to IT-underlings with little power. It should surprise no one that the results are often mediocre. Developing high-quality web services requires deep knowledge and drive. The people who succeed at doing it are usually brilliant and deeply technically knowledgeable. Yet it’s surprisingly common to find projects being led by senior scientists or senior editors whose main claim to “expertise” is that they wrote a few programs while a grad student or postdoc, and who now think they can get a high-quality result with minimal extra technical knowledge. That’s not what it means to be technology-driven.

Conclusion: I’ve presented a pessimistic view of the future of current scientific publishers. Yet I hope it’s also clear that there are enormous opportunities to innovate, for those willing to master new technologies, and to experiment boldly with new ways of doing things. The result will be a great wave of innovation that changes not just how scientific discoveries are communicated, but also accelerates the pace at which those discoveries are made.

Notes

[1] We’ll focus on blogs to make the discussion concrete, but in fact many new forms of media are contributing to the newspapers’ decline, including news sites like Digg and MetaFilter, analysis sites like Stratfor, and many others. When I write “blogs” in what follows I’m usually referring to this larger class of disruptive new media, not literally to conventional blogs, per se.

[2] In a way, it’s ironic that I use the New York Times as an example. Although the New York Times is certainly going to have a lot of trouble over the next five years, in the long run I think they are one of the newspapers most likely to survive: they produce high-quality original content, show strong signs of becoming technology driven, and are experimenting boldly with alternate sources of content. But they need to survive the great newspaper die-off that’s coming over the next five or so years.

[3] In an earlier version of this essay I used the figure 1,000 dollars. That was sloppy – it’s certainly too high. The actual figure will certainly vary quite a lot from paper to paper, but for a major newspaper in a big city I think on the order of 200-300 dollars is a reasonable estimate, when all costs are factored in.

[4] I’ll use the term “companies” to include for-profit and not-for-profit organizations, as well as other organizational forms. Note that the physics preprint arXiv is arguably the most successful publisher in physics, yet is neither a conventional for-profit nor a not-for-profit organization.

[5] This drop in production and distribution costs is directly related to the current move toward open access publication of scientific papers. This movement is one of the first visible symptoms of the disruption of scientific publishing. Much more can and has been said about the impact of open access on publishing; rather than review that material, I refer you to the blog “Open Access News”, and in particular to Peter Suber’s overview of open access.

[6] In the first version of this essay I wrote that the existing services were “mediocre”. That’s wrong, and unfair: they’re very useful services. But there’s a lot of scope for improvement.

[7] After posting this essay, Christina Pikas pointed out that Web of Science and Scopus do have APIs. That’s my mistake, and something I didn’t know.

[8] There are also services where the primary problem is cultural barriers. But for the ideas I’ve described cultural barriers are only a small part of the problem.

Acknowledgments: Thanks to Jen Dodd and Ilya Grigorik for many enlightening discussions.

About this essay: This essay is based on a colloquium given June 11, 2009, at the American Physical Society Editorial Offices. Many thanks to the people at the APS for being great hosts, and for many stimulating conversations.

Further reading:

Some of the ideas explored in this essay are developed at greater length in my book Reinventing Discovery: The New Era of Networked Science.

You can subscribe to my blog here.

My account of how industries fail was influenced by and complements Clayton Christensen’s book “The Innovator’s Dilemma”. Three of my favourite blogs about the future of scientific communication are “Science in the Open”, “Open Access News” and “Common Knowledge”. Of course, there are many more excellent sources of information on this topic. A good source aggregating these many sources is the Science 2.0 room on FriendFeed.

200 Comments
  1. It’s not unreasonable to suggest that this essay would be even better if its objectives, and its conclusions, were more radical.

    The point is that the conclusions are not particularly novel. For example, we find the following account in Robert Burgelman’s classic analysis of Intel’s decision-making process, Strategy is Destiny:

    “One of the toughest challenges is to make people see that self-evident truths are no longer true. I recall going to see Gordon [Moore] and asking what a new management would do if we were replaced. The answer was clear: get out of DRAM [computer memory]. So, I suggested to Gordon that we go out through the revolving door, come back in, and do it ourselves.”

    and

    Intel’s transformation illustrates the importance of strategy-making as an adaptive organizational capability, that is, a capability that transcends the traditional view of top management as the prime mover of strategy-making. … The evolutionary path of transformation is seldom clearly envisioned ex ante.

    In order to draw conclusions that are more novel, this essay (IMHO) needs to embrace objectives that are more radical.

    Instead of predictive essays about the “future of science”, perhaps what is needed are prescriptive essays … because isn’t the future of science going to be something that we design and create (as individuals, and as communities, and as a planet)?

    The idea that the future of science is something that will “just happen” seems (to me) to be inadequate to humanity’s urgent needs and pressing challenges. That is why I hope that Michael will consider including (at least some) explicitly prescriptive elements in The Future of Science.

  2. Andy McGregor permalink

    Excellent post Michael, sums up the area well and in particular I found the organisational architectures section really useful for thinking about how change is affecting various industries.

    I work for an agency in the UK called JISC (http://www.jisc.ac.uk) that is set up to support education and research in higher education by promoting innovation in new technologies.

    We fund a lot of innovation projects in the area of scholarly communication, and it struck me, reading the post, how much of the work we are funding relates to the points you make in your article.

    We have funded 40 rapid innovation projects: these are short, agile projects that have just started and are designed to experiment and try out solutions to user problems, similar to the start-ups you mention in your post. A couple which sprang to mind as I read your post, and which are worth mentioning, are a way to manage and publish data sets which uses a notecard metaphor: http://www.jisc.ac.uk/whatwedo/programmes/inf11/shuffl.aspx and a way to use Bayesian filtering to help find journal articles of interest: http://www.jisc.ac.uk/whatwedo/programmes/inf11/personalisingalerts.aspx There are many more that are relevant to the areas you address in your article, and you can read more about these projects at http://code.google.com/p/jiscri/ as they develop over the next few months.

    Thinking about scholarly communication models more generally, we have recently released a report which examined the economic implications of various models and found open access represents a better “local optimum”: http://www.jisc.ac.uk/news/stories/2009/01/houghton.aspx . We are discussing further work in this area with publishers designed to envisage the future for scholarly communications.

    We have also been thinking about the preservation implications for blogs, and funded a project called PoWR to study the preservation of web artefacts in general. This has led on to a short project called ArchivePress to investigate using an installation of WordPress to preserve other blogs: http://jiscpowr.jiscinvolve.org/2009/06/24/archivepress-when-one-size-doesnt-fit-all/

    Apologies that this has turned into a lengthy, link-filled post, but this is an exciting area to be involved in at the moment and developments are coming thick and fast.

  3. As a follow-on to the above, the earliest essay on the topic of “the future of science” that I find in my database is Robert Boyle’s 1661 essay The Sceptical Chymist, or Chymico-Physical Doubts & Paradoxes, which is freely available on Project Gutenberg.

    The impact of Boyle’s then-radical views upon the general population is vividly rendered in Joseph Wright’s painting An Experiment on a Bird in the Air-Pump.

    A singular advantage of studying Boyle and his essays is that we have 348 years of follow-up. :)

  4. Thoroughly insightful and thought provoking. I follow many journalism-and-technology blogs and your analysis sliced to the heart of the matter more than most articles, especially the local optimum observation.

    However, I disagree with “there’s no way you can get to that new optimum without passage through [hell].” There’s a well-established way for old media organizations to set up a base camp in the future: Establish a new media skunk works.

    Basically, you equip one or more groups of your smartest people (and a few crazy smart ones) and push them off the local optimum cliff to see what new optimum they discover.

    This has worked since the 1940s for “big iron” companies to break out of the mold. Lockheed used it to rapidly develop fighter planes. Motorola used it to develop their Razr cellphone. IBM used it to develop the PC. Apple has used skunk works multiple times, first to develop the Mac and then the iPhone. See: http://www.economist.com/businessfinance/management/displaystory.cfm?story_id=11993055

    Every big media company should be aggressively seeding the nearby ground with micro-ventures that extend their reach. Even if many fail, the one that succeeds will give them competitive advantage and hope for the future, as you suggested.

  5. S. Jones – I’m in complete agreement. The Mac or (say) Nokia are very interesting and unusual cases where organizations did manage to reinvent themselves, using the approach you describe. So far as I know, that’s the only approach that works. However, the process is very hard on the organization — Nokia’s move from being primarily a rubber company to a telecommunications company was, obviously, not easy on their workforce. My understanding is that the Mac was also very tough on Apple internally, due to conflicts between old and new. This approach also requires the new venture to be thoroughly insulated from the old, otherwise it will be hard to resist the temptation to water down the new product to preserve the old business model.

  6. Andy (#75): No apology necessary for the links – that’s exciting stuff!

  7. Your take on scientific publishing is insightful, but I’m not sure your analysis of the newspaper business is quite as on target.
    My first issue is with the application of scientific theories/ideas to business and/or society. It sounds great to talk about immune systems, evolutionary behavior, chaos theory, local maxima, etc., but those analogies are almost never actionable in a non-scientific context. I wince, for example, when I think of the damage the book “The Tao of Physics” did to a generation of non-scientists by convincing them a) that they understood quantum mechanics, and b) that it applied to or solved personal, spiritual and societal problems.
    My second involves a big swing of Occam’s razor. Using complex scientific theories with catchy names to explain something as simple (and timeless) as the lifecycle of a business introduces unnecessary assumptions. Isn’t it as simple as this: successful businesses that have something to conserve (e.g. the NYT) act conservatively, and startups with nothing to lose (TechCrunch) take big risks? The (very) few startups that succeed get big, end up with something to conserve, become conservative, and then don’t take the risks required to dominate the next innovation cycle. And so it goes, over and over again.
    I don’t think attaching sexy names to this process gives businesspeople any actionable insights, but it does sell books (evidence Mr. Gladwell and his blinking tipping points) and employ McKinsey consultants. I would moreover suggest that it actually does a disservice by confusing catch-phrases with understanding. To violate my own rule about using scientific analogies, it’s the difference between botany and biology.

  8. Nick Mowat permalink

    Thanks Michael for an interesting and thought provoking article. Just a quick question. In today’s scientific community, grant funding works in concert with the publishing world using established ranking systems (Impact Factors) to score the researcher’s output and this in turn determines future support.

    Once Pandora’s box has been opened, what are the implications for the research assessment process? Is this one of the immune responses you mention? (Researchers will still need to play the existing system as they need funding.) In your vision of a much more ‘open’ world, how will research assessment be performed in a rigorous and mutually acceptable way?

  9. Well written, Michael. This is Clayton M. Christensen’s Disruptive Technology coming to the scientific publishing space.

    Like Mendeley and others you mentioned, we hope we’re on the right side of this transition. Check us out, I’m one of the founders: http://www.pubget.com [Warning! shameless plug]

  10. So far, no-one has commented upon what is arguably the most disruptive aspect of scientific publishing: the inexorable expansion of its scale.

    How many scientific articles contain the word “insulin” in their title or abstract? A PubMed search presently finds 206,607 such articles, and the present publication rate stands at about 30 more articles per day.

    How many biological molecules are as interesting as insulin? Surely more than 10^3 molecular species … presumably less than 10^8 … if we take the geometric mean of these bounds, we conclude that the scientific literature on biomedically interesting molecules will grow to the informatic equivalent of (say) 10^11 of today’s articles.

    Let’s cost-out these research articles at 10^4 dollars each: the required net investment thus is about 10^15 dollars. Assuming a peaceful, prosperous planet with 10^10 people on it and a GDP of 10^4 dollars per capita, the total investment is about 10 years of planetary GDP … which (when you think about it) is a wonderfully prudent and economic investment! :)
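    The commenter's back-of-envelope arithmetic can be checked in a few lines of Python. All figures below are the comment's own order-of-magnitude assumptions, not measured data:

```python
from math import sqrt, log10

# The comment's assumptions, as stated above.
articles_per_molecule = 2e5        # ~206,607 insulin articles on PubMed
molecules = sqrt(1e3 * 1e8)        # geometric mean of the 10^3..10^8 bounds
total_articles = articles_per_molecule * molecules

cost_per_article = 1e4             # dollars per research article
total_cost = total_articles * cost_per_article

planetary_gdp = 1e10 * 1e4         # 10^10 people at 10^4 dollars per capita
years_of_gdp = total_cost / planetary_gdp

print(f"articles ~ 10^{log10(total_articles):.0f}")      # ~ 10^11
print(f"cost     ~ 10^{log10(total_cost):.0f} dollars")  # ~ 10^15
print(f"about {years_of_gdp:.0f} years of planetary GDP")
```

    The literature and cost estimates land on the comment's 10^11 and 10^15 figures; the final ratio comes out at several years of planetary GDP, i.e. the "about 10 years" of the comment to within the precision of the inputs.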

    Enterprises of this magnitude are (in my view) more than possibilities: their enabling technical foundations are in place. Consequently there is a growing global appreciation that (to paraphrase George Marshall) “the ends are not yet clearly in sight but victory is certain”, that victory being (among other goals) a comprehensive understanding of our planetary biome.

    The point of this essay is simple: people who think about the future of science should be thinking big—much bigger than any previous century has thought. Because pretty obviously, there’s an exciting century ahead, for everyone.

  11. The past few posts are an example of the internet waking up and discussing itself along the lines that Ryan North of Dinosaur Comics has been discussing. Fun! :)

    I recently had occasion to typeset a book for a very particular, very knowledgeable author, and this opened my eyes to Donald Knuth’s radical contribution to scientific publishing — that contribution being to prevent traditions of scientific publishing from changing.

    Isn’t this the core design philosophy of TeX/LaTeX: to achieve by digital means the same typographic results that a highly skilled typesetter can achieve with leaded type?

    If one reads the pre-Knuth Chicago Manual of Style (which is an encyclopedia of bookmaking techniques) it is evident that Knuth’s TeX has played an essential role in preventing the extinction of this centuries-old bookmaking culture.

    From this point of view, the great contribution of TeX/LaTeX has been to prevent scientific publication from changing.

  12. Tim Arnold permalink

    Just a note about LaTeX and scientific writing. The python framework plasTeX can be set up for a journal class to produce XML or HTML; if you produce say, DocBook XML from a LaTeX article there are many avenues you can take from there for searching or converting the sources. I’ll be presenting a paper on it at the TeX User’s Group conference later this month.

  13. Tim, a wonderful (and seminal) article about scientific publishing in general, and typography in particular, is Donald Knuth’s 1979 article Mathematical Typography (Bulletin of the AMS, Vol 1(2), p.337).

    One take-home lesson is that Knuth painstakingly deconstructs the styles of no less than twelve typographic generations of the Transactions of the American Mathematical Society.

    What I worry about is that people are developing modern mark-up languages without Knuth’s painstaking attention to quality and detailed review tradition.

    Some of this loss of aesthetic quality may be inevitable — for the reason that computer screen real-estate is rubbery in a way that paper/parchment isn’t.

  14. Tim Arnold permalink

    @John, I totally agree; I see only cursory attention given to mathematical layouts. MathML, as far as I can tell, leaves out the beautiful and necessary align, multline, etc., vertical equation environments provided via the AMS LaTeX styles.

    I suppose you’re right that some of that is inevitable. What we gain in openness and interoperability will hopefully offset the losses in beauty. But hopefully we’ll not see a loss in readability!
    thanks,

  15. This post is so helpful to individuals like myself who are not scientists but want to help innovate to make the web work for science.

    My blog post on our OSTI.gov weblog asks the same question of the US DOE about opportunities for complementing electronic documents with web content in the form of science blogs.

    I’m proud to work for the US Dept of Energy, which is an important contributor to published scientific information, and my employer, OSTI, is working on many of the opportunities raised in Michael’s post.

    I hope Michael Nielsen will include OSTI.gov in future discussions. We are not a large office but our work is global. Consider WorldWideScience.org and Science.gov. We strive to adopt open innovation standards for research and libraries. Compare our Eprint Network, OAI, and MARC records services. Indeed, we are addressing the opportunities for data and search. Compare OSTI’s DOE Data Explorer web application and compare the depth of our federated search results to that of surface crawlers like Google.

  16. In your article, you call for the development of blogging infrastructure for science. Readers may want to take a look at what Seed Media has been doing with their Research Blogging platform http://researchblogging.org/ (I’m not affiliated with them in any way, but I’ve admired their implementation of OpenURL linking).

    Your optimization physics neglects that localization is much more difficult in multidimensional spaces, and is thus overly pessimistic. The more dimensions you have, the more likely there is to be a path from one pseudo-maximum to another. Fifteen years ago, I truly thought that STM publishing was about to collapse, but I was wrong. STM publishers have in fact become more like technology companies (progress coming one gold watch at a time!) and have shifted their businesses and revenue streams predominantly onto the web.
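    The dimensionality point can be illustrated with a toy model (my construction, not the commenter's): on a landscape whose heights are independent random draws, a grid point with 2d neighbours is a local maximum only when its height beats all 2d of them, which happens with probability 1/(2d + 1), so traps become rarer as the dimension grows:

```python
import random

def p_local_max(d, trials=20000, seed=0):
    """Estimate the probability that a grid point whose height and whose
    2*d neighbours' heights are i.i.d. uniform draws is a local maximum."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # heights[0] is the point itself; the rest are its 2*d neighbours.
        heights = [rng.random() for _ in range(2 * d + 1)]
        if heights[0] == max(heights):
            hits += 1
    return hits / trials

for d in (1, 2, 10, 50):
    # The estimate falls roughly as 1/(2*d + 1) as the dimension d grows.
    print(d, round(p_local_max(d), 3))
```

    This is only a crude sketch of why high-dimensional landscapes are less trapping than low-dimensional intuition suggests; real fitness landscapes have correlated heights, but the qualitative effect points the same way.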

    I also disagree that scientific publishing is facing the same precipice that Newspapers are facing. Newspapers are in trouble because their traditional income streams are disappearing. Scientific publishers will face a comparable situation only if the people who send them checks start sending the checks somewhere else.

  17. Great stuff, Michael. It is interesting and very well written. Of course, it is not a new notion, but it is about the best-expressed version of it that I have seen. I don’t really disagree with any of it.

    For me there are two main strands to this: 1) the what? and 2) the how? Translating into “old publishing”, that would be “the editorial” and “the production”, if you like.

    1) The What? It is true that traditional publishers go on about the peer review process, quality control and filtering. But I do see this as still a genuinely important role. Certainly our authors think it is, and I would put good money on the fact that when it comes to research findings, they would still prefer to read/trust an article from Proceedings B or Nature over one just posted on some random blog. However, this is not a reason for publishers to be complacent, certainly. Just that, I suspect the more visionary types are worrying about this a good deal earlier than are the scientists themselves (which of course is as it should be). Still, having said that, you don’t really say what would replace peer review in the new age. Imperfect as it is, it still provides scientists with the reassurance they need when reading research. I guess much of your argument is predicated on the notion that publishers are over-engineering their content (i.e. the staff photographer getting the Pulitzer prize image, instead of a quick shot from a mobile phone). We appear to be somewhat hung up on the idea of “adding value” (greatly amplified due to the attacks from the Open Access lobby, of course), which is somewhat in conflict with the idea that a “quick and dirty” report of a discovery on a blog may be much more what the user wants than a finely honed research article taking several months to come out.

    2) The How? Meaning what form of “deliverable” are publishers in the business of? This can’t really be separated meaningfully from the What? nowadays, I guess. But we do need to start thinking much more creatively about what it is that we actually provide. Or even if we “provide” at all (in the sense of a deliverable unit of information). Instead, we might provide a window or a filter on user generated content. A kind of Google with detailed subject knowledge. Though I was very taken with your question about Yahoo’s “topic directory” and whether it is any better than Google’s page rank system.

  18. A good example of disruption along related lines is Wiley Custom Select, underpinned by the MarkLogic XML database.

    The Mark Logic CEO provides a good overview here:

    http://marklogic.blogspot.com/2009/04/wiley-launches-wiley-custom-select.html

    I like this article’s general premise, but there is a rather large gap in its coverage: the ‘iceberg under the waterline’ that underpins most of these new routes is the continuing adoption of XML and supporting technologies.

    Smarter newspapers and publishing firms knew this years ago.

  19. Great blog post Michael. Have to say I loved the “flourishing ecosystem” part.

  20. SudarshanP permalink

    Is it not interesting that what I am reading right now IS A BLOG!!! I did not pay a penny, and I came to this site through http://news.ycombinator.com/ and surprisingly this website and news.yc DO NOT HAVE ADS!!! Still neither paulgraham nor michaelnielsen are getting poorer!!! Their essays are taking them higher and simultaneously making the world a more interesting place. On the other hand NewYorkTimes DRIVES ME AWAY demanding registration before viewing their “precious” stuff!!! THE OSTRICHES can bury their necks really deep and assume reality is a myth

  21. Barb Holand permalink

    What an insightful and thoughtful article. I loved how the author started with the story of Kongo Gumi and expanded his thoughts on the transformative time we live in. “Publish or Perish” has been a given in the world of academia for a long time, but through technology, that staid institution is being overtaken and I would hope the playing field of scientific research will be more level and therefore, more progressive.

  22. Michael Sestak permalink

    With 138 posts ahead of me, I’m surprised no one has mentioned (or is it just too obvious) that the world-wide web (not the internet, the www) was invented to improve scientific collaboration and access to scientific information … and, as this article demonstrates, it’s working!

    Also, it seems to me that to be disruptive, a paradigm has to be not at a new relative maximum, but in the valley between, or partially up the slope of, the new, potentially higher relative maximum. Of course, this means it really is like evolution: we won’t know whether one of these new scientific information propagating techniques is a successful adaptive strategy until it has succeeded.

    And as mentioned in post #79, to get across that valley of “hell” organizations have kicked off parts into the valley to test whether that slope is indeed heading toward a better relative maximum, or not.

  23. JohnM permalink

    Why would an academic publish in a low-status medium (a blog), for which little if any credit can be claimed, when you can publish in an open access journal (http://www.doaj.org/) or archive and get credit in the academic world? The solution is open access archives and open publishing (see http://openaccess.athabascau.ca/ for an overview). The growth of these would suggest that academics have already figured this out.

    Blogs and Wikipedia are not for original research but for translating original research into publicly accessible formats. Much is and should be lost in the process, as the audience has changed. Original academic research is written for academics in the field, which is why it is generally unintelligible to those outside a field. Technology doesn’t change the audience, but populist technologies do allow a new audience to be sought. Keeping these audiences separate is necessary for academics. Smart academics realize they need to use populist technologies to expand the reach of their work, but it will never get them promotion or tenure.

Trackbacks and Pingbacks

  1. Week 5 Rewriting to enhance ‘scannability’ « Virginia Krumins Blog
  2. Is scientific publishing about to be disrupted? « Harleymac's Blog
  3. Week 5 « Zac Holly (3304573)
  4. Um passo pra trás para dar dois passos para frente | Blog Pra falar de coisas
  5. Não, não, não… de novo | Blog Pra falar de coisas
  6. Week 5: Digital Readability: Part 2 « COMM1218
  7. Web readability – task 2 « Electro Pub Thoughts
  8. Writing for the web: article summary « Text Grand Central
  9. Writing for the web: article summary « Avneets first Blog
  10. Quora
  11. An Interview With Jane Friedman About “The Future Of Publishing: The Enigma Project” E-book
  12. IHS Shows Where STM Value Goes | Blogos
  13. Quora
  14. Mietchen, Pampel & Heller: Criteria for the Journal of the Future « beyondthejournal.net
  15. Quora
  16. Future Ready: The Pace of Change for Technology and Culture | Future Ready 365
  17. By Michael Nielsen: A good example is the popular technology blog TechCrunch, by most measures one of the top 100 blogs in the world. Started by Michael Arrington in 2005, TechCrunch has rapidly grown, and now employs a large staff. Part of the reason it
  18. Is Everything Disruptive? Not So Fast! | The Passive Voice
  19. Open Access: a short summary | Michael Nielsen
  20. Sheldon may play dice, but scientific publishing cannot be left to chance | Quantum Pie with Krister Shalm
  21. Michael Nielsen: Doing Science In the Open at the University Campus in Rijeka, Croatia | InTechWeb Blog
  22. An Exercise in Irrelevance » Blog Archive » Kcite, Greycite and Kblog-metadata
  23. The Future of Open Access Publishing is Free to Publish, Free to Read | Prasetyo Anggono
  24. In Defense of Social Media (At Least Some Of It) - O'Reilly Radar
  25. Scientific Publishing: Disruption and Semantic Build-Up « FrankHellwig.com
  26. Social Media Trends for 2010 | Heidi Allen
  27. Developing a Digital Strategy 002 – Current trends | Beyond Digital Strategy

Comments are closed.