My new essay on the use of digital media to explain scientific ideas is here.
I am delighted to announce that the first chapter of my book “Neural Networks and Deep Learning” is now freely available online here.
The chapter explains the basic ideas behind neural networks, including how they learn. I show how powerful these ideas are by writing a short program which uses neural networks to solve a hard problem — recognizing handwritten digits. The chapter also takes a brief look at how deep learning works.
Finally, if you’ve read this far I hope you’ll consider supporting my Indiegogo campaign for the book, which will give you access to perks like early drafts of later chapters.
In September of 2012, a team of scientists released a photograph showing the most distant parts of the Universe ever seen by any human being. They obtained the photograph by pointing the Hubble Space Telescope at a single tiny patch of sky, gradually building up an image over a total of 23 days of observation. It’s an otherwise undistinguished patch of sky, within the little-known constellation Fornax. It’s less than one hundredth the size of the full moon, and appears totally empty to the naked eye. Here’s what’s seen with the Hubble Telescope:
This image is known as the Hubble Extreme Deep Field. It contains 5,500 galaxies, each of which, in turn, contains billions of stars. This region of sky ordinarily looks empty because these galaxies are far too dim to be seen with the naked eye. Some of the galaxies are more than 13 billion light years away. We are seeing them as they were just a few hundred million years after the big bang, near the dawn of time. If you have the eyes to see, this is what deep space and the early history of the universe look like.
One of the many striking things about the Hubble Extreme Deep Field is that it’s beautiful. Another work revealing beauty in nature is a 1999 art piece by a Dutch-Canadian artist named Juan Geuer. In a darkened room, Geuer shone orange laser light through a single droplet of water, projecting the light onto a wall. The resulting play of light is spectacular. We see the changing internal structure of the droplet, and the interplay of surface tension and gravity, a marvellous mix of refraction, reflection, and diffraction. Here’s a recording of Geuer’s art piece, known as Water in Suspense. (Some browsers don’t start the video at the right moment — move to 13:33 in the recording if that’s the case).
It’s not a typical action-packed online video. It moves slowly, slowly,… and then WHOOSH, a sudden, rapid change. I suggest watching it for at least a minute or two. During that time you’ll start to learn how to watch the droplet. Indeed, getting a feel for the multiple timescales at which the droplet changes is part of what we learn from the work.
Water in Suspense reveals a hidden world. We discover a rich structure immanent in the water droplet, a structure not ordinarily accessible to our senses. In this way it’s similar to the Hubble Extreme Deep Field, which also reveals a hidden world. Both are examples of what I call Super-realist art, art which doesn’t just capture what we can see directly with our eyes or hear with our ears, but which uses new sensors and methods of visualization to reveal a world that we cannot directly perceive. It’s art being used to reveal science.
Although I’m not an artist or an art critic, I find Super-realist art fascinating. Works like the Hubble Extreme Deep Field and Water in Suspense give us concrete, vivid representations of deep space and the interior structure of a water droplet. For most of us, these are usually little more than dry abstractions, remote from our understanding. By creating vivid representations, Super-realist art provides us with a new way of thinking about such phenomena. And so in this essay I explore Super-realist art: what it is, how it relates to other modes of art, and why it is flourishing today.
I’m a technologist and scientist, a person whose main interests are in areas such as machine learning, data science, and, especially, the development of cognitive tools (i.e., tools to extend human thought). It may seem strange that I’m writing an essay about art. But thinking about technology from this viewpoint gives a perspective that’s radically different to the usual take. It’s a flip side, an artistic perspective that sees technology and, in particular, Super-realism, as a means to discover new aesthetics and new representations of reality. That means Super-realism is not just of interest to artists, but can also inform technologists and scientists, giving them a new way of thinking about how we can develop tools that extend human thought.
In discussing Super-realism I will ignore many of the usual labels we use to understand art. Instead, I’ll develop a caricature which divides much of Western art into just three categories: Realism, Post-realism, and Super-realism. I believe this way of thinking illuminates Super-realism, and ask the indulgence of any experts on art history who happen to be reading.
Until the latter part of the 19th century, Realism was one of the main aspirations of much Western art. Realist art aimed at capturing the reality accessible to our senses. My favourite Realist artist is Rembrandt:
Such paintings seem so true-to-life that it is as though the artist has captured an element of objective reality. Suppose aliens visited Earth after a cataclysm wiped out all traces of humanity, except for a few Rembrandt self-portraits that somehow survived. I believe the aliens could understand much about both Rembrandt and humanity from those portraits, including perhaps even something of Rembrandt’s personality, his habits, his likes, and his dislikes.
But even the most talented Realist artists capture only a tiny slice of reality. Rembrandt’s self-portraits, as real as they appear, omit nearly all the biological and mental processes going on in Rembrandt’s body and mind. Suppose, for example, we could zoom in on his face, seeing it at 10 times magnification, then 100, then 1,000, and so on. At each level of magnification a new world would be revealed, of pores and cells and microbes. And there is so much structure to understand in each of these worlds, from the incredibly complex ecology of microbes on the skin, to the inner workings of cells. Consider this video simulating the inside of a cell:
This is not to criticise Rembrandt and the other Realists. On their own terms, they succeeded. But what Super-realism shows is that we can see far deeper into the world, and find great beauty in so doing.
In the 20th century Western art moved away from Realism, and Post-realist art became increasingly influential. To describe how Super-realism relates to Post-realism I’ll use the work of Monet. Monet was an early Post-realist painter, one working at the boundary between Realism and Post-realism. Here’s one of Monet’s great works, the "Water Lilies" triptych, housed in the Museum of Modern Art in New York:
Unfortunately, when viewed on a computer screen or as a small print, this triptych looks washed out and insipid, a poor imitation of the Realists. But when you see the original — 6 and a half feet high, and 42 feet long — the effect is jaw-dropping. Monet’s genius, and what made his painting so overwhelming, was a deep understanding of how humans see. He was a master illusionist who understood how to trick the viewer’s brain into seeing something much, much better than a realistic rendition. This only works at scale. When "Water Lilies" is shrunk to print size, the psychology of perception changes, and the illusion vanishes. I don’t understand why this is. I’ve read something of the theory of Monet and his fellow Impressionists, and don’t find those explanations convincing. What I do know is that Monet’s shimmering pond loses its glory. It looks fake, less than a literal rendition, merely a washed-out print. But somehow, in the original, Monet was able to use his understanding of human perception to evoke water lilies in a fashion more spectacular than a Realist painting could ever have achieved.
In Post-realism, art isn’t about directly representing an objective, independent reality. Instead, it’s about creating new types of relationship between the art piece and the human mind. Monet and his fellow Impressionists were on the boundary between Realism and Post-realism, since their works still aimed to evoke an objective, independent reality, albeit not by a direct representation. But Post-realism was pushed much further by later artists, such as Picasso, M. C. Escher, and Kandinsky:
These works gradually shift away from representing reality, and toward exploring the relationship between art piece and viewer. Escher, for example, used quirks in the human visual system to evoke impossible realities. Abstract artists such as Kandinsky take this idea still further, and it is developed to an extreme in the work of abstract minimalists such as John McLaughlin, a pioneer in what is known as Hard-edge painting:
At first glance this (and much of McLaughlin’s other work) looks ludicrous. But it makes more sense when you understand that McLaughlin was exploring how people respond to art. His earlier works were more complex. But he gradually simplified his work as he realized that people can have novel responses even to simple stimuli.
Abstract minimalism annoys many people. The approach can and has led to fakery and foolishness on the part of some artists. But my point here is neither to criticise nor to praise. Instead, it’s to understand what artists such as McLaughlin were up to. Whereas the Realist artists believed that virtuosity came from depicting part of reality with high fidelity, in Post-realism virtuosity may come from creating a novel relationship between art piece and viewer.
Super-realism changes art again. Virtuosity becomes about revealing hidden worlds and discovering new aesthetics. Take a look at the following video, which shows — in extreme slow motion! — a packet of light passing through a bottle. Ordinarily that passage takes less than a nanosecond, but the video slows the passage down by a factor of 10 billion, so we can see it:
In a way, Super-realism is a return to the aims of Realism, back to representing reality. But what’s new about Super-realism is the challenge (and opportunity!) to find ways of representing previously unseen parts of the world.
Most of my examples of Super-realist art have come from people who regard themselves mainly as scientists, not as artists. However, there are many artists doing Super-realist art. I’ve drawn my examples mostly from scientists merely because I’ve found that artists’ work is often more exploratory, and thus requires more background explanation, making it less suited for an essay such as this.
Although Super-realism’s flourishing is recent, its origins go back centuries. In 1610, Galileo published a short book entitled Starry Messenger (Sidereus Nuncius), which contained several images showing the moon in unprecedented detail, as observed by Galileo through one of his new telescopes:
In a similar vein, in 1665 Robert Hooke published a bestselling book entitled Micrographia, which used the newly-invented microscope to reveal the world of the very small. Hooke’s book included the first ever drawings of biological cells, as well as drawings showing animals such as fleas in unprecedented detail:
While Super-realism isn’t new, that doesn’t mean it’s yet in the artistic mainstream. Many people don’t consider works such as the Hubble Extreme Deep Field or the light-in-a-bottle video to be art. (I would not be surprised if this includes the creators of those works.) Even works more explicitly artistic in intent, such as Water in Suspense, are viewed as borderline. But I believe that each of these works reveals a new aesthetic, an aesthetic generated by the scientific principles underlying the phenomenon being represented. And insofar as they reveal a new aesthetic, I believe these works are art.
Although Super-realism isn’t yet in the artistic mainstream, it has influenced parts of that mainstream. For example, in the 1980s the film director and cinematographer Ron Fricke used time-lapse photography to reveal new aspects of the everyday world, in documentary films such as Chronos:
Fricke was not, of course, the first to use time-lapse photography in this way. However, his films have won wide acclaim, and inspired many other artists to develop time-lapse photography further as an artistic medium.
Super-realism has grown rapidly in the past twenty to thirty years. Three forces are driving that growth.
First, far more people can access and learn to use scientific instruments. Recall Juan Geuer and his virtuoso home-made laser light show. There are people building everything from home-made bubble chambers to balloons exploring the upper atmosphere. These are not isolated curiosities, but rather part of a rapidly expanding social phenomenon that has been called by many names: the DIY movement, citizen science, the Maker movement. Whatever it is, it’s growing, fed by the expansion of online and mail-order suppliers that serve niche markets, and by the growth of online (and, increasingly, offline) communities of people who work with these instruments and teach one another.
Second, the data being taken by many of these instruments is being shared openly, online. In the 1980s if a scientist used a telescope to take a photograph, likely no more than a few dozen people would ever touch the photographic plate. Now more than a billion people can download data from the Hubble Telescope, and find new ways to visualize it. Some of the most extraordinary visualizations of the Earth are made using data released openly by NASA. Any scientific project which releases data enables others to find new ways of making meaning from that data.
Third, we’re collectively building a powerful suite of tools to reveal these new worlds. For example, as I write there are more than 25,000 open source visualization projects available on the code repository GitHub. Most of those projects are, of course, experiments that will be abandoned. But there are also many powerful tools that give people incredible abilities to make and reveal beauty. It’s no wonder Super-realism is flowering.
Story-tellers say that reality is often stranger and more interesting than fiction. I believe this is true for all art. The reason is that nature is more imaginative than we, and as we probe deeper into nature, we will continue to discover new aesthetics and new forms of beauty. I believe these new aesthetics will stimulate art for decades or centuries to come.
I am referring to Realist art broadly, not the 19th century artistic movement known as Realism. I’m also certainly not claiming that all Western art was Realist until the late 19th century, merely that the idea of representing objective reality was often deeply influential.
The YouTube video shows images from Chronos, but the music is from another source. Unfortunately, I could not find a good unabridged excerpt from Chronos online.
Acknowledgements: This essay arose out of many conversations with Jen Dodd, especially about her work with the Subtle Technologies Festival of Art and Science. Thanks also to Miriah Meyer and Lee Smolin for conversations that helped stimulate this essay.
As many of you no doubt know, the Federal Research Public Access Act (FRPAA, pronounced fur-pa) was introduced into the US Congress a few days ago. It’s a terrific bill which, if it passes, will make all US Government-funded scientific research accessible to the public within 6 months of publication.
Open access legislation like FRPAA doesn’t just happen in a vacuum. The Alliance for Taxpayer Access (ATA) is a Washington D.C.-based advocacy group that works to promote open access policies within the US Government. The ATA worked with Congress (and many other organizations) to help pass the NIH public access policy in 2008, and have been working for the past several years with members of Congress on FRPAA.
In this post, I interview Heather Joseph, the Executive Director of the Scholarly Publishing and Academic Resources Coalition (SPARC), which convenes the ATA, and ask her about the bill, about next steps, and about how people can help.
Q: Heather, thanks for agreeing to be interviewed! What is FRPAA, and what’s it trying to accomplish?
Thank you, Michael – I’m happy to talk about this bill!
In a nutshell, FRPAA is designed to make sure that the results of scientific research paid for by the public can be accessed by the public. Most people are surprised to learn that this isn’t automatically the case; they assume that if their tax dollars pay for a research study, they should be entitled to read the results. But the reality is quite different. Right now, if you want to access articles that report on publicly funded science, you have to pay to do so, either through a subscription to a scientific journal (which can cost thousands of dollars a year), or through pay-per-view, which can easily cost upwards of $30 per article. This presents an often-insurmountable obstacle for exactly those people who most want (or need) access – scientists, students, teachers, physicians, entrepreneurs – who too often find themselves unable to afford such fees, and end up locked out of the game.
Out of eleven federal agencies that fund science here in the United States, only one – the National Institutes of Health – actually has a policy that ensures that the public can freely access the results of their funded research online. FRPAA is designed to tackle this issue head on, and to make sure that the science stemming from all U.S. agencies is made freely available to anyone who wants to use it.
FRPAA is a very straightforward bill – it simply says that if you receive money from a U.S. Agency to do scientific research, you agree (upfront) to make any articles reporting on the results available to the public in a freely accessible online database, no later than six months after publication in a peer-reviewed journal.
Q: What is the Alliance for Taxpayer Access (ATA)? What role did the ATA play in advocating for FRPAA?
The ATA is a coalition of groups who are working together to try and craft a positive solution to this problem. In 2004, the library community (led by my home organization, SPARC) decided that there must be other groups who shared our frustration over the current access situation. We reached out to research organizations, patient advocacy groups, consumer organizations, publishers, student groups – anyone we could think of who shared the goal of unlocking access to taxpayer funded research. We quickly attracted more than 80 organizations, representing millions of individuals. This created a whole new opportunity to advocate for national access policies from a much stronger position… there really is strength in numbers!
The ATA has evolved into the leading advocacy organization for taxpayer access to the results of taxpayer funded research. We knock on Congressional doors, talking with policymakers about the current barriers to access, and about new opportunities for scientific progress once those barriers are brought down. We are all about leveraging the public’s investment in science by making sure that anyone who is interested can easily access and build on this research. That’s how science advances, after all.
Q: In 2008, the Congress passed the NIH public access policy. Can you tell us about that, and the ATA’s role?
Absolutely! As I mentioned, the NIH is currently the only U.S. agency that has a policy guaranteeing the public access to the results of its funded research. The idea for the policy surfaced in 2003, when Congress expressed concern that results of the public’s nearly $30 billion annual investment in NIH research were not being made as widely accessible as they should be. They asked the NIH Director to create a policy to address the problem, setting in motion what would become 4 long years of intense debate in the scientific community.
Not surprisingly, some journal publishers expressed immediate concern that any policy that provided access to research results through channels other than subscription-based journals would irreparably damage their businesses. Because journal publishing is big business (nearly $9 billion in annual revenues), publishers were able to use their long-established trade associations to aggressively lobby the NIH and Congress against the establishment of such a policy.
The scientists, librarians, patients, and others who favored the policy found themselves at a disadvantage, advocating as individual organizations without a coordinated voice. This was the main reason the ATA was established, and we quickly found ourselves at the center of the debate, helping to ensure that all stakeholders who favored the establishment of a public access policy had a way to present a united message to policymakers. Ultimately, Congress passed a landmark policy fully supported by the ATA that was enacted in 2008.
Q: Who works at the ATA?
The ATA is essentially a virtual coalition. While we’ve grown to represent over 100 organizations, the organization’s advocacy is carried out by a pretty small core group of staff (all of whom have other full-time jobs!). Besides myself, the wonderful Nick Shockey and Andrea Higginbotham are responsible for the coalition’s online presence – keeping our website up to date, maintaining our Congressional Action Center, and keeping our members looped in on various email lists. We also rely on our incredibly active members to help us continually refine our messages, and look for opportunities to spread the word about our work. People like Sharon Terry at the Genetic Alliance, Prue Adler at the Association of Research Libraries, and Pat Furlong at Parent Project Muscular Dystrophy are prime examples of some of the people who keep the ATA active on the front lines. Also: there is no cost to join the ATA (SPARC picks up the relatively low tab to keep it humming!), and the door is open for any organization to sign on as a member through our website. If you’re interested, please let us know!
Q: What happens next, with FRPAA? How does it (hopefully) become law? What could derail it?
The next steps for FRPAA will be for us (and our advocates) to encourage other members of Congress to sign onto the bill as co-sponsors. Generating a nice, robust list of supporting members of Congress is key in helping to keep the profile of the bill high. Procedurally, the bill will be referred to Committee for further consideration; in the Senate, it will go to the Homeland Security and Government Affairs Committee, and in the House, the Committee on Oversight and Government Reform will receive the bill. As with any legislation, FRPAA faces an uphill battle in an election year, but given the growing attention this issue has received in the past year (from the White House Office of Science and Technology Policy, to the America COMPETES Act, to the recent Research Works Act), we’re hopeful that the bill can continue to advance.
I think the biggest threat is inaction, so vocal support from stakeholders will be crucial!
Q: What can people do to help FRPAA become law?
The most important thing that people – especially active scientists – can do to help advance this bill is to speak out in support of it. And we need folks to speak out in two ways:
First, speak out to your members of Congress. The ATA has an Action Center set up so that you can simply log on, pick your Senators and Representatives, and automatically generate a letter asking them to support FRPAA. The Action Center has all kinds of information about the bill, including Talking Points, FAQs, and even template letters, to help make the process as easy as possible. Check it out!
Second, speak out to your colleagues and your community. Blog about the bill, or spread the word on Twitter. Consider writing an OpEd for your local newspaper, or writing an article for your organization’s newsletter. The more people become aware of this issue, the more they support it. Help us spread the word!
Q: Finally, how can people follow what the ATA is doing, and keep up with your calls for action?
You can sign onto the Alliance for Taxpayer Access by going to our website. There’s no charge.
If you simply want to be added to our email list for alerts and updates, contact either myself (firstname.lastname@example.org) or Andrea Higginbotham (email@example.com), or follow us on Twitter at @SPARC_NA.
Elsevier is the world’s largest and most profitable scientific publisher, making a profit of 1.1 billion dollars on revenue of 3.2 billion dollars in 2009. Elsevier have also been involved in many dubious practices, including the publishing of fake medical journals sponsored by pharmaceutical companies, and the publication of what are most kindly described as extraordinarily shoddy journals. Until 2009, parent company Reed Elsevier helped facilitate the international arms trade. (This is just a tiny sample: for more, see Gowers’s blog post, or look at some of the links on this page.) For this, executives at Reed Elsevier are paid multi-million dollar salaries (see, e.g., 1 and 2, and links therein).
All this is pretty widely known in the scientific community. However, Tim Gowers recently started a large-scale discussion of Elsevier by scientists, by blogging to explain that he will no longer be submitting papers to Elsevier journals, refereeing for Elsevier, or otherwise supporting the company in any way. The post now has more than 120 comments, with many mathematicians and scientists voicing similar concerns.
Following up from the discussion on Gowers’s post, Tyler Neylon has created a website called The Cost of Knowledge (see also Gowers’s followup) where researchers can declare their unwillingness to “support any Elsevier journal unless they radically change how they operate”. If you’re a mathematician or scientist who is unhappy with Elsevier’s practices, then consider signing the declaration. And while you’re at it, consider making your scientific papers open access, either by depositing them into open repositories such as the arXiv, or by submitting them to open access journals such as the Public Library of Science. Or do both.
That’s the question I address (very partially) in a new post on my data-driven intelligence blog. The post reviews some of the recent work on causal inference done by people such as Judea Pearl. In particular the post describes the elements of a causal calculus developed by Pearl, and explains how the calculus can be applied to infer causation, even when a randomized, controlled experiment is not possible.
Click through for event details. I’ve included a few private events at organizations where it’s possible some readers work.
- The Tech Museum (Bay Area) November 1
- Harvard Book Store / Cambridge Forum (Boston) November 9
- Authors@Google (Bay Area) November 15
- San Francisco Public Library (San Francisco) November 15
- Microsoft Colloquium (Seattle) November 16
- Town Hall Seattle (Seattle) November 16
- Powell’s Books (Portland) November 17
- LiveWire (Portland) November 18
- Howard Hughes Medical Institute (Washington DC) November 29
- TEDxPrincetonLibrary (New Jersey) November 30
- Carnegie Council (New York) December 1
It’s a great question. Suppose it’s announced in the next few years that the LHC has discovered the Higgs boson. There will, no doubt, be a peer-reviewed scientific paper describing the result.
How should we regard such an announcement?
The chain of evidence behind the result will no doubt be phenomenally complex. The LHC analyses about 600 million particle collisions per second. The data analysis is done using a cluster of more than 200,000 processing cores, and tens of millions of lines of software code. That code is built on all sorts of extremely specialized knowledge and assumptions about detector and beam physics, statistical inference, quantum field theory, and so on. What’s more, that code, like any large software package, no doubt has many bugs, despite enormous effort to eliminate them.
No one person in the world will understand in detail the entire chain of evidence that led to the discovery of the Higgs. In fact, it’s possible that very few (no?) people will understand in much depth even just the principles behind the chain of evidence. How many people have truly mastered quantum field theory, statistical inference, detector physics, and distributed computing?
What, then, should we make of any paper announcing that the Higgs boson has been found?
Standard pre-publication peer review will mean little. Yes, it’ll be useful as an independent sanity check of the work. But all it will show is that there are no glaringly obvious holes. It certainly won’t involve more than a cursory inspection of the evidence.
A related situation arose in the 1980s in mathematics. It was announced in the early 1980s that an extremely important mathematical problem had been solved: the classification of the finite simple groups. The proof had taken about 30 years, and involved an effort by 100 or so mathematicians, spread across many papers and thousands of pages of proof.
Unfortunately, the original proof had gaps. Most of them were not serious. But at least one serious gap remained. In 2004, two mathematicians published a two-volume, 1,200 page supplement to the original proof, filling in the gap. (At least, we hope they’ve filled in the gap!)
When discoveries rely on hundreds of pieces of evidence or steps of reasoning, we can be pretty sure of our conclusions, provided our error rate is low, say one part in a hundred thousand. But when we start to use a million or a billion (or a trillion or more) pieces of evidence or steps of reasoning, an error rate of one part in a million becomes a guarantee of failure, unless we develop systems that can tolerate those errors.
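The arithmetic behind this claim is worth making explicit. In a toy model where each of N pieces of evidence is independently correct with probability 1 − p, the chance the whole chain is error-free is (1 − p)^N, and that quantity collapses shockingly fast as N grows:

```python
def chain_success(p_error, n_steps):
    """Probability that a chain of n_steps independent steps,
    each failing with probability p_error, contains no errors."""
    return (1 - p_error) ** n_steps

# A few hundred steps at a 1-in-100,000 error rate: the
# conclusion is almost certainly sound.
print(chain_success(1e-5, 100))      # ~0.999

# A billion steps at a 1-in-a-million error rate: at least one
# error is effectively guaranteed (success ~ e^-1000).
print(chain_success(1e-6, 10**9))
```

The independence assumption is, of course, the toy part: real errors are correlated, and real systems include cross-checks. But the calculation shows why error tolerance stops being optional at scale.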
It seems to me that one of the core questions the scientific community will wrestle with over the next few decades is this: what principles and practices should we use to judge whether a conclusion drawn from a large body of networked knowledge is correct? To put it another way, how can we ensure that we reliably come to correct conclusions, despite the fact that some of our evidence or reasoning is almost certainly wrong?
At the moment each large-scale collaboration addresses this in its own way. The people at the LHC and those responsible for the classification of finite simple groups are certainly very smart, and I’ve no doubt they’re doing lots of smart things to eliminate or greatly reduce the impact of errors. But it’d be good to have a principled way of understanding how and when we can come to correct scientific conclusions, in the face of low-level errors in the evidence and reasoning used to arrive at those conclusions.
If you doubt there’s a problem here, then think about the mistakes that led to the Pentium floating point bug. Or think of the loss of the Mars Climate Orbiter. That’s often described as a failure to convert between metric and imperial units, which makes it sound trivial, like the people at NASA are fools. The real problem was deeper. As a NASA official said:
People sometimes make errors. The problem here was not the error [of unit conversion], it was the failure of NASA’s systems engineering, and the checks and balances in our processes to detect the error. That’s why we lost the spacecraft.
In other words, when you’re working at NASA scale, problems that are unlikely at small scale, like failing to do a unit conversion, are certain to occur. It’s foolish to act as though they won’t happen. Instead, you need to develop systems which limit the impact of such errors.
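One standard engineering pattern for limiting the impact of such errors (a generic illustration, not a description of what NASA actually does) is redundancy with voting: compute the same quantity several independent ways, and let disagreement expose the faulty component. Here is a minimal sketch, with a deliberately buggy third implementation standing in for the kind of unit-conversion slip that doomed the Mars Climate Orbiter:

```python
from collections import Counter

def majority_vote(results):
    """Return the value agreed on by most of the redundant runs."""
    value, _count = Counter(results).most_common(1)[0]
    return value

# Three notionally independent implementations of a
# metres-to-feet conversion.
FEET_PER_METRE = 3.28084
def convert_a(metres): return metres * FEET_PER_METRE
def convert_b(metres): return metres * FEET_PER_METRE
def convert_c(metres): return metres  # bug: forgot to convert!

runs = [convert_a(10.0), convert_b(10.0), convert_c(10.0)]
print(majority_vote(runs))  # the single faulty component is outvoted
```

If each independent component fails with probability p, a majority of three fails only with probability of order p squared, which is how redundancy turns frequent small errors into rare system-level ones.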
In the context of science, what this means is that we need new methods of fault-tolerant discovery.
I don’t have well-developed answers to the questions I’ve raised above, riffing off David Weinberger’s original question. But I will finish with the notion that one useful source of ideas may be systems and safety engineering, which are responsible for the reliable performance of complex systems such as modern aircraft. According to Boeing, a 747-400 has six million parts, and the first 747 required 75,000 engineering drawings. Not to mention all the fallible human “components” in a modern aircraft. Yet aircraft systems and safety engineers have developed checks and balances that let us draw with very high probability the conclusion “The plane will get safely from point A to B”. Sounds like a promising source of insights to me!
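One classic pattern from systems and safety engineering is N-modular redundancy: run several independent components and accept the majority answer, so that a single faulty component cannot corrupt the result. A toy sketch (the altimeter readings here are hypothetical):

```python
from collections import Counter

def majority_vote(readings):
    """Return the most common reading, provided a strict majority agree.

    Correct so long as fewer than half the redundant components fail."""
    value, count = Counter(readings).most_common(1)[0]
    if count <= len(readings) // 2:
        raise RuntimeError("no majority: too many faulty components")
    return value

# One of three redundant altimeters has failed and reports garbage,
# but the vote still yields the correct altitude.
altitude = majority_vote([10_000, 10_000, -1])
```

The analogy to science is loose, of course — independent replications are slower and more expensive than redundant sensors — but the underlying idea, of drawing reliable conclusions from individually unreliable parts, is the same.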
Further reading: An intriguing experiment in distributed verification of a mathematical proof has been done in an article by Doron Zeilberger. Even if you can’t follow the mathematics, it’s stimulating to look through. I’ve taken a stab at some of the issues in this post before, in my essay Science beyond individual understanding. I’m also looking forward to David Weinberger’s new book about networked knowledge, Too Big To Know. Finally, my new book Reinventing Discovery is about the promise and the challenges of networked science.
I wrote the following essay for one of my favourite online forums, Hacker News, which over the past few months has seen more and more discussion of the issue of open access to scientific publication. It seems like it might have broader interest, so I’m reposting it here. Original link here.
The topic of open access to scientific papers comes up often on Hacker News.
Unfortunately, those discussions sometimes bog down in misinformation and misunderstandings.
Although it’s not exactly my area of expertise, it’s close — I’ve spent the last three years working on open science.
So I thought it might be useful to post a summary of the current state of open access. There’s a lot going on, so even though this essay appears lengthy, it’s actually a very brief and incomplete summary of what’s happening. I have links to further reading at the end.
This is not a small stakes game. The big scientific publishers are phenomenally profitable. In 2009, Elsevier made a profit of 1.1 billion dollars on revenue of 3.2 billion dollars. That’s a margin (and business model) they are very strongly motivated to protect. They’re the biggest commercial journal publisher, but the other big publishers are also extremely profitable.
Even not-for-profit societies often make an enormous profit on their journals. In 2004 (the most recent year for which I have figures) the American Chemical Society made a profit of 40 million dollars on revenues of 340 million dollars. Not bad! This money is reinvested in other society activities, including salaries. Top execs receive salaries in the 500k to 1m range (as of 2006; I’m sure it’s quite a bit higher now).
The traditional publishers make money by charging journal subscription fees to libraries. Why they make so much money is a matter for much discussion, but I will merely point out one fact: there are big systematic inefficiencies built into the market. University libraries for the most part pay the subscription fees, but they rely on guidance (and often respond to pressure) from faculty members in deciding what journals to subscribe to. In practice, faculty often have a lot of power in making these decisions, without bearing the costs. And so they can be quite price-insensitive.
The journal publishers have wildly varying (and changing) responses to the notion of open access.
For example, most Springer journals are closed access, but in 2008 Springer bought BioMedCentral, one of the original open access publishers, and by some counts the world’s largest. They continue to operate. (More on the deal here.)
[Update: It has been pointed out to me in email that Springer now uses a hybrid open access model for most of their journals, whereby authors can opt to pay a fee to make their articles open access. If the authors don't pay that fee, the articles remain closed. The other Springer journals, including BioMedCentral, are fully open access.]
Nature Publishing Group is also mostly closed access, but has recently started an open access journal called Scientific Reports, apparently modeled after the (open access) Public Library of Science’s journal PLoS One.
It is sometimes stated that big commercial publishers don’t allow authors to put free-to-access copies of their papers on the web. In fact, policies vary quite a bit from publisher to publisher. Elsevier and Springer, for example, do allow authors to put copies of their papers on their websites, and into institutional repositories. This doesn’t mean that always (or even often) happens, but it’s at least in principle possible.
Comments on HN sometimes assume that open access is somehow a new issue, or an issue that no-one has been doing anything about until recently.
This is far from the case. Take a look at the Open Access Newsletters and you’ll realize that there’s a community of people working very, very hard for open access. They’re just not necessarily working in ways that are visible to hackers.
Nonetheless, as a result of the efforts of people in the open access movement, a lot of successes have been achieved, and there is a great deal of momentum toward open access.
Here are a few examples of success:
In 2008 the US National Institutes of Health (NIH) — by far the world’s largest funding agency, with a budget of more than 30 billion dollars a year — adopted a policy requiring that all NIH-funded research be made openly accessible within 12 months of publication. See here for more.
All 7 UK Research Councils have adopted similar open access policies requiring researchers they fund to make their work openly accessible.
As a result of policies like these, in years to come you should see more and more freely downloadable papers showing up in search results.
Note that there are a lot of differences of detail in the different policies, and those details can make a big difference to the practical impact of the policies. I won’t try to summarize all the nuances here; I’m merely pointing out that there is a lot of institutional movement.
Many more pointers to open access policies may be found at ROARMAP. That site notes 52 open access policies from grant agencies, and 135 from academic institutions.
There’s obviously still a long way to go before there is universal open access to publicly-funded research, but there has been a lot of progress, and a lot of momentum.
One thing that I hope will happen is that the US Federal Research Public Access Act passes. First proposed in 2006 (and again in 2010), this Act would essentially extend the NIH policy to all US Government-funded research (from agencies with budgets over 100 million dollars). My understanding is that at present the Act is tied up in committee.
Despite (or because of) this progress, there is considerable pushback on the open access movement from some scientific publishers. As just one instance, in 2007 some large publishers hired a very aggressive PR firm to wage a campaign to publicly discredit open access.
I will not be surprised if this pushback escalates.
What can hackers do to help out?
One great thing to do is start a startup in this space. Startups (and former startups) like Mendeley, ChemSpider, BioMedCentral, PLoS and others have had a big impact over the past ten or so years, but there are even bigger opportunities for hackers to really redefine scientific publishing. Ideas like text mining, recommender systems, open access to data, automated inference, and many others can be pushed much, much further.
I’ve written about this in the following essay: Is Scientific Publishing About to be Disrupted? Many of those ideas are developed in much greater depth in my book on open science, Reinventing Discovery.
For less technical (and less time-consuming!) ways of getting involved, you may want to subscribe to the RSS feed at the Alliance for Taxpayer Access. This organization was crucial in helping get the NIH open access policy passed, and they’re helping do the same for the Federal Research Public Access Act, as well as other open access efforts.
If you want to know more, the best single resource I know is Peter Suber’s website.
Suber has, for example, written an extremely informative introduction to open access. His still-active Open Access Newsletter is a goldmine of information, as is his (no longer active) blog. He also runs the open access tracking project.
If you got this far, thanks for reading! Corrections are welcome.