Is there a tension between creativity and accuracy?

On Twitter, I’ve been chatting with my friend Julia Galef about tensions between thinking creatively and thinking in a way that reduces error.

Of course, all other things being equal, I’m in favour of reducing error in our thinking!

However, all other things are not always equal.

In particular, I believe “there’s a tension, too, between behaviours which maximize accuracy & which maximize creativity… A lot of important truths come from v. irrational ppl.”

Julia has summarized some of her thinking in a blog post, where she disagrees, writing: “I totally agree that we need more experimentation with ‘crazy ideas’! I’m just skeptical that rationality is, on the margin, in tension with that goal.”

Before getting to Julia’s arguments, I want to flesh out the idea of a tension between maximizing creativity and maximizing accuracy.

Consider the following statement of Feynman’s, on the need to fool himself into believing that he had a creative edge in his work. He’s talking about his early ideas on how to develop a theory of electrons and light (which became, after many years, quantum electrodynamics). The statement is a little jarring to modern sensibilities, but please look past that to the idea he’s trying to convey:

I told myself [about his competitors]: “They’re on the wrong track: I’ve got the track!” Now, in the end, I had to give up those ideas and go over to their ideas of retarded action and so on – my original idea of electrons not acting on themselves disappeared, but because I had been working so hard I found something. So, as long as I can drive myself one way or the other, it’s okay. Even if it’s an illusion, it still makes me go, and this is the kind of thing that keeps me going through the depths.

It’s like the African savages who are going into battle – first they have to gather around and beat drums and jump up and down to build up their energy to fight. I feel the same way, building up my energy by talking to myself and telling myself, “They are trying to do it this way, I’m going to do it that way” and then I get excited and I can go back to work again.

Many of the most creative scientists I know are extremely determined people, willing to explore unusual positions for years. Sometimes, those positions are well grounded. And sometimes, even well after the fact, it’s obvious they were fooling themselves, but somehow their early errors helped them find their way to the truth. They were, to use the mathematician Goro Shimura’s phrase, “gifted with the special capability of making many mistakes, mostly in the right direction”.

An extreme example is the physicist Joseph Weber, who pioneered gravitational wave astronomy. The verdict of both his contemporaries and of history is that he was fooling himself: his systems simply didn’t work the way he thought. On the other hand, even though he fooled himself for decades, the principals on the (successful!) LIGO project have repeatedly acknowledged that his work was a major stimulus for them to work on finding gravitational waves. In retrospect, it’s difficult to be anything other than glad that Weber clung so tenaciously to his erroneous beliefs.

For me, what matters here is that: (a) much of Weber’s work was based on an unreasonable belief; and (b) on net, it helped speed up important discoveries.

Weber demonstrates my point in an extreme form. He was outright wrong, and remained so, and yet his erroneous example still served a useful purpose, helping inspire others to pursue ideas that eventually worked. In some sense, this is a collective (rather than individual) version of my point. More common is the case – like Feynman – of a person who may cling to mistaken beliefs for a long period, but ultimately uses that as a bridge to new discovery.

Turning to Julia’s post, she responds to my argument with: “In general, I think overconfidence stifles experimentation”, and argues that the great majority of people in society reject “crazy” ideas – say, seasteading – because they’re overconfident in conventional wisdom.

I agree that people often mistakenly reject unusual ideas because they’re overconfident in the conventional wisdom.

However, I don’t think it’s relevant to my argument. Being overconfident in beliefs that most people hold is not at all the same as being overconfident in beliefs that few people hold.

You may wonder whether the underlying cognitive mechanisms are the same, and whether there’s some kind of broad disposition toward overconfidence.

But if that were the case, then you’d expect that someone overconfident in their own unusual ideas would, in other areas, also be overconfident in the conventional wisdom.

However, my anecdotal experience is that a colleague willing to pursue unusual ideas of their own is often particularly sympathetic to unusual ideas from other people in other areas. This suggests that being overconfident in your own crazy ideas isn’t likely to stifle other experimentation.

Julia also suggests several variants on the “strategy of temporarily suspending your disbelief and throwing yourself headlong into something for a while, allowing your emotional state to be as if you were 100% confident.”

In a sense, Feynman and Weber were practicing an extreme version of this strategy. I don’t know Weber’s work well, but it’s notable that in the details of Feynman’s work he was good at ferreting out error, and not fooling himself. He wasn’t always rigorous – mathematicians have, for instance, spent decades trying to make the path integral rigorous – but there was usually a strong core argument. Indeed, Feynman delivered a very stimulating speech on the value of careful thought in scientific work.

How can this careful approach to the details of argument be reconciled with his remarks about the need to fool yourself in creative work?

I never met Feynman, and can’t say how he reconciled the two points of view. But my own approach in creative work, and I believe many others also take this approach, is to carve out a sort of creative cocoon around nascent ideas.

Consider Apple designer Jony Ive’s remarks at a memorial after Steve Jobs’ death:

Steve used to say to me — and he used to say this a lot — “Hey Jony, here’s a dopey idea.”

And sometimes they were. Really dopey. Sometimes they were truly dreadful. But sometimes they took the air from the room and they left us both completely silent. Bold, crazy, magnificent ideas. Or quiet simple ones, which in their subtlety, their detail, they were utterly profound. And just as Steve loved ideas, and loved making stuff, he treated the process of creativity with a rare and a wonderful reverence. You see, I think he better than anyone understood that while ideas ultimately can be so powerful, they begin as fragile, barely formed thoughts, so easily missed, so easily compromised, so easily just squished.

To be creative, you need to recognize those barely formed thoughts, thoughts which are usually wrong and poorly formed in many ways, but which have some kernel of originality and importance and truth. And if they seem important enough to be worth pursuing, you construct a creative cocoon around them, a set of stories you tell yourself to protect the idea not just from others, but from your own self-doubts. The purpose of those stories isn’t to be an airtight defence. It’s to give you the confidence to nurture the idea, possibly for years, to find out if there’s something really there.

And so even someone who has extremely high standards for the final details of their work may have an important component of their thinking which relies on rather woolly arguments. And they may well need to cling to that cocoon. Perhaps other approaches are possible. But my own experience is that this is often the case.

Postscript

Julia finishes her post with:

One last point: Even if it turned out to be true that irrationality is necessary for innovators, that’s only a weak defense of your original claim, which was that I’m significantly overrating the value of rationality in general. Remember, “coming up with brilliant new ideas” is just one domain in which we could evaluate the potential value-add of increased rationality. There are lots of other domains to consider, such as designing policy, allocating philanthropic funds, military strategy, etc. We could certainly talk about those separately; for now, I’m just noting that you made this original claim about the dubious value of rationality in general, but then your argument focused on this one particular domain, innovation.

To clarify, I didn’t intend my claim as a general one: the tension I see is between creativity and accuracy.

That said, this tension does leak into other areas.

If you’re a funder, say, trying to determine what to fund in AI research, you go and talk to AI experts. And many of those people are likely to have cultivated their own creative cocoons, which will inform their remarks. How a funder should deal with that is a topic for a separate essay. My point here is simply that this process of creative cocooning isn’t easily untangled from things like evaluation of work.

Where will the key ideas shaping the future of scientific publishing come from?

Stefan Janusz from the Royal Society asked me to comment briefly on where I’d look for new ideas about the future of scientific publishing. Here’s my response, crossposted to the Royal Society’s blog about scientific publishing.

It’s tempting to assume the key ideas will come from leading scientists, journal publishers, librarians, policy makers, and so on.

While these are all important groups, I don’t think they’re going to invent the key ideas behind the future of scientific publishing. That will be done primarily by two groups of outsiders: exceptionally creative user interface designers, and people who design group experiences.

Let me unpack both those statements.

The first important group is user interface designers. Ultimately, scientific journals are a user interface to humanity’s scientific knowledge, and people such as Henry Oldenburg, Johannes Gutenberg, and Aldus Manutius were all interface designers.

Now, many people working in science don’t understand the importance or difficulty of user interface design. It’s tempting to think it’s either about “making things pretty” or about “making things easy to use”. And, in fact, much work on interface design doesn’t go much deeper than those tasks. But the designers I’m talking about are doing something much deeper. They’re attempting to invent powerful new representations for knowledge, representations that will let us manipulate and comprehend knowledge in new ways.

Think, for example, of how the invention of user interface ideas such as the hyperlink and the search box have transformed how we relate to knowledge. Or take a look at some of Bret Victor’s beautiful designs for changing how we think about systems and mathematics. In a more playful vein, look at Marc ten Bosch’s gorgeous game Miegakure, which challenges people to learn to think in four spatial dimensions. Or consider the way programming languages such as Coq and Logo change the way people interface with mathematical knowledge.

The second group I named is people who design group experiences. In addition to being user interfaces to scientific knowledge, journals are also a medium for collective intelligence. The design of media for collective intelligence isn’t yet a widely recognized field. But there are many people doing amazing things in this area. Just as a random sample, not necessarily related to science, take a look at Ned Gulley’s work on the Mathworks programming competition. Or economist Robin Hanson on idea futures. Or even people such as the musician Bobby McFerrin, who understands crowd behaviour as well as anyone. Or Jane McGonigal and Elan Lee’s work on creating games based on “puzzles and challenges that no single person could solve on their own”. This broad vein of work is a key direction from which important new fundamental ideas will ultimately come.

Let me finish by identifying a questionable assumption implicit in the question “Where will the future of scientific publishing come from?” The assumption is that there will be a single future for scientific publishing, a kind of jazzed-up version of the scientific article, and it’s simply up to enterprising publishers to figure out what it is.

I believe that, if things go well, there will instead be a proliferation of media types. Some will be informal, cognitive media for people to think and carry out experiments with. Look, for example, at some of Peter Norvig’s IPython notebooks. Others will be collaborative environments for building up knowledge – look at Tim Gowers’s and Terry Tao’s use of blogs and wikis to solve mathematical problems collaboratively. And some will be recognizable descendants of the “paper of record” model common in journals today. So what I hope we’ll see is a much richer and more varied ecosystem, and one that continues to change and improve rapidly over many decades.

Neural Networks and Deep Learning: first chapter goes live

I am delighted to announce that the first chapter of my book “Neural Networks and Deep Learning” is now freely available online here.

The chapter explains the basic ideas behind neural networks, including how they learn. I show how powerful these ideas are by writing a short program which uses neural networks to solve a hard problem — recognizing handwritten digits. The chapter also takes a brief look at how deep learning works.
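For readers curious what such a program looks like, here is a minimal sketch in the same spirit. To be clear, this is not the code from the chapter (which trains on the MNIST handwritten digit data); it’s a self-contained toy version of the same ingredients, a tiny feedforward network of sigmoid neurons trained by stochastic gradient descent with backpropagation:

```python
# A minimal sketch (not the book's actual code) of the ideas the chapter develops:
# a small feedforward network of sigmoid neurons, trained by stochastic gradient
# descent with backpropagation. To keep it self-contained it learns a toy function
# (XOR) rather than handwritten digits.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1 - s)

class TinyNetwork:
    def __init__(self, sizes):
        # sizes, e.g. [2, 3, 1]: 2 inputs, one hidden layer of 3 neurons, 1 output
        self.biases = [np.random.randn(n, 1) for n in sizes[1:]]
        self.weights = [np.random.randn(n, m) for m, n in zip(sizes[:-1], sizes[1:])]

    def feedforward(self, a):
        for b, w in zip(self.biases, self.weights):
            a = sigmoid(w @ a + b)
        return a

    def train(self, data, epochs, eta):
        # Plain (one example at a time) stochastic gradient descent.
        for _ in range(epochs):
            np.random.shuffle(data)
            for x, y in data:
                nabla_b, nabla_w = self.backprop(x, y)
                self.biases = [b - eta * nb for b, nb in zip(self.biases, nabla_b)]
                self.weights = [w - eta * nw for w, nw in zip(self.weights, nabla_w)]

    def backprop(self, x, y):
        # Forward pass, remembering activations and weighted inputs.
        activation, activations, zs = x, [x], []
        for b, w in zip(self.biases, self.weights):
            z = w @ activation + b
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
        # Backward pass for a quadratic cost.
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
        nabla_b[-1], nabla_w[-1] = delta, delta @ activations[-2].T
        for l in range(2, len(self.weights) + 1):
            delta = (self.weights[-l + 1].T @ delta) * sigmoid_prime(zs[-l])
            nabla_b[-l], nabla_w[-l] = delta, delta @ activations[-l - 1].T
        return nabla_b, nabla_w

if __name__ == "__main__":
    # Toy problem: learn XOR, which a single neuron cannot solve.
    data = [(np.array([[a], [b]]), np.array([[a ^ b]]))
            for a in (0, 1) for b in (0, 1)] * 250
    net = TinyNetwork([2, 3, 1])
    net.train(data, epochs=200, eta=3.0)
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, net.feedforward(np.array([[a], [b]])).item())
```

Run as-is, the outputs should end up near 0 or 1 for the four XOR cases. The program in the chapter uses these same ingredients, just scaled up to networks with 784 input neurons, one per pixel of a 28×28 image of a handwritten digit.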

The book’s landing page gives a broader view on the book. And I’ve written a more in-depth discussion of the philosophy behind the book.

Finally, if you’ve read this far I hope you’ll consider supporting my Indiegogo campaign for the book, which will give you access to perks like early drafts of later chapters.

The artist and the machine

In September of 2012, a team of scientists released a photograph showing the most distant parts of the Universe ever seen by any human being. They obtained the photograph by pointing the Hubble Space Telescope at a single tiny patch of sky, gradually building up an image over a total of 23 days of observation. It’s an otherwise undistinguished patch of sky, within the little-known constellation Fornax. It’s less than one hundredth the size of the full moon, and appears totally empty to the naked eye. Here’s what’s seen with the Hubble Telescope:


Hubble Extreme Deep Field

This image is known as the Hubble Extreme Deep Field. It contains 5,500 galaxies, each of which, in turn, contains billions of stars. This region of sky ordinarily looks empty because these galaxies are far too dim to be seen with the naked eye. Some of the galaxies are more than 13 billion light years away. We are seeing them as they were just a few hundred million years after the big bang, near the dawn of time. If you have the eyes to see, this is what deep space and the early history of the universe look like.

One of the many striking things about the Hubble Extreme Deep Field is that it’s beautiful. Another work revealing beauty in nature is a 1999 art piece by a Dutch-Canadian artist named Juan Geuer. In a darkened room, Geuer shone orange laser light through a single droplet of water, projecting the light onto a wall. The resulting play of light is spectacular. We see the changing internal structure of the droplet, and the interplay of surface tension and gravity, a marvellous mix of refraction, reflection, and diffraction. Here’s a recording of Geuer’s art piece, known as Water in Suspense. (Some browsers don’t start the video at the right moment — move to 13:33 in the recording if that’s the case).



It’s not a typical action-packed online video. It moves slowly, slowly,… and then WHOOSH, a sudden, rapid change. I suggest watching it for at least a minute or two. During that time you’ll start to learn how to watch the droplet. Indeed, getting a feel for the multiple timescales at which the droplet changes is part of what we learn from the work.

Water in Suspense reveals a hidden world. We discover a rich structure immanent in the water droplet, a structure not ordinarily accessible to our senses. In this way it’s similar to the Hubble Extreme Deep Field, which also reveals a hidden world. Both are examples of what I call Super-realist art, art which doesn’t just capture what we can see directly with our eyes or hear with our ears, but which uses new sensors and methods of visualization to reveal a world that we cannot directly perceive. It’s art being used to reveal science.

Although I’m not an artist or an art critic, I find Super-realist art fascinating. Works like the Hubble Extreme Deep Field and Water in Suspense give us concrete, vivid representations of deep space and the interior structure of a water droplet. For most of us, these are usually little more than dry abstractions, remote from our understanding. By creating vivid representations, Super-realist art provides us with a new way of thinking about such phenomena. And so in this essay I explore Super-realist art: what it is, how it relates to other modes of art, and why it is flourishing today.

I’m a technologist and scientist, a person whose main interests are in areas such as machine learning, data science, and, especially, the development of cognitive tools (i.e., tools to extend human thought). It may seem strange that I’m writing an essay about art. But thinking about technology from this viewpoint gives a perspective that’s radically different to the usual take. It’s a flip side, an artistic perspective that sees technology and, in particular, Super-realism, as a means to discover new aesthetics and new representations of reality. That means Super-realism is not just of interest to artists, but can also inform technologists and scientists, giving them a new way of thinking about how we can develop tools that extend human thought.

In discussing Super-realism I will ignore many of the usual labels we use to understand art. Instead, I’ll develop a caricature which divides much of Western art into just three categories: Realism, Post-realism, and Super-realism. I believe this way of thinking illuminates Super-realism, and ask the indulgence of any experts on art history who happen to be reading.

Until the latter part of the 19th century Realism was one of the main aspirations of much Western art [1]. Realist art aimed at capturing the reality accessible to our senses. My favourite Realist artist is Rembrandt:


Rembrandt self-portrait (credit)

Such paintings seem so true-to-life that it is as though the artist has captured an element of objective reality. Suppose aliens visited Earth after a cataclysm wiped out all traces of humanity, except for a few Rembrandt self-portraits that somehow survived. I believe the aliens could understand much about both Rembrandt and humanity from those portraits, including perhaps even something of Rembrandt’s personality, his habits, his likes, and his dislikes.

But even the most talented Realist artists capture only a tiny slice of reality. Rembrandt’s self-portraits, as real as they appear, omit nearly all the biological and mental processes going on in Rembrandt’s body and mind. Suppose, for example, we could zoom in on his face, seeing it at 10 times magnification, then 100, then 1,000, and so on. At each level of magnification a new world would be revealed, of pores and cells and microbes. And there is so much structure to understand in each of these worlds, from the incredibly complex ecology of microbes on the skin, to the inner workings of cells. Consider this video simulating the inside of a cell:



This is not to criticise Rembrandt and the other Realists. On their own terms, they succeeded. But what Super-realism shows is that we can see far deeper into the world, and find great beauty in so doing.

In the 20th century Western art moved away from Realism, and Post-realist art became increasingly influential. To describe how Super-realism relates to Post-realism I’ll use the work of Monet. Monet was an early Post-realist painter, one working at the boundary between Realism and Post-realism. Here’s one of Monet’s great works, the “Water Lilies” triptych, housed in the Museum of Modern Art in New York:

Monet's Water Lilies triptych (credit)

Unfortunately, when viewed on a computer screen or as a small print, this triptych looks washed out and insipid, a poor imitation of the Realists. But when you see the original — 6 and a half feet high, and 42 feet long — the effect is jaw-dropping. Monet’s genius, and what made his painting so overwhelming, was a deep understanding of how humans see. He was a master illusionist who understood how to trick the viewer’s brain into seeing something much, much better than a realistic rendition. This only works at scale. When “Water Lilies” is shrunk to print size, the psychology of perception changes, and the illusion vanishes. I don’t understand why this is. I’ve read something of the theory of Monet and his fellow Impressionists, and don’t find those explanations convincing. What I do know is that Monet’s shimmering pond loses its glory. It looks fake, less than a literal rendition, merely a washed out print. But somehow, in the original, Monet was able to use his understanding of human perception to evoke water lilies in a fashion more spectacular than a Realist painting could ever have achieved.

In Post-realism, art isn’t about directly representing an objective, independent reality. Instead, it’s about creating new types of relationship between the art piece and the human mind. Monet and his fellow Impressionists were on the boundary between Realism and Post-realism, since their works still aimed to evoke an objective, independent reality, albeit not by a direct representation. But Post-realism was pushed much further by later artists, such as Picasso, M. C. Escher, and Kandinsky:


Picasso's Three Musicians (credit) Escher's Ascending and Descending (credit)
Kandinsky's Composition VI (credit)

These works gradually shift away from representing reality, and toward exploring the relationship between art piece and viewer. Escher, for example, used quirks in the human visual system to evoke impossible realities. Abstract artists such as Kandinsky take this idea still further, and it is developed to an extreme in the work of abstract minimalists such as John McLaughlin, a pioneer in what is known as Hard-edge painting:


John McLaughlin's '#17, 1966' (credit)

At first glance this (and much of McLaughlin’s other work) looks ludicrous. But it makes more sense when you understand that McLaughlin was exploring how people respond to art. His earlier works were more complex. But he gradually simplified his work as he realized that people can have novel responses even to simple stimuli.

Abstract minimalism annoys many people. The approach can lead, and has led, to fakery and foolishness on the part of some artists. But my point here is neither to criticise nor to praise. Instead, it’s to understand what artists such as McLaughlin were up to. Whereas the Realist artists believed that virtuosity came from depicting part of reality with high fidelity, in Post-realism virtuosity may come from creating a novel relationship between art piece and viewer.

Super-realism changes art again. Virtuosity becomes about revealing hidden worlds and discovering new aesthetics. Take a look at the following video, which shows — in extreme slow motion! — a packet of light passing through a bottle. Ordinarily that passage takes less than a nanosecond, but the video slows the passage down by a factor of 10 billion, so we can see it:



In a way, Super-realism is a return to the aims of Realism, back to representing reality. But what’s new about Super-realism is the challenge (and opportunity!) to find ways of representing previously unseen parts of the world.

Most of my examples of Super-realist art have come from people who regard themselves mainly as scientists, not as artists. However, there are many artists doing Super-realist art. I’ve drawn my examples mostly from scientists merely because I’ve found that artists’ work is often more exploratory, and thus requires more background explanation, making it less suited for an essay such as this.

Although Super-realism’s flourishing is recent, its origins go back centuries. In 1610, Galileo published a short book entitled Starry Messenger (Sidereus Nuncius), which contained several images showing the moon in unprecedented detail, as observed by Galileo through one of his new telescopes:


Galileo's sketches of the moon (credit)

In a similar vein, in 1665 Robert Hooke published a bestselling book entitled Micrographia, which used the newly-invented microscope to reveal the world of the very small. Hooke’s book included the first ever drawings of biological cells, as well as drawings showing animals such as fleas in unprecedented detail:


Robert Hooke's sketch of a flea

While Super-realism isn’t new, that doesn’t mean it’s yet in the artistic mainstream. Many people don’t consider works such as the Hubble Extreme Deep Field or the light-in-a-bottle video to be art. (I would not be surprised if this includes the creators of those works.) Even works more explicitly artistic in intent, such as Water in Suspense, are viewed as borderline. But I believe that each of these works reveals a new aesthetic, an aesthetic generated by the scientific principles underlying the phenomenon being represented. And insofar as they reveal a new aesthetic, I believe these works are art.

Although Super-realism isn’t yet in the artistic mainstream, it has influenced parts of that mainstream. For example, in the 1980s the film director and cinematographer Ron Fricke used time-lapse photography to reveal new aspects of the everyday world, in documentary films such as Chronos [2]:



Fricke was not, of course, the first to use time-lapse photography in this way. However, his films have won wide acclaim, and inspired many other artists to develop time-lapse photography further as an artistic medium.

Super-realism has grown rapidly in the past twenty to thirty years. Three forces are driving that growth.

First, far more people can access and learn to use scientific instruments. Recall Juan Geuer and his virtuoso home-made laser light show. There are people building everything from home-made bubble chambers to balloons exploring the upper atmosphere. These are not isolated curiosities, but rather part of a rapidly expanding social phenomenon that has been called by many names: the DIY movement, citizen science, the Maker movement. Whatever it is, it’s growing, fed by the expansion of online and mail-order suppliers that serve niche markets, and by the growth of online (and, increasingly, offline) communities of people who work with these instruments and teach one another.

Second, the data being taken by many of these instruments is being shared openly, online. In the 1980s if a scientist used a telescope to take a photograph, likely no more than a few dozen people would ever touch the photographic plate. Now more than a billion people can download data from the Hubble Telescope, and find new ways to visualize it. Some of the most extraordinary visualizations of the Earth are made using data released openly by NASA. Any scientific project which releases data enables others to find new ways of making meaning from that data.

Third, we’re collectively building a powerful suite of tools to reveal these new worlds. For example, as I write there are more than 25,000 open source visualization projects available on the code repository GitHub. Most of those projects are, of course, experiments that will be abandoned. But there are also many powerful tools that give people incredible abilities to make and reveal beauty. It’s no wonder Super-realism is flowering.

Story-tellers say that reality is often stranger and more interesting than fiction. I believe this is true for all art. The reason is that nature is more imaginative than we, and as we probe deeper into nature, we will continue to discover new aesthetics and new forms of beauty. I believe these new aesthetics will stimulate art for decades or centuries to come.

If you enjoyed this essay, you may wish to subscribe to my blog, or follow me on Twitter.

Footnotes

[1] I am referring to Realist art broadly, not the 19th century artistic movement known as Realism. I’m also certainly not claiming that all Western art was Realist until the late 19th century, merely that the idea of representing objective reality was often deeply influential.

[2] The YouTube video shows images from Chronos, but the music is from another source. Unfortunately, I could not find a good unabridged excerpt from Chronos online.

Acknowledgements: This essay arose out of many conversations with Jen Dodd, especially about her work with the Subtle Technologies Festival of Art and Science. Thanks also to Miriah Meyer and Lee Smolin for conversations that helped stimulate this essay.

How you can help the Federal Research Public Access Act (FRPAA) become law

As many of you no doubt know, the Federal Research Public Access Act (FRPAA, pronounced fur-pa) was introduced into the US Congress a few days ago.  It’s a terrific bill, which, if it passes, will have the effect of making all US Government-funded scientific research accessible to the public within six months of publication.

Open access legislation like FRPAA doesn’t just happen in a vacuum.  The Alliance for Taxpayer Access (ATA) is a Washington D.C.-based advocacy group that works to promote open access policies within the US Government.  The ATA worked with Congress (and many other organizations) to help pass the NIH public access policy in 2008, and has been working for the past several years with members of Congress on FRPAA.

In this post, I interview Heather Joseph, the Executive Director of the Scholarly Publishing and Academic Resources Coalition (SPARC), which convenes the ATA, and ask her about the bill, about next steps, and about how people can help.

Q: Heather, thanks for agreeing to be interviewed! What is FRPAA, and what’s it trying to accomplish?

Thank you, Michael – I’m happy to talk about this bill!

In a nutshell, FRPAA is designed to make sure that the results of scientific research paid for by the public can be accessed by the public. Most people are surprised to learn that this isn’t automatically the case; they assume that if their tax dollars pay for a research study, they should be entitled to read the results.  But the reality is quite different.  Right now, if you want to access articles that report on publicly funded science, you have to pay to do so, either through a subscription to a scientific journal (which can cost thousands of dollars a year), or through pay-per-view, which can easily cost upwards of $30 per article. This presents an often insurmountable obstacle for exactly those people who most want (or need) access – scientists, students, teachers, physicians, entrepreneurs – who too often find themselves unable to afford such fees, and end up locked out of the game.

Out of eleven federal agencies that fund science here in the United States, only one – the National Institutes of Health – actually has a policy that ensures that the public can freely access the results of their funded research online. FRPAA is designed to tackle this issue head on, and to make sure that the science stemming from all U.S. agencies is made freely available to anyone who wants to use it.

FRPAA is a very straightforward bill – it simply says that if you receive money from a U.S. Agency to do scientific research, you agree (upfront) to make any articles reporting on the results available to the public in a freely accessible online database, no later than six months after publication in a peer-reviewed journal.

Q: What is the Alliance for Taxpayer Access (ATA)? What role did the ATA play in advocating for FRPAA?

The ATA is a coalition of groups who are working together to try and craft a positive solution to this problem.  In 2004, the library community (led by my home organization, SPARC) decided that there must be other groups who shared our frustration over the current access situation. We reached out to research organizations, patient advocacy groups, consumer organizations, publishers, student groups – anyone we could think of who shared the goal of unlocking access to taxpayer funded research.  We quickly attracted more than 80 organizations, representing millions of individuals. This created a whole new opportunity to advocate for national access policies from a much stronger position… there really is strength in numbers!

The ATA has evolved into the leading advocacy organization for taxpayer access to the results of taxpayer funded research. We knock on Congressional doors, talking with policymakers about  the current barriers  to access, and about new opportunities for scientific progress once those barriers are brought down. We are all about leveraging the public’s investment in science by making sure that anyone who is interested can easily access and build on this research. That’s how science advances, after all.

Q: In 2008, the Congress passed the NIH public access policy.  Can you tell us about that, and the ATA’s role?

Absolutely!  As I mentioned, the NIH is currently the only U.S. agency that has a policy guaranteeing the public access to the results of its funded research. The idea for the policy surfaced in 2003, when Congress expressed concern that results of the public’s nearly $30 billion annual investment in NIH research were not being made as widely accessible as they should be.  They asked the NIH Director to create a policy to address the problem, setting in motion what would become 4 long years of intense debate in the scientific community. 

Not surprisingly, some journal publishers expressed immediate concern that any policy that provided access to research results through channels other than subscription-based journals would irreparably damage their businesses. Because journal publishing is big business (nearly $9 billion in annual revenues), publishers were able to use their long-established trade associations to aggressively lobby the NIH and Congress against the establishment of such a policy.

The scientists, librarians, patients, and others who favored the policy found themselves at a disadvantage, advocating as individual organizations without a coordinated voice. This was the main reason the ATA was established, and we quickly found ourselves at the center of the debate, helping to ensure that all stakeholders who favored the establishment of a public access policy had a way to present a united message to policymakers. Ultimately, Congress passed a landmark policy fully supported by the ATA that was enacted in 2008. 

Q: Who works at the ATA?

The ATA is essentially a virtual coalition. While we’ve grown to represent over 100 organizations, the organization’s advocacy is carried out by a pretty small core group of staff (all of whom have other full-time jobs!). Besides myself, the wonderful Nick Shockey and Andrea Higginbotham are responsible for the coalition’s online presence – keeping our website up to date, maintaining our Congressional Action Center, and keeping our members looped in on various email lists.  We also rely on our incredibly active members to help us continually refine our messages, and look for opportunities to spread the word about our work.  People like Sharon Terry at the Genetic Alliance, Prue Adler at the Association of Research Libraries, and Pat Furlong at Parent Project Muscular Dystrophy are prime examples of some of the people who keep the ATA active on the front lines. Also: there is no cost to join the ATA (SPARC picks up the relatively low tab to keep it humming!); and the door is open for any organization to sign on as a member through our website. If you’re interested, please let us know!

Q: What happens next, with FRPAA?  How does it (hopefully) become law? What could derail it?

The next steps for FRPAA will be for us (and our advocates) to encourage other members of Congress to sign onto the bill as co-sponsors. Generating a nice, robust list of supporting members of Congress is key in helping to keep the profile of the bill high.  Procedurally, the bill will be referred to Committee for further consideration; in the Senate, it will go to the Homeland Security and Government Affairs Committee, and in the House, the Committee on Oversight and Government Reform will receive the bill.  As with any legislation, FRPAA faces an uphill battle in an election year, but given the growing attention this issue has received in the past year (from the White House Office of Science and Technology Policy, to the America COMPETES Act, to the recent Research Works Act), we’re hopeful that the bill can continue to advance.

I think the biggest threat is inaction, so vocal support from stakeholders will be crucial!

Q: What can people do to help FRPAA become law?

The most important thing that people – especially active scientists – can do to help advance this bill is to speak out in support of it.  And we need folks to speak out in two ways:

First, speak out to your members of Congress. The ATA has an Action Center set up so that you can simply log on, pick your Senators and Representatives, and automatically generate a letter asking them to support FRPAA.  The Action Center has all kinds of information about the bill, including Talking Points, FAQs, and even template letters, to help make the process as easy as possible. Check it out!

Second, speak out to your colleagues and your community.  Blog about the bill, or spread the word on Twitter.  Consider writing an OpEd for your local newspaper, or writing an article for your organization’s newsletter. The more people become aware of this issue, the more they support it. Help us spread the word!

Q: Finally, how can people follow what the ATA is doing, and keep up with your calls for action?

You can sign onto the Alliance for Taxpayer Access by going to our website. There’s no charge.

If you simply want to be added to our email list for alerts and updates, contact either myself (heather@arl.org) or Andrea Higginbotham (andrea@arl.org), or follow us on Twitter at @SPARC_NA.

On Elsevier

Elsevier is the world’s largest and most profitable scientific publisher, making a profit of 1.1 billion dollars on revenue of 3.2 billion dollars in 2009. Elsevier has also been involved in many dubious practices, including the publishing of fake medical journals sponsored by pharmaceutical companies, and the publication of what are most kindly described as extraordinarily shoddy journals. Until 2009, parent company Reed Elsevier helped facilitate the international arms trade. (This is just a tiny sample: for more, see Gowers’s blog post, or look at some of the links on this page.) For this, executives at Reed Elsevier are paid multi-million dollar salaries (see, e.g., 1 and 2, and links therein).

All this is pretty widely known in the scientific community. However, Tim Gowers recently started a large-scale discussion of Elsevier by scientists, by blogging to explain that he will no longer be submitting papers to Elsevier journals, refereeing for Elsevier, or otherwise supporting the company in any way. The post now has more than 120 comments, with many mathematicians and scientists voicing similar concerns.

Following up from the discussion on Gowers’s post, Tyler Neylon has created a website called The Cost of Knowledge (see also Gowers’s followup) where researchers can declare their unwillingness to “support any Elsevier journal unless they radically change how they operate”. If you’re a mathematician or scientist who is unhappy with Elsevier’s practices, then consider signing the declaration. And while you’re at it, consider making your scientific papers open access, either by depositing them into open repositories such as the arXiv, or by submitting them to open access journals such as the Public Library of Science. Or do both.

If correlation doesn’t imply causation, then what does?

That’s the question I address (very partially) in a new post on my data-driven intelligence blog. The post reviews some of the recent work on causal inference done by people such as Judea Pearl. In particular, the post describes the elements of a causal calculus developed by Pearl, and explains how the calculus can be applied to infer causation, even when a randomized, controlled experiment is not possible.
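To give a flavour of the kind of result the calculus yields (this is the standard back-door adjustment from Pearl’s framework, stated here for reference rather than taken from my post): if a set of observed variables Z blocks every back-door path from X to Y, then the effect of intervening on X can be computed entirely from observational quantities,

P(y | do(x)) = \sum_z P(y | x, z) \, P(z).

The interest of the calculus is that identities like this can be derived mechanically from the structure of a causal graph, telling you when observational data alone suffices to answer a causal question.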

Book tour

Click through for event details. I’ve included a few private events at organizations where it’s possible some readers work.