How Are the Mighty Fallen

Joshua Gans writes to point out a paper he wrote with George Shepherd, “How Are the Mighty Fallen: Rejected Classic Articles by Leading Economists” (link to pdf), about the experience some leading economists have had with peer review:

We asked over 140 leading economists, including all living winners of the Nobel Prize and John Bates Clark Medal, to describe instances in which journals rejected their papers. We hit a nerve. More than 60 percent responded, many with several blistering pages. Paul Krugman expressed the tone of many letters: “Thanks for the opportunity to let off a bit of steam.”

The paper is extremely readable (and entertaining) if you have any interest in peer review. Among other tidbits: an extraordinary list of rejected papers, many of them among the classics of economics; Krugman’s estimate that 60% of his papers are rejected on first submission; the remarkable story of George Akerlof’s Nobel Prize-winning paper “The Market for Lemons”, rejected by three separate journals before being published; the two rejections of the famous Black-Scholes options pricing paper, also Nobel Prize-winning; and Krugman’s comment that “I am having a terrible time with my current work on economic geography: referees tell me that it’s obvious, it’s wrong, and anyway they said it years ago.” There’s much more.

Addendum: Joshua also pointed me to a retrospective on the article (pdf here), which makes for interesting follow-up reading.

Comments

  1. Thanks for the good read.

    I think we’re missing a control here: is there any way to measure whether a rejected paper improved prior to its eventual publication? The “buried the paper for 18 months and then resent it” anecdotes suggest the cooldown period might actually have been good for the paper! Egos are fragile things, in academia especially. Maybe one way to measure the impact of rejections is by comparing the manuscript versions. But it would be subjective, personal, and bound to hurt more egos than to do good.

  2. Carlos – Sure, that’s a very interesting question, and there’s some relevant material in Gans and Shepherd’s paper about improvements. On the other hand, it’s hard to believe that many papers go from unpublishable to Nobel-worthy because of referee reports.

  3. As the other commenter has noted, it’s impossible to tell whether this is an indictment of the peer review system or a great validation of it, without a careful review of the original submissions versus the versions that were eventually published. Winning a Nobel Prize does not make one a good writer (a case in point being Ed Lewis, who always owned up to his shortcomings as an author and an instructor). Perhaps the reviewer comments helped vastly improve the papers in question. I can’t say without seeing them, and the sour grapes of the authors are not a fair way to judge.

    Also, one wonders how well one can compare a field like economics to a field that is less theoretical, say, chemistry, where there is replicable data and results are perhaps less open to debate.

  4. One idea I’ve thought about is that when a paper is accepted by a journal, the reviews, and the reviewers’ names, should be published along with the paper. Double-blind review is asymmetric. When the paper is accepted, the reviews and their writers’ identities are thrown away, so it’s easy to game the system by giving the paper an insincere score: there’s no explicit penalty for doing so.

    One part of the process is already symmetric: if the paper is rejected, neither party knows the other’s identity. I have never heard a good explanation for why the other part shouldn’t be symmetric too. If, at reviewing time, the reviewer knows he risks having his comments made public, there’s an incentive to make an honest effort. It’s the same incentive as for not making up data in experiments: the data will be there for other people to look at.

    Obviously this only helps against malicious reviews. Ramsey would probably be fine having “this is trivial” as his published review.
