
Biweekly links for 10/30/2009

by Michael Nielsen on October 30, 2009
  • Deep Data Dives Discover Natural Laws | Communications of the ACM
    • “[researchers Lipson and Schmidt] recently mined a large quantity of metabolic data provided by Gurol Suel, assistant professor at the University of Texas Southwestern Medical Center. The algorithm came up with two “very simple, very elegant” invariants—so far unpublished—that are able to accurately predict new data. But neither they nor Suel has any idea what the invariants mean, Lipson says. “So what we are doing now is trying to automate the interpretation stage, by saying, ‘Here’s what we know about the system, here’s the textbook biology; can you explain the new equations in terms of the old equations?'”

      Lipson says the ultimate challenge may lie in dealing with laws so complicated they defy human understanding. Then, automation of the interpretation phase would be extremely difficult. “What if it’s like trying to explain Shakespeare to a dog?” he asks.”

  • The Data Explosion and the Scientific Method
    • Eric Drexler reminds us that the shift from hypothesis-driven to data-driven science in fact _is_ a shift, and likely one with surprising effects.
  • Seb’s Open Research: The Fate of the Incompetent Teacher in the YouTube Era
    • “Good teachers have always had some measure of fame at the local level. Let’s not kid ourselves: within a school, the students know who is a good teacher and who is no more illuminating than a wet pack of matches.

      The net takes that to a whole different level. Eventually everyone will know who the good teachers are, and will be able to tune into them. They will be rock stars.”

  • Research on Twitter and Microblogging
    • danah boyd’s bibliography of research on twitter and microblogging.
  • Smart Mobs » Blog Archive » Just one degree of separation
    • “An Australian intel analyst blogger, Leah Farrall, and an insurgent strategist blogger, Abu Walid, are now holding a debate in public across the blogs…. [Abu Walid is] one of the leading figures in the interwoven tales of Al Q and the Taliban, a veteran muj from the Afghan fight against the Soviets with “a reputation as a skilled and pragmatic strategist and battlefield tactician”. He was an early member of Mullah Omar’s circle, has served as a correspondent for Al-Jazeera, and has penned a dozen books.”

Click here for all of my bookmarks.

From → Uncategorized

  1. Thanks for linking to my post on “The Data Explosion and the Scientific Method” — not so much for the readership you brought, but for indirectly pointing me back to this blog, which I think is excellent.

    In particular, I look forward to your book on The Future of Science. I am in the early stages of work on a book that will have some overlapping themes, while in no sense being a competing work. The book will explore methods of evaluating information from experts, chiefly scientists, to get a more accurate picture of reality (as indicated, for example, by recognizing an emerging scientific consensus when the public information streams are still presenting the question as an open controversy). The focus will be on improving information for problem solving, and one topic will be the meta-problem of how to improve the operation of science itself.

    (A closely related topic is the widely misunderstood relationship between science and engineering.)

    As you may know, I’ve been an advocate of the value of social software (a term that Clay Shirky traced back to me) since the late 1980s. In particular, I argued for the value of something called “hypertext publishing”; the web is a decent approximation of the concept, and far exceeds it by some metrics. Like you, I look forward to seeing science move further into this and related social software media. Your analysis of the problems and the processes at work in this area offers insights on a challenge of first-rank importance for the world.

    Now, if only there were a way to get people to focus a remotely proportionate amount of attention on challenges at that level of abstraction….

  2. Thank you, Eric, for the kind words, and for the links. I’m a regular reader of your blog, and was very stimulated by your essay on hypertext publishing, which I first read a few years back.

    Your book sounds fascinating. In my opinion, our current institutions do a poor job of accurately incorporating public knowledge into public decision-making. I find it fascinating (and dispiriting) that, for example, almost all of the public discussion about climate change in mainstream media has very little scientific content. By that, I don’t necessarily mean that the conclusions are wrong; I mean that the discussion makes almost no mention of any observed facts relevant to the discussion, or how they were obtained. Instead, it’s all he-said, she-said, relying on institutional reputations. This is true of most discussion, regardless of the policy point of view being espoused. Even something as elementary as the Keeling curve hasn’t really entered broad public consciousness.

    Similar remarks may be made about the public discussion surrounding other existential threats – certain types of bioterror, nuclear proliferation, etc. If you have ideas for how to solve this kind of problem I’ll be very interested to hear them.

  3. Regarding the problem with public discussion, I don’t see solutions, but perhaps some opportunities for improvement. A relevant slogan:

    We need a medium that is as good at representing controversy as Wikipedia is at representing consensus.

    I should hasten to add that I mean, specifically, controversy about facts, not about values or (directly) policy. Facts about cause-and-effect relationships, together with widely shared values, can of course have strong implications for policy.

    Let’s see if this might satisfy Shirky’s Law, now that you’ve led me to understand that that’s necessary… [delay follows…] Here’s a first try at a short enough statement:

    “It’s like Wikipedia, but for disagreements. Both sides line up their evidence point-by-point, so the holes become obvious.”

    Ignoring holes, attacking straw men in response, changing the subject, etc., are far too effective today because there is no specific place where a hole becomes well-defined, localized, and hence visible. I think this could be fixed.

    Regarding ignorance and climate change, what bothers me most is that almost everyone thinks that (for example) cutting CO2 emissions in half would soon reduce CO2 levels, when in fact, levels would continue to rise. “Almost everyone” includes MIT students who’ve just read a relevant excerpt from an IPCC “Summary for Policymakers”.

    As you may have seen, I blogged this earlier.

  4. Eric – That’s a beautiful slogan.

    Scott Aaronson has done an experiment in a related vein, setting up a “Worldview Manager” whose goal is to help people discover inconsistencies in their beliefs:

    The site has a number of problems, but seems like an interesting prototype.
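    To make the idea concrete, here is a minimal sketch (names, structure, and example statements are my own, not taken from Worldview Manager itself) of the kind of check such a tool might perform: given a user’s agree/disagree answers and a set of implication relationships between statements, flag every case where the user accepts a premise but rejects its consequence.

    ```python
    # Hypothetical sketch of a belief-consistency check: flag implications
    # where the premise is accepted but the consequence is rejected.

    def find_inconsistencies(beliefs, implications):
        """beliefs: dict mapping statement -> True/False (agree/disagree).
        implications: list of (premise, consequence) pairs.
        Returns the list of violated implications."""
        violations = []
        for premise, consequence in implications:
            if beliefs.get(premise) is True and beliefs.get(consequence) is False:
                violations.append((premise, consequence))
        return violations

    beliefs = {
        "Machines can in principle simulate any physical process": True,
        "The brain is a physical process": True,
        "Machines can in principle simulate the brain": False,
    }
    implications = [
        ("Machines can in principle simulate any physical process",
         "Machines can in principle simulate the brain"),
    ]
    print(find_inconsistencies(beliefs, implications))
    ```

    A real system would also need to handle softer relationships than strict implication, which is part of what makes the statement-formulation problem hard.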

    Turning a little more directly to your suggestion, I wonder about social motivations. Who will map out arguments like this? People who use straw men, change the subject, etc., are often doing so deliberately, and have no incentive to see their argument spelled out. One might imagine a nonpartisan nonprofit trying to build a community that did this, much as wikileaks has done. E.g., it’d be interesting to see a site that could automatically summarize all the inconsistencies in the public face of an organization, and might even provide various summary data.


    Distributed version control systems like git (and services like github) do something that seems related for open source software, allowing people to very easily clone, change, and merge (or not) source code, while providing simple graphical representations of all activity. An informative example is the graph at:

    In a sense, that graph (and the associated file diffs) are a way of graphically representing different people’s ideas about the best way to move a software project forward. You can almost think of the graph as showing different lines of argument; sometimes different lines are merged back into one another, but sometimes they diverge more and more.

    The analogy to what you’re suggesting isn’t quite exact, but I wonder about using a similar version control system as a back end for a wiki, instead of the linear version control system currently used in most wikis.
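    The structural difference can be sketched in a few lines (the class and names here are illustrative, not any wiki’s actual data model): in a linear wiki history every revision has one parent, whereas in a git-style history a revision may have several, so two divergent lines of editing can coexist and later be merged.

    ```python
    # Minimal sketch of a DAG-shaped revision history, as in git,
    # contrasted with the single-parent chain most wikis store.

    class Revision:
        def __init__(self, text, parents=()):
            self.text = text
            self.parents = list(parents)

    base = Revision("Original article text")
    fork_a = Revision("Edit from one point of view", [base])
    fork_b = Revision("Edit from an opposing point of view", [base])
    # A merge revision has two parents: both lines of argument survive
    # in the history rather than one overwriting the other.
    merged = Revision("Reconciled text drawing on both", [fork_a, fork_b])

    def ancestors(rev):
        """Walk the graph back to the root, collecting all ancestors."""
        seen, stack = set(), list(rev.parents)
        while stack:
            r = stack.pop()
            if id(r) not in seen:
                seen.add(id(r))
                stack.extend(r.parents)
        return seen

    print(len(merged.parents))     # 2: a merge commit
    print(len(ancestors(merged)))  # 3: both forks plus the common base
    ```

    Using a structure like this as a wiki back end would let divergent lines of argument persist as first-class objects instead of being flattened into one page history.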

  5. Thanks. I’ll need to give your information more thought before commenting on it, but can I outline my current thinking on your question, “Who will map out arguments like this?” I’ll rough out a base-scenario and then the kind of process I think it could support.

    As a basis, consider a generally Wikipedia-like UI that (as a central feature) displays pairs of slots that hold relatively fine-grained points and counterpoints written and edited by advocates of opposing sides of a factual controversy. (This “leading feature” would of course require support from other features.)

    Here’s the dynamic that I have in mind, presupposing that the system (call it X) has a significant degree of momentum, and that the community has cloned a Wikipedia-like ethic for contributors. (I’ve been an occasional editor in Wikipedia for several years, which gives me some grounding in thinking about this.) My thinking runs like this:

    * As is hard to forget, many people are eager to post their arguments (whether good or bad) for or against almost any controversial position. It would be natural for some of them to post arguments in X, too. Strategic silence by other advocates could do nothing to prevent this.

    * Seeing a favorable but weak or mistaken argument on public display in X, or seeing any unopposed argument on the opposing side, will tend to annoy knowledgeable advocates of a position.

    * The barrier to entry in a Wikipedia-style system can be extremely low, making it easy for knowledgeable advocates to fix what annoys them by providing, setting aside, or upgrading an argument.

    * A Wikipedia-like ethic seems suitable for the development of factual content, not only in Wikipedia’s fact-based neutral-point-of-view environment, but also in a fact-based advocacy environment. This ethic, in Wikipedia, has been compatible with participation from knowledgeable experts.

    * The entry of knowledgeable advocates would improve the scope and quality of the content, increasing its credibility, while growing credibility would make the game increasingly serious, increasing the pressure on knowledgeable advocates to upgrade what are becoming increasingly visible and influential arguments. (And with each point exposed to visible challenge.)

    * Past a soft threshold, significant communities will regard a significant range of arguments as probably presenting a reasonable approximation to the best cases that can be made for their corresponding positions (and each of the previous qualifiers of course lowers the specified threshold). Crossing this threshold creates a qualitatively distinct incentive to make a strong case, since failure to make a strong case, placed on display *in one, specific, globally visible slot*, would be taken as evidence that there *is* no strong case. That is, holes in the presentation and defense of a position become interpreted as evidence for holes in the position itself.

    A process like this, if effective, could make a great difference to the quality of serious discussion and decision-making in the world. Many specific points of apparent disagreement would be set aside, because the knowledgeable advocates would be found to agree on more facts than the noisy advocates or their audiences would expect. Some disagreements would be decisively resolved by the collapse of one side (that is, as interpreted by a broad community). Other disagreements would be refined, producing sharper questions and better evaluations of their relevance.

    I think that a virtuous-circle dynamic along the lines I’ve described could work well, in a messy way, given an effective framework and launch. Development as a sibling of Wikipedia itself might be able to provide that.

  6. I tried Worldview Manager this morning (the Strong AI topic); it has an interesting approach with some interesting problems. I posted several suggestions regarding comment collection and statement formulation here:

    (And emailed the comments to “Contact”).

    As shown by the comments made on the agree/disagree statements, the formulation and meaning of a crucial statement can be controversial. Still, statements will typically be less controversial without asserted truth-values, and they help define the structure of an argument. Even with the inclusion of implication relationships, I’d expect that (with sufficient effort!) a proposed argument-structure could often be found that would be less controversial than any specific, statement-level position, and that this might serve as a useful approximation to a consensus regarding how the pieces of the argument fit together.

    If so, then a system resembling Worldview Manager (the UI and/or the underlying representation and inference) could offer a useful way to represent the structure of an argument in a social software system – in particular, it could indicate whether a particular set of conclusions regarding pieces of an argument does or doesn’t support an overall conclusion at a higher level, which would often reveal that various points of contention are, in fact, unimportant.

    However, a representation that can accommodate softer, evidential relationships — not just truth and logic — seems better suited to many problem domains. It may be worth thinking about representations that are more Bayesian.
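    One simple way to make that evidential, more Bayesian representation concrete (the function and the numbers below are purely illustrative assumptions of mine): let each piece of evidence carry a likelihood ratio, and update the odds on an overall conclusion multiplicatively, so soft support and opposition replace hard true/false implication.

    ```python
    # Illustrative sketch: combining independent pieces of evidence on a
    # conclusion via likelihood ratios (odds form of Bayes' theorem).

    def posterior(prior_prob, likelihood_ratios):
        """prior_prob: prior probability of the conclusion.
        likelihood_ratios: for each piece of evidence,
        P(evidence | conclusion) / P(evidence | not conclusion).
        Returns the posterior probability."""
        odds = prior_prob / (1.0 - prior_prob)
        for lr in likelihood_ratios:
            odds *= lr
        return odds / (1.0 + odds)

    # Two moderately supporting observations and one weakly opposing one:
    p = posterior(0.5, [3.0, 2.0, 0.8])
    print(round(p, 3))  # 0.828
    ```

    In an argument-mapping system, each point/counterpoint pair could contribute such a ratio, so the display could show how much any contested sub-point actually matters to the top-level conclusion.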

    I’d want to see an open wiki-style process for evolving the underlying structure, of course, and a way to represent alternative structures to enable their comparison or use.

    This direction of development would look to me like a research project, however, and I think it’s important to make progress on less-structured representations that accommodate content that is more like what people already produce when making arguments.

  7. Eric – Reading your next-to-last comment, I’m reminded that a process like this is already visible in some Wikipedia articles: multiple viewpoints are sometimes mapped out in separate statements. Together with the NPOV policy this works better than I would a priori have thought:

    This process is a long way from perfect, but is encouraging for the type of effort you are proposing.

Comments are closed.