Via Marc Andreessen, whose title, “Don’t fool yourself – they’re coming for us”, seems spookily appropriate.
Biweekly links for 04/28/2008
- Matt Prager on the writer’s strike
- Absolutely fascinating account of two legal fantasies that dominate life in Hollywood. Prager claims these are the reason for the writer’s strike.
- MediaWiki API
- API for Wikipedia.
- Gin, Television, and Social Surplus – Here Comes Everybody
- Superb essay, read it. “Let’s say … people watch 99 percent as much television as they used to, but 1 percent of that is carved out for producing and for sharing…that is 10,000 Wikipedia projects per year worth of participation.”
- A Blog Around The Clock : Seasonal Affective Disorder – The Basics
- PageRank for Ranking Journals « Synthèse
- Nice summary of various approaches.
- Knowledge sharing
- A review of the literature, coming from the World Bank.
- Liquidpub – Trac
- Open source meets science proposal.
- Raganwald: Why we are the biggest obstacles to our own growth
- Bruce Sterling Interview: Life, the Internet and Everything
- Kevin Kelly: McLuhan, Web 2.0 Master
- Blind scientist: How to improve scientific software?
Click here for all of my del.icio.us bookmarks.
Interrupting Google search
How can Google search be beaten? Google’s edge is that it does search better than other companies: it has knowledge about search that those other companies don’t, in part because it places a high premium on developing such knowledge in-house.
What happens if Google’s understanding of search starts to saturate, and further research produces only small gains in user experience? The knowledge gap to their competitors will start to close. Other companies will be able to replicate the search experience Google offers. The advantage will then shift to whichever company can manage the operations side of search (e.g., maintaining large teams, large data centers and so on) better. Google’s culture – all those clever people improving search – will then become a liability, not an asset.
This is the classic path to commoditization. A new industry opens up. In the early days, the race is to those who develop know-how quickly, providing an edge in service. As know-how saturates, everyone can provide the same service, and the edge moves to whoever can manage operations the best. The old innovators are actually at a disadvantage at this point, since they have a culture strongly invested in innovation.
In Google’s case, there’s another interesting possibility. Maybe search just keeps getting better and better. It’s certainly an interesting enough problem that that may well be possible. But if our knowledge of search ever starts to saturate, Google may find itself needing another source of support for its major business (advertising).
How institutions change
Does anyone know of a good discussion of how institutions change? I’ve looked around a fair bit, online, in catalogues, and in bookstores. Nothing I’ve found has quite fit the bill.
Update: Shortly after posting this, I thought of Cosma’s notebooks, which do indeed contain several promising leads.
Biweekly links for 04/25/2008
- Raganwald: Are we building Universities or Amphitheaters?
- Thoughtful piece on the incentives and disincentives to build good social spaces on the web.
- Karl Schroeder: week-long science fiction writing workshop, Toronto, July 2008
- Karl Schroeder’s giving a week-long intensive workshop on science fiction writing in July. If only it were possible to live multiple lives – in one, I’d love to try science fiction writing.
- Cocktail Party Physics: let me explain
- PLoS Medicine – Finding Cures for Tropical Diseases: Is Open Source an Answer?
- Early paper on open source biology.
- David Heinemeier Hansson at Startup School 08 | Omnisio
- A good example of Omnisio in action. Advocates the radical idea that web startups should actually aim to make money.
- Omnisio
- Great new video site – perfect for scientists, since it handles the problem that there are two streams of information (the slides and the speaker) far better than any other site I’ve seen. The interface for moving through the slides is extremely slick.
- Raganwald: Good sense
- Descartes: “Good sense is the most equitably distributed of all things because no matter how much or little a person has, everyone feels so abundantly provided with good sense that he feels no desire for more than he already possesses.”
- Blog: Toby Segaran
- Toby Segaran, who wrote the excellent “Programming Collective Intelligence”, has a blog.
- Blog : business|bytes|genes|molecules
- Another good find for my blogroll – lots of thoughtful posts on how science is done, and how it’s changing.
- Ologeez! – How It Works
- Ambitious site for scientific collaboration.
- One Big Lab: New paper-protocol-lab-knowledge sharing website out of Stanford
- Journal publishers are pioneers of Web 2.0 | iMechanica
- A conservative take, basically saying that journal publishers can be trusted to modernize science. The example of other media (music, movies, books, etc) doesn’t give me much confidence that this will happen.
- Biocurious: Journal publishers are pioneers of Web 2.0?
- Want to Remember Everything You’ll Ever Learn? Surrender to This Algorithm
- Utterly fascinating, at several different levels.
- The Loom : When Scientists Go All Bloggy
- Thoughtful discussion of the role blogs and comments are playing in the discussion of peer-reviewed science.
- Startup School 2008
- Kevin Kelly : the reality of depending on True Fans
- Open Science Directory
Click here for all of my del.icio.us bookmarks.
Info, bio, nano, or thermo? Turing’s revenge
People sometimes claim that we’re moving from the information age into the biotech age, or the nanotech age, or the age of energy. Will we really see such a shift, or is this just hype?
My recent thinking about the idea that everything should be code convinces me that the people claiming that such shifts will occur are wrong, at least in the case of biotech and nanotech.
It’s not that biotech and nanotech won’t make enormous, world-changing strides in the near future. They will. But the effect of many of those strides will be to bring biotech and nanotech effectively into the realm of information technology. Expressing biology and nanotechnology in the language of information allows you to set loose all the powerful ideas of computation. This is too much to pass up. So what we’ll see is not a shift, but rather a gradual convergence between the info, bio and nano worlds. Which of the three will have the upper hand, commercially, seems to me to be difficult to predict.
What about energy? Here the situation is different. Like information, energy has a fundamental, irreducible quality. Because of this, I expect we’ll see a complementary relationship between information and energy technologies, but one will never subsume the other.
Money, markets, and evolution
Aside from human beings, can anyone think of biological systems which have evolved money or a market?
FriendFeed
I’m now on FriendFeed.
A moment of creative genius
I’ve been feeling quite pleased with myself for getting the weblogger emacs mode working, giving me a simple way to post directly from emacs, without logging into my blogging software (WordPress).
That is, I was feeling pleased until this morning, when a cut-and-paste error made in weblogger mode resulted in me posting my blog password to the front page of my blog. It was only online for a few seconds, and I changed the password immediately, but it’s not exactly a shining moment…
How much power is used when you do a Google search?
The web is a great way of outsourcing tasks to specialized parallel supercomputers. Here’s a crude order-of-magnitude estimate of the amount of computing power used in a single Google search.
The size of the Google server cluster is no longer public, but online estimates typically describe it as containing about half a million commodity machines, each comparable in power to the personal computers widely used by consumers. As a rough estimate, let’s say about 200,000 of those are involved in serving search results.
I don’t know how many searches are done using Google. But online estimates put it in the range of hundreds of millions per day. At peak times this means perhaps 10,000 searches per second.
In usability studies, Google has found that users are less happy with their search experience when it takes longer than 0.4 seconds to serve pages. So they aim to serve most of their pages in 0.4 seconds or less. In practice, this means they’ve got to process queries even faster, since network communication can easily chew up much of that 0.4 seconds. For simplicity we’ll assume that all that time is available.
What this means is that at peak times — 10,000 searches per second, each taking up to 0.4 seconds — roughly 10,000 × 0.4 = 4,000 searches are in flight at any instant, and the cluster’s effort is being distributed across those approximately 4,000 searches.
Put another way, each time you search, you’re making use of a cluster containing the equivalent of (very) roughly 50 machines.
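The arithmetic above can be sketched in a few lines. All of the input figures are the rough, order-of-magnitude estimates from this post, not measured values:

```python
# Back-of-envelope estimate of machines per Google search.
# All inputs are rough estimates from the text, not measured figures.
cluster_machines = 200_000     # machines assumed to serve search results
searches_per_second = 10_000   # estimated peak query rate
seconds_per_search = 0.4       # target time to serve a results page

# Searches "in flight" at any instant at peak load (Little's law):
concurrent_searches = searches_per_second * seconds_per_search

# Machines effectively devoted to a single search:
machines_per_search = cluster_machines / concurrent_searches

print(concurrent_searches)   # → 4000.0
print(machines_per_search)   # → 50.0
```

Halving any one estimate halves the answer, so the result should only be trusted to within an order of magnitude.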
I’d be very interested to know what fraction of computing power is contained in such supercomputers, versus the fraction in personal computers. Even more interesting would be to see a graph of how this fraction is changing over time. My guess is that at some point in the not-too-distant future most power will be in specialized services like Google, not personal computers.