More on funding
… a good deal of the image problems that science in general has at the moment can be traced to a failure to grapple more directly with issues of funding and the justification of funding… In the latter half of the 20th century, we probably worked out the quantum details of 1000 times as many physical systems as in the first half, but that sort of thing feels a little like stamp collecting– adding one new element to a mixture and then re-measuring the band structure of the resulting solid doesn’t really seem to be on the same level as, say, the Schrödinger equation, but I’m at a loss for how to quantify the difference… The more important question, though, is should we really expect or demand that learning be proportional to funding?
This really gets to the nub of it. In research, as in so many other things, funding may hit a point of diminishing returns beyond which what we learn becomes more and more marginal. However, it is by no means obvious where the threshold is beyond which society as a whole would be better off allocating its resources to other more worthy causes.
And what, exactly, do we as a society expect to get out of fundamental research?
For years, the argument has been based on technology– that fundamental research is necessary to understand how to build the technologies of the future, and put a flying car in every garage. This has worked well for a long time, and it’s still true in a lot of fields, but I think it’s starting to break down in the really big-ticket areas. You can make a decent case that, say, a major neutron diffraction facility will provide materials science information that will allow better understanding of high-temperature superconductors, and make life better for everyone. It’s a little harder to make that case for the Higgs boson, and you’re sort of left with the Tang and Velcro argument– that working on making the next generation of whopping huge accelerators will lead to spin-off technologies that benefit large numbers of people. It’s not clear to me that this is a winning argument– we’ve gotten some nice things out of CERN, the Web among them, but I don’t know that the return on investment really justifies the expense.
The spinoff argument also has the problem that it’s hard to argue that these things wouldn’t have happened anyway. No disrespect to Tim Berners-Lee’s wonderful work, but it’s hard to believe that if he hadn’t started the web, some MIT student in a dorm room wouldn’t have done so shortly thereafter.
Of course, it’s not like I have a sure-fire argument. Like most scientists, I think that research is inherently worth funding– it’s practically axiomatic. Science is, at a fundamental level, what sets us apart from other animals. We don’t just accept the world around us as inscrutable and unchangeable, we poke at it until we figure out how it works, and we use that knowledge to our advantage. No matter what poets and musicians say, it’s science that makes us human, and that’s worth a few bucks to keep going. And if it takes millions or billions of dollars, well, we’re a wealthy society, and we can afford it.
We really ought to have a better argument than that, though.
As for the appropriate level of funding, I’m not sure I have a concrete number in mind. If we’ve got half a trillion to piss away on misguided military adventures, though, I think we can throw a few billion to the sciences without demanding anything particular in return.
One could attempt to frame this in purely economic terms: what's the optimal rate at which to invest in research in order to maximize utility, under reasonable assumptions? This framing misses some of the other social benefits that Chad alludes to – all other things being equal, I'd rather live in a world where we understand general relativity, just because – but it has the benefit of being at least passably well posed. I don't know a lot about their conclusions, but I believe this kind of question has recently come under a lot of scrutiny from economists like Paul Romer, under the name of endogenous growth theory.
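To see why that framing is at least well posed, here is a toy sketch – emphatically not Romer's actual model, and every parameter in it is an invented placeholder – in which a fraction s of the workforce does research that compounds the stock of ideas, while the rest produces output. Discounted log-consumption utility then trades current output against future growth, and diminishing returns produce an interior optimum for the research share:

```python
import math

def discounted_utility(s, periods=200, delta=0.1, L=1.0, beta=0.95):
    """Toy Romer-flavored model (all numbers invented): a share s of
    the workforce does research, which compounds the idea stock A;
    the rest produces output Y = A * (1 - s) * L. Returns discounted
    log-consumption utility over the horizon."""
    A, total = 1.0, 0.0
    for t in range(periods):
        Y = A * (1 - s) * L               # output from non-researchers
        total += (beta ** t) * math.log(Y)
        A *= 1 + delta * s * L            # ideas grow with research effort
    return total

# Sweep research shares: too little research forgoes future growth,
# too much starves current consumption, so the best share is interior.
shares = [s / 100 for s in range(1, 100)]
best = max(shares, key=discounted_utility)
```

Nothing here is calibrated to anything real; the point is only that "how much should we spend on research?" becomes a tractable optimization the moment you commit to a utility function and a growth mechanism – which is, roughly, the move endogenous growth theory makes.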