Biweekly links for 08/31/2009

  • Facebook’s Religion Question Prompts Soul-Searching
    • Facebook gives people a free-form text box to describe their religion. Asking such a personal question gives some surprising answers. My favourite was probably the woman who summed up both her Catholicism and her difficulties with Catholicism by describing her religion as “Matthew 25”. “Jedi” comes in at number 10.
  • Markets marketed better « Meteuphoric
    • “What do you call a system where costs and benefits return to those who cause them? Working markets or karma, depending on whether the accounting uses money or magic.

      In popular culture karma generally has good connotations, and markets generally have bad. Reasons for unease about markets should mostly apply just as well to karma, but nobody complains…that inherent tendencies to be nice are an unfair basis for wellbeing distribution. Nor that people who have had a lot of good fortune recently might have cheated the system somehow. Nor that the divine internalizing of externalities encourages selfishness. Nor that people who are good out of desperation for fair fortune are being exploited. So why the difference?

      Perhaps mysterious forces are just more trustworthy than social institutions? Or perhaps karma seems nice because its promotion is read as ‘everyone will get what they deserve’, while markets seem nasty because their promotion is read as ‘everyone deserves what they’ve got’”

  • Astonishing video of a chimp at a magic show
  • News organisations and start-ups
    • “What would a content site look like if you started from how to make money – as print media once did – instead of taking a particular form of journalism as a given and treating how to make money from it as an afterthought?”
  • Mike Brown’s Planets: Fog! Titan! Titan Fog! (and a peer review experiment)
    • Cool for two reasons. First, Titan has fog! Second, Mike Brown seriously invites reviews of the paper, and promises to treat them as he would referee comments.
  • 25 Years Later, First Registered Domain Name Changes Hands
    • The first .com was apparently registered in 1985; it just changed hands for the first time ever.
  • Mathematics and the internet (pdf)
    • Terry Tao’s talk about how online tools are changing mathematics.

Click here for all of my del.icio.us bookmarks.

Biweekly links for 08/28/2009

  • 25 Years Later, First Registered Domain Name Changes Hands
    • The first .com was apparently registered in 1985; it just changed hands for the first time ever.
  • Mathematics and the internet (pdf)
    • Terry Tao’s talk about how online tools are changing mathematics.
  • What We Can Learn From Craigslist
  • How XML Threatens Big Data : Dataspora Blog
    • Excellent thoughtful article on data bureaucracy and the limitations of XML.
  • The impact factor’s Matthew effect: a natural experiment in bibliometrics
    • “Using an original method for controlling the intrinsic value of papers–identical duplicate papers published in different journals with different impact factors–this paper shows that the journal in which papers are published have a strong influence on their citation rates, as duplicate papers published in high impact journals obtain, on average, twice as much citations as their identical counterparts published in journals with lower impact factors. The intrinsic value of a paper is thus not the only reason a given paper gets cited or not; there is a specific Matthew effect attached to journals and this gives to paper published there an added value over and above their intrinsic quality. “
  • The importance of failure
    • “This is a point that I don’t often hear made when people talk about failure; the moral behind a failure-related story is usually about preventing it, or dealing with the aftermath, but not about the fact that sometimes things go bad despite your best efforts, and all the careful risk management and contingency planning won’t keep you from going down in flames. This is important, because it forces every person to establish a risk threshold that they are willing to accept in every one of their life efforts. “
  • US Top All-Time Donors 1989-2008
    • Surprising list of top donors in US politics.
  • High-Speed Robot Hand
    • Incredible video of a robot which can throw a ball, pick up a grain of rice, spin a pen, and many other things, all with incredible speed.

Click here for all of my del.icio.us bookmarks.

Biweekly links for 08/24/2009

  • Pointless babble « Stephen Fry on Twitter
    • “The clue’s in the name of the service: Twitter. It’s not called Roar, Assert, Debate or Reason, it’s called Twitter. As in the chirruping of birds. Apparently, according to Pears (the soapmakers…– certainly their “study” is froth and bubble) 40% of Twitter is “pointless babble”, (http://is.gd/2mKSg) which means of course that a full 60% of Twitter discourse is NOT pointless babble, which is disappointing. Very disappointing. I would have hoped 100% of Twitter was fully free of earnestness, usefulness and commercial intent. Why do these asinine reports jump onto a bandwagon they don’t understand and why do those reporting on them relate with such glee that a service that was never supposed in the first place to be more than gossipy tittle-tattle and proudly banal verbal doodling is “failing to deliver meaningful commercial or political content”. Bollocky bollocks to the lot of them. They can found their own “enterprise oriented” earnest microblogging service. Remind me to avoid it.”
  • Amplifying on the PCR Amplifier « Gödel’s Lost Letter and P=NP
    • Excellent explanation of how the polymerase chain reaction lets us make many copies of a DNA strand.
  • Study Hacks » How to Schedule Your Writing Like a Professional Writer

Click here for all of my del.icio.us bookmarks.

Biweekly links for 08/21/2009

  • Avatar
    • First trailer for the new James Cameron film.
  • Public Library of Science announces new initiative for sharing influenza research
    • “PLoS is launching PLoS Currents (Beta) — a new and experimental website for the rapid communication of research results and ideas. In response to the recent worldwide H1N1 influenza outbreak, the first PLoS Currents research theme is influenza.

      PLoS Currents: Influenza, which we are launching today, is built on three key components: a small expert research community that PLoS is working with to run the website; Google Knol with new features that allow content to be gathered together in collections after being vetted by expert moderators; and a new, independent database at the National Center for Biotechnology Information (NCBI) called Rapid Research Notes, where research targeted for rapid communication, such as the content in PLoS Currents: Influenza will be freely and permanently accessible. To ensure that researchers are properly credited for their work, PLoS Currents content will also be given a unique identifier by the NCBI so that it is citable.”

  • Nicholas Carr’s Blog: Close down the schools!
  • Cosma Shalizi’s course on Data Mining
    • Lecture notes included. I wish I’d looked at these earlier – the bits I’ve read are very informative.
  • LogiLogi: Philosophy beyond the paper
    • Thoughtful and stimulating discussion of how philosophy might benefit from the introduction of new online tools.
  • Science magazine and JoVE announce scientific-video partnership
    • “Science, the journal of scientific research, news, and commentary published by The American Association for the Advancement of Science (AAAS), and JoVE, the scientific video journal, announced that they have entered into a partnership for joint production and publication of scientific videos online. The purpose of the partnership is to enhance scientific articles published in Science through video demonstrations of experimental techniques.

      Under the partnership, which is currently in its pilot phase, Science will select papers suitable for the video enhancement, and will identify author groups willing to help shape the video demonstrations. JoVE will then work with the authors to create the actual demonstrations, using the company’s platform for geographically distributed video-production. According to Stewart Wills, Online Editor at Science, direct, in-article video demonstrations should increase the value of Science research to its main audience, working scientists and students. “

  • The definitive, two-part answer to “is data journalism?” | Holovaty.com
    • “It’s a hot topic among journalists right now: Is data journalism? Is it journalism to publish a raw database? Here, at last, is the definitive, two-part answer:

      1. Who cares?

      2. I hope my competitors waste their time arguing about this as long as possible.”

  • Sharing with Google Groups
    • Potentially rather handy: you can share stuff on Google with entire groups: “As more and more businesses and organizations “go Google,” we find that many of the features we develop based on feedback from large enterprises end up benefiting all of our users. We recently rolled out improvements to the way Google Groups interacts with several of our applications. Now, sharing calendars, sites and documents with multiple people is easy — instead of adding people one at a time, you can simply share with an entire Google Group.”
  • Official Google Research Blog: On the predictability of Search Trends
    • “As we see that many of the search trends are predictable, we are introducing today a new forecasting feature in Insights for Search, along with a new version of the product. The forecasting feature is applied to queries which are identified as predictable (see, for instance, basketball or the trends in the Automotive category) and then shown as an extrapolation of the historical trends and search patterns.”

Click here for all of my del.icio.us bookmarks.

Biweekly links for 08/17/2009

  • Project Chanology – Wikipedia, the free encyclopedia
    • Excellent Wikipedia article on the confrontations between Anonymous and the Church of Scientology.
  • rehash.nl
    • Archive of materials (including video) from the series of hacker conferences in the Netherlands.
  • Measuring Distances in Google Maps
    • How to measure distances along customized routes in Google Maps.
  • DebugAdvisor: A Recommender System for Debugging – Microsoft Research
    • “In large software development projects, when a programmer is assigned a bug to fix, she typically spends a lot of time searching (in an ad-hoc manner) for instances from the past where similar bugs have been debugged, analyzed and resolved. Systematic search tools that allow the programmer to express the context of the current bug, and search through diverse data repositories associated with large projects can greatly improve the productivity of debugging. This paper presents the design, implementation and experience from such a search tool called DebugAdvisor.

      The context of a bug includes all the information a programmer has about the bug, including natural language text, textual rendering of core dumps, debugger output etc.

      Our key insight is to allow the programmer to collate this entire context as a query to search for related information. Thus, DebugAdvisor allows the programmer to search using a fat query, which could be kilobytes of structured and unstructured data…”

  • Happiness and unhappiness in east and west: Themes…[Emotion. 2009] – PubMed Result
    • Why studies which ask people to self-assess “happiness” on some scale are of dubious utility (what do they measure?): “Cultural folk models of happiness and unhappiness are likely to have important bearings on social cognition and social behavior… the authors systematically analyzed American and Japanese participants’ spontaneously produced descriptions of the two emotions and observed… that whereas Americans associated positive hedonic experience of happiness with personal achievement, Japanese associated it with social harmony. Furthermore, Japanese were more likely than Americans to mention both social disruption and transcendental reappraisal as features of happiness… descriptions of unhappiness included various culture-specific coping actions: Whereas Americans focused on externalizing behavior (e.g., anger and aggression), Japanese highlighted transcendental reappraisal and self-improvement. Implications for research on culture and emotion are discussed.”
  • Genesis 1 – LOLCat Bible Translation Project
    • “Boreded Ceiling Cat makinkgz Urf n stuffs

      1 Oh hai. In teh beginnin Ceiling Cat maded teh skiez An da Urfs, but he did not eated dem.

      2 Da Urfs no had shapez An haded dark face, An Ceiling Cat rode invisible bike over teh waterz.

      3 At start, no has lyte. An Ceiling Cat sayz, i can haz lite? An lite wuz.4 An Ceiling Cat sawed teh lite, to seez stuffs, An splitted teh lite from dark but taht wuz ok cuz kittehs can see in teh dark An not tripz over nethin.5 An Ceiling Cat sayed light Day An dark no Day. It were FURST!!!1”

  • Scylla and Charybdis
    • “Institutions need both fixed representations and novel representations to remain organized and retain people’s attention. Interpretive traditions, where the interpretand is fixed, face a special challenge in this regard. That they are able to resolve it successfully, most of the time, is a testament to the immense skill of our species as information managers.”
  • Thinking about Mario, Pompeii and the internet – confused of calcutta
  • co5TARS
    • Fun and informative way of visualizing the careers of actors, directors, etc.

Click here for all of my del.icio.us bookmarks.

Biweekly links for 08/14/2009

  • ongoing · Blog & Tweet
    • Tim Bray: “Because whenever you see a vendor owning a communications medium, that’s part of the problem, not part of the solution. Even if the vendor is as lovable as Twitter; and I do love ’em. So I’m going to route around the breakage, and you might want to think about doing the same.”
  • Motivic stuff
    • Another interesting new mathematical blog, this one focusing on cohomology, homotopy theory, and arithmetic geometry.
  • Augmented Social Cognition: More details of changing editor resistance in Wikipedia
    • Data showing that new editors are much more likely to have their edits reverted. Claims that this shows Wikipedia is becoming more resistant to new ideas. The obvious objection is that maybe all it shows is that the contributions of new editors aren’t very good compared to those of established editors. Still, lots of interesting data.
  • Augmented Social Cognition: The slowing growth of Wikipedia: some data, models, and explanations
    • Wikipedia’s growth rate has essentially plateaued.
  • David Byrne: So, How Does It Work on the Bus?
    • Excellent article from David Byrne about being a rock star on tour. A few little tidbits: they play at least 4 shows a week to make ends meet, so there’s not a lot of time to stick around; the tour used to be viewed as a loss leader to sell albums (no more!); they usually depart just 90 minutes after the show ends; and much more.
  • A Comparison of Open Source Search Engines « zooie’s blog
  • Benchmarking Amazon EC2 for High-Performance Scientific Computing
    • Interesting, though many other factors will often need to be compared in practice. Abstract: “How effective are commercial cloud computers for high-performance scientific computing compared to currently available alternatives? I aim to answer a specific instance of this question by examining the performance of Amazon EC2 for high-performance scientific applications. I used macro and micro benchmarks to study the performance of a cluster composed of EC2 high-CPU compute nodes and compared this against the performance of a cluster composed of equivalent processors available to the open scientific research community. My results show a significant performance gap in the examined clusters that system builders, computational scientists, and commercial cloud computing vendors need to be aware of.”

Click here for all of my del.icio.us bookmarks.

Biweekly links for 08/10/2009

  • Bad science: Hit and myth: curse of the ghostwriters
    • Excellent article explaining mechanisms by which incorrect science can be amplified and become widely accepted: “Using the interlocking web of citations you can see how this happened. A small number of review papers funnelled large amounts of traffic through the network. These acted like a lens, collecting and focusing citations on the papers supporting the hypothesis.”
  • Citing papers that you’ve never read — or that were never written « IREvalEtAl
    • “The Most Influential Paper Gerard Salton Never Wrote, an article by David Dubin tracing the history of the vector space model as applied to the field of information retrieval. In this article, Dubin points out that a highly cited paper, “A Vector Space Model for Information Retrieval”, published by Gerard Salton in 1975 in the Journal of the American Society for Information Science, does not in fact exist…Nevertheless, the non-existent article is cited 215 times according to Google Scholar.”
  • Decca Aitkenhead meets Clive James | The Guardian
    • Clive James on writing: “Thomas Mann, he said – and this is great, this is writing – he said a writer is someone for whom writing is harder than it is for other people.

      That line is perfect in every way. Not only is it perfectly written, but it’s absolutely true.

      The only thing I’ve got better at as the years have gone by is I’ve grown more resigned to the fact that it comes hard. You realise that hesitation and frustration and waiting are part of the process, and you don’t panic. I get a lot better at not panicking. I get up every morning early if it’s a writing day and I will do nothing else but write that day. But the secret is not to panic if it doesn’t come.”

  • Total Recall
    • Blog for the book “Total Recall”, a book about lifelogging, by Gordon Bell and Jim Gemmell. Many interesting tidbits about what you can do with a record of your life.
  • MyLifeBits – Microsoft Research
    • Gordon Bell’s remarkable MyLifeBits project: “MylifeBits is a lifetime store of everything. It is the fulfillment of Vannevar Bush’s 1945 Memex vision including full-text search, text & audio annotations, and hyperlinks…. a lifetime’s worth of articles, books, cards, CDs, letters, memos, papers, photos, pictures, presentations, home movies, videotaped lectures, and voice recordings and stored them digitally. He is now paperless, and is beginning to capture phone calls, IM transcripts, television, and radio.”
  • Three Rivers Institute » Approaching a Minimum Viable Product
    • I’ve been guilty of this: Kent Beck: “By far the dominant reason for not releasing sooner was a reluctance to trade the dream of success for the reality of feedback.” Interesting to think about what this means in the context of open science.
  • MediaFile » Why I believe in the link economy
    • From Chris Ahearn, President, Media at Thomson Reuters: “Blaming the new leaders or aggregators for disrupting the business of the old leaders, or saber-rattling and threatening to sue are not business strategies – they are personal therapy sessions. Go ask a music executive how well it works… If you are doing something that you would object to if others did it to you – stop. If you don’t want search engines linking to you, insert code to ban them. I believe in the link economy. Please feel free to link to our stories — it adds value to all producers of content… I don’t believe you could or should charge others for simply linking to your content. Appropriate excerpting and referencing are not only acceptable, but encouraged. If someone wants to create a business on the back of others’ original content, the parties should have a business relationship that benefits both.”
  • Scan This Book! – New York Times
    • Kevin Kelly on book digitization. I was particularly interested to see that Kelly takes very seriously both the idea that: (1) it will be near-impossible to maintain current business models built around copyright; and (2) we may end up with a lot less creative work going on as a result. Many people take (1) seriously, and many people take (2) seriously; relatively few people really do both.

Click here for all of my del.icio.us bookmarks.

Polymath4

The Polymath4 Project is now underway, with the first formal post here.

The basic problem is very simple and appealing: it’s to find a deterministic algorithm which will quickly generate a prime of at least some given length, ideally in time polynomial in that length. There are fast algorithms which will generate such a prime with high probability – cryptography algorithms like RSA wouldn’t work if that weren’t true. But there’s no known deterministic algorithm.
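As a concrete illustration of the probabilistic side, here's a sketch using Ruby's standard OpenSSL bindings (not part of the Polymath project itself; just the flavour of what "generate a prime with high probability" means in practice):

```ruby
require 'openssl'

# Probabilistic generation of a prime of a given bit length: pick
# random candidates and run a Miller-Rabin-style primality test until
# one passes. Fast in practice, but with no deterministic guarantee
# of the kind Polymath4 is after.
prime = OpenSSL::BN.generate_prime(128)

puts prime          # a 128-bit probable prime
puts prime.prime?   # true: it passes OpenSSL's primality test
```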

I’m going to miss the first week of the project – I’ll be camping in a field in the Netherlands, surrounded by 1000+ hackers. But I’m looking forward to catching up when I come back.

On a related note, John Baez asks what mathematicians need to know about blogs.

A simple Wiki with Ruby on Rails

I prepared the following simple demo for RailsNite Waterloo. It’s a very simple Wiki application, illustrating some basic ideas of Ruby on Rails development.

To get the demo running, we need a Ruby on Rails installation. I won’t explain here how to get such an installation going. See the Rails site to get things up and running. I’ll assume that you’re using an installation which includes Ruby on Rails version 1.2.* with MySQL, running on Windows, from the command line. Most of this should work with other installations as well, but I haven’t tested it.

We start from the command line, and move to the “rails_apps” directory, which typically sits somewhere within the Ruby on Rails installation. From the command line we run:

rails wiki
cd wiki

This creates a new directory called wiki, and installs some basic files into that directory. What are those files? To answer that question, you need to understand that Ruby on Rails really has two parts.

The first part is the Ruby programming language, which is a beautiful object-oriented programming language. Ruby is a full-featured programming language, and can be used to do all the things other programming languages can do. Like most programming languages, Ruby has certain strengths and weaknesses; Ruby sits somewhere in the continuum of programming languages near Python and Smalltalk.

The second part of the framework is Ruby on Rails proper, or “Rails” as we’ll refer to it from now on. Rails is essentially a suite of programs, written in Ruby, that make developing web applications in Ruby a lot easier. What happened when you ran rails wiki above is that Rails generated a basic Ruby web application for you. What all those files are that were generated is the skeleton of a Ruby web application.

So what Rails does is add an additional layer of functionality on top of Ruby. This sounds like it might be ugly, but in fact Ruby is designed to be easily extensible, and in practice Rails feels like a very natural extension of ordinary Ruby programming.

To get a Rails application going, we need to do one more piece of configuration. This is generating a database that will be used to store the data for our application. We do this using mysqladmin, which comes with MySQL:

mysqladmin -u root create wiki_development

If you’re not all that familiar with MySQL you may be wondering whether you’ll need to learn it as well as Ruby and Rails. The answer is that for basic Rails applications you only need to know the very basics of MySQL. For more advanced applications you’ll need to know more, but the learning curve is relatively gentle, and you can concentrate on first understanding Ruby and Rails. In this tutorial I’ll assume that you have a basic understanding of concepts such as tables and rows, but won’t use any complex features of relational databases.

With all our configuration set up, let’s start a local webserver. From the command line type:

ruby script/server

Now load up http://localhost:3000/ in your browser. You should see a basic welcome page. We’ll be changing this shortly.

Let’s get back to the database for a minute. You may wonder why we need a database at all, if Ruby is an object-oriented language. Why not just use Ruby’s internal object store?

This is a good question. One reason for using MySQL is that for typical web applications we may have thousands of users accessing a site simultaneously. Ruby wasn’t designed with this sort of concurrency in mind, and problems can occur if, for example, two users try to modify the same data near-simultaneously. However, databases like MySQL are designed to deal with this sort of problem in a transparent fashion. A second reason for using MySQL is that it can often perform operations on data sets much faster than Ruby could. Thus, MySQL offers a considerable performance advantage.

Using MySQL in this way does create a problem, however. Ruby is an object-oriented programming language, and it’s designed to work with objects. If all our data is being stored in a database, how can we use Ruby’s object-orientation? Rails offers a beautiful solution to this problem, known as Object Relational Mapping (ORM). One of the core pieces of Rails is a class known as ActiveRecord which provides a way of mapping between Ruby objects and rows in the database. The beauty of ActiveRecord is that from the programmer’s point of view it pretty much looks like the rows in the database are Ruby objects!
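To get a feel for what this mapping looks like from the programmer’s side, here is a toy sketch in plain Ruby. This is an illustration of the idea only – the real ActiveRecord stores rows in MySQL and generates these methods for you:

```ruby
# Toy sketch of the ActiveRecord idea (NOT the real ActiveRecord):
# rows live in a simple in-memory "table", but the programmer works
# with ordinary Ruby objects.
class ToyRecord
  @@table = []  # stands in for a database table

  attr_reader :id, :attributes

  def initialize(id, attributes)
    @id = id
    @attributes = attributes
  end

  # Insert a "row" and hand back an object wrapping it.
  def self.create(attributes)
    record = new(@@table.size + 1, attributes)
    @@table << record
    record
  end

  # Look a row up by primary key, as Page.find_by_id does in Rails.
  def self.find_by_id(id)
    @@table.find { |r| r.id == id }
  end
end

page = ToyRecord.create("title" => "Home", "body" => "Hello")
puts page.id                                    # 1
puts ToyRecord.find_by_id(1).attributes["title"]
```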

This is all a bit abstract. Let’s work through an example of ActiveRecord in action. The basic object type in our wiki is going to be a page. Let’s ask Rails to generate a model named Page:

ruby script/generate model Page

You should see the following:

      exists  app/models/
      exists  test/unit/
      exists  test/fixtures/
      create  app/models/page.rb
      create  test/unit/page_test.rb
      create  test/fixtures/pages.yml
      create  db/migrate
      create  db/migrate/001_create_pages.rb

For our purposes, the important files are app/models/page.rb, which contains the class definition for the Page model, and db/migrate/001_create_pages.rb, which is the file that will set up the corresponding table in the database.

(You’ll notice, by the way, that 001_create_pages.rb is pluralized and in lower case, while our original model name is not. This is one of the more irritating design decisions in Rails – it automatically pluralizes model names to get the corresponding database table names, and the case conventions differ as well. It’s something to watch out for.)

The next step is to decide what data should be associated with the Page model. We’ll assume that every page has a title, and a body, both of which are strings. To generate this, edit the file db/migrate/001_create_pages.rb so that it looks like this:

class CreatePages < ActiveRecord::Migration
  def self.up
    create_table :pages do |t|
      t.column "title", :string
      t.column "body", :string
    end
  end

  def self.down
    drop_table :pages
  end
end

This is known as a migration. It’s a simple Ruby file that controls changes made to the database. The migration can also be reversed – that’s what the “def self.down” method definition does. By using a series of migrations, it is possible to both make and undo modifications to the database structure used by your Rails application.
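For example, assuming the standard rake tasks in Rails 1.2, you can wind the schema back down and up again from the command line:

```shell
rake db:migrate VERSION=0   # runs self.down, dropping the pages table
rake db:migrate             # runs self.up again, recreating it
```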

Notice, incidentally, that Rails created most of the migration code for you when you asked it to generate the model. All you have to do is to fill in the details of the fields in the database table / object model.

The actual creation of the database is now done by invoking the rake command, which is the Ruby make utility:

rake db:migrate

Incidentally, when run in development mode (the default, which we’re using) the Rails webserver is really clever about reloading files as changes are made to them. This means that you can see the effect of changes as you make them. However, this doesn’t apply to changes to the structure of the database, and it’s usually a good idea to restart the webserver after using rake to run a migration. If you’re following along, do so now by hitting control C to interrupt the webserver, and then running ruby script/server again.

Now that we have a Page class set up, the next step is to add a way of interacting with the model over the web. Our wiki is going to have three basic actions that it can perform: (1) creating a page; (2) displaying a page; (3) editing a page.

To make this happen, we ask Rails to generate what is known as a controller for the Page model:

ruby script/generate controller Page

Once again, this generates a whole bunch of Ruby code. The most important file for us is app/controllers/page_controller.rb. When generated it looks like:

class PageController < ApplicationController
end

What we want is to add some Ruby methods that correspond to the three actions (displaying, creating, and editing a page) that we want to be able to do on a page. Edit the file to add the three method definitions:

class PageController < ApplicationController

def create_page
end

def display_page
end

def edit_page
end

end

(Incidentally, the names here are a bit cumbersome. I started with the simpler method names create, display and edit, and then wasted an hour or so, confused by various weird behaviour caused by the fact that the word display is used internally by Ruby on Rails. A definite gotcha!)

These methods don’t do anything yet. In your browser, load the URL http://localhost:3000/page/create_page; you’ll get an error message, since the action has no template yet. What has happened is that Rails parses the URL, determines from the first part (“page”) that it should load page_controller.rb, and from the second part that it should call the create_page action.

What is missing is one final file. Create the file app/views/page/create_page.rhtml, and add the following:

Hello world

Now reload the URL, and you should see “Hello world” in your browser. Let’s improve this so that it displays a form allowing us to create an instance of the Page model. Let’s re-edit the file so that it looks like this instead:

<% form_for :page, :url => {:action => :save_page} do |form| %>
  <p>Title: <%= form.text_field :title, :size => 30 %></p>
  <p>Body: <%= form.text_area :body, :rows => 15 %></p>
  <p><%= submit_tag "Create page" %></p>
<% end %>

There’s a lot going on in this code snippet. It’s not a raw html file, but rather a template which blends html and Ruby. In particular, if you want to execute Ruby code, you can do so using:

<% INSERT RUBY CODE HERE %>

All Ruby code is treated as an expression, and returns a value. If you want the value of that expression to be displayed by the template, you use a slight variant of the above, with an extra equals sign near the start:

<%= INSERT RUBY EXPRESSION TO BE EVALUATED HERE %>
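These .rhtml templates are processed by ERB, which ships with Ruby itself, so you can try the two tag styles outside Rails:

```ruby
require 'erb'

# <% ... %> executes Ruby code; <%= ... %> also inserts the
# expression's value into the rendered output.
template = ERB.new(<<'RHTML')
<% 3.times do |i| %>
Item number <%= i + 1 %>
<% end %>
RHTML

output = template.result(binding)
puts output   # contains "Item number 1" through "Item number 3"
```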

The first line of the snippet tells us that this is a form for objects of class Page, and that when the form is submitted, it should call the save_page action in the page controller, which we’ll add shortly. The result of the form is pretty straightforward – it does more or less what you’d expect it to do. Let’s add a save action (i.e., a method) to the page controller:

def save_page
  new_page = Page.create(params[:page])
  redirect_to :action => "display_page", :page_id => new_page.id
end

What happens is that when the form is submitted, the contents of the form fields are loaded into a Ruby hash called params[:page]. We then create a new Page model object using Page.create(params[:page]), which we call new_page. Finally, we redirect to the display_page action, passing it as a parameter a unique id associated with the new page we’ve just created.

Let’s now create a view for the display_page action. Start by editing the display_page action so that it looks like:

def display_page
  @page = Page.find_by_id(params[:page_id])
end

What is happening is that the :page_id we passed before arrives in the params hash, as params[:page_id], and we ask Rails to find the corresponding model object and assign it to the instance variable @page. We now create a view template that will display the corresponding data, in app/views/page/display_page.rhtml:

<h1><%= @page.title %></h1>
<%= @page.body %>

Okay, time to test things out. Let’s try loading up the URL http://localhost:3000/page/create_page. Type in a title and some body text, and hit the “Create page” button. You should see a webpage with your title and body text.

Let’s modify the page slightly, adding a link so we can create more pages. Append the following to the above code for the display_page template:

<%= link_to "Create page", :action => "create_page" %>

This calls a Rails helper method that generates the required html. Of course, in this instance it would have been almost equally easy to insert the html ourselves. However, the syntax of the above helper method generalizes to much more complex tasks as well, and so it’s worth getting used to using the Rails helpers.
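
For reference, the html generated by this link_to call should look something like the following (the exact path depends on your routes):

```html
<a href="/page/create_page">Create page</a>
```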

In our skeleton for the page controller we had an edit_page action. This could be implemented along very similar lines to the create_page action we’ve already described. In fact, there’s an interesting alternative, which is to use Rails’ built-in Ajax (Javascript) libraries to edit the fields in place. We’ll try this instead.

To do it, we need to make sure that the appropriate Javascript libraries are loaded whenever we load a page. There are many ways of achieving this, but one way is to create a general html layout that will be applied application-wide. Create a file named app/views/layouts/application.rhtml with the contents:

<html>
<head>
<%= javascript_include_tag :defaults %>
</head>
<body>
<%= yield %>
</body>
</html>

The javascript_include_tag helper ensures that the appropriate javascript libraries are loaded. Whenever any view from the application is displayed, the output from the view template will be inserted where the yield statement is.
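
In Rails of this vintage, javascript_include_tag :defaults expands to script tags for the bundled Prototype and Scriptaculous libraries (which the in-place editor below relies on), roughly:

```html
<script src="/javascripts/prototype.js" type="text/javascript"></script>
<script src="/javascripts/effects.js" type="text/javascript"></script>
<script src="/javascripts/dragdrop.js" type="text/javascript"></script>
<script src="/javascripts/controls.js" type="text/javascript"></script>
<script src="/javascripts/application.js" type="text/javascript"></script>
```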

To get this to work, first delete the edit_page method from the page controller. Then insert two lines so that the top of the controller reads:

class PageController < ApplicationController

in_place_edit_for :page, :title
in_place_edit_for :page, :body

[...]

Modify app/views/page/display_page.rhtml so that it reads:

<h1><%= in_place_editor_field :page, :title %></h1>
<%= in_place_editor_field :page, :body %>

Once again, we’re using Rails helpers to do something quite complex with very little code. Let’s modify it a bit further, adding a div structure, a link to make page creation easy, and a list of all pages in the database, with links to those pages.

<div id="main">
  <h1><%= in_place_editor_field :page, :title %></h1>
  <%= in_place_editor_field :page, :body, {}, {:rows => 10} %>
</div>

<div id="sidebar">
  <p><%= link_to "Create a page", :action => "create_page" %></p>
  <center><h3>Existing pages</h3></center>
  <% for page in Page.find(:all) %>
    <%= link_to page.title, :action => :display_page, :page_id => page.id %>
    <br>
  <% end %>
</div>

We’ll use a similar div structure for the create_page action:

<div id="main">
  <% form_for :page, :url => {:action => :save_page} do |form| %>
    <p>Title: <%= form.text_field :title, :size => 30 %></p>
    <p>Body: <%= form.text_area :body, :rows => 15 %></p>
    <p><%= submit_tag "Create page" %></p>
  <% end %>
</div>

Let’s modify the layout in app/views/layouts/application.rhtml in order to load a stylesheet, and add a shared header:

<html>
<head>
<%= stylesheet_link_tag 'application' %>
<%= javascript_include_tag :defaults %>
</head>
<body>

<div id="header">
  <center><h1>RailsNite Wiki</h1></center>
</div>

<%= yield %>
</body>
</html>

Finally, let’s drop a stylesheet in. Here’s a very simple one, that goes in public/stylesheets/application.css:

body 	{
	font-family: trebuchet ms, sans-serif;
	font-size: 16px;
	}

#header {
	position: absolute;
	top: 0em;
	left: 0em;
	right: 0em;
	height: 5em;
	background: #ddf;
	}

#main {
	position: absolute;
	top: 5em;
	left: 0em;
	right: 20em;
	padding: 1em;
	}

#sidebar {
	position: absolute;
	top: 5em;
	right: 0em;
	width: 20em;
	background: #efe;
	}

There you have it! A very simple Wiki in 42 lines of Rails code, with a few dozen extra lines of templates and stylesheets. Of course, it’s not much of a wiki. It really needs exception handling, version histories for pages, user authentication, and a general clean up. But it is nice to see so much added so quickly, and all those other features can be added with just a little extra effort.

The Research Funding “Crisis”

If you talk with academics for long, sooner or later you’ll hear one of them talk about a funding crisis in fundamental research (e.g. Google and Cosmic Variance).

There are two related questions that bother me.

First, how much funding is enough for fundamental research? What criterion should be used to decide how much money is the right amount to spend on fundamental research?

Second, the human race spent vastly more on fundamental research in the second half of the twentieth century than it did in the first. It’s hard to get a good handle on exactly how much more, in part because it depends on what you mean by fundamental research. At a guess, I’d say at least 1000 times as much was spent in the second half of the twentieth century as in the first. Did we learn 1000 times as much? In fact, did we learn as much, even without a multiplier?