Killer Bean Forever

The lead animator of The Matrix has spent the last 4 years working 14 hours a day to create a full-length feature animated movie called Killer Bean Forever (site and trailer).

Aside from general awesomeness (and that’s quite an aside) it’s amazing that it’s even possible for a single person to create something on the scale of Killer Bean Forever. With amazing open source 3D suites like Blender quickly catching up with top commercial products like Maya, and people open sourcing more and more CG effects, it won’t be long before movies like Killer Bean Forever can be produced by less dedicated individuals.


A simple Wiki with Ruby on Rails

I prepared the following simple demo for RailsNite Waterloo. It’s a very simple Wiki application, illustrating some basic ideas of Ruby on Rails development.

To get the demo running, we need a Ruby on Rails installation. I won’t explain here how to get such an installation going. See the Rails site to get things up and running. I’ll assume that you’re using an installation which includes Ruby on Rails version 1.2.* with MySQL, running on Windows, from the command line. Most of this should work with other installations as well, but I haven’t tested it.

We start from the command line, and move to the “rails_apps” directory, which typically sits somewhere within the Ruby on Rails installation. From the command line we run:

rails wiki
cd wiki

This creates a new directory called wiki, and installs some basic files into that directory. What are those files? To answer that question, you need to understand that Ruby on Rails really has two parts.

The first part is the Ruby programming language, which is a beautiful object-oriented programming language. Ruby is a full-featured programming language, and can be used to do all the things other programming languages can do. Like most programming languages, Ruby has certain strengths and weaknesses; Ruby sits somewhere in the continuum of programming languages near Python and Smalltalk.

The second part of the framework is Ruby on Rails proper, or “Rails” as we’ll refer to it from now on. Rails is essentially a suite of programs, written in Ruby, that make developing web applications in Ruby a lot easier. What happened when you ran rails wiki above is that Rails generated a basic Ruby web application for you. What all those files are that were generated is the skeleton of a Ruby web application.

So what Rails does is add an additional layer of functionality on top of Ruby. This sounds like it might be ugly, but in fact Ruby is designed to be easily extensible, and in practice Rails feels like a very natural extension of ordinary Ruby programming.
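To see what “easily extensible” means in practice, here’s a tiny plain-Ruby sketch (the shout method is an invented example, not part of Rails): Ruby lets you reopen an existing class and add methods to it, which is the same mechanism Rails uses to layer its conveniences on top of the core language.

```ruby
# Ruby classes are open: we can add a method to String itself.
# Rails uses this mechanism to sprinkle helpers through the language.
class String
  def shout
    upcase + "!"  # hypothetical helper, for illustration only
  end
end

puts "hello".shout  # prints "HELLO!"
```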

To get a Rails application going, we need to do one more piece of configuration. This is generating a database that will be used to store the data for our application. We do this using mysqladmin, which comes with MySQL:

mysqladmin -u root create wiki_development

If you’re not all that familiar with MySQL you may be wondering whether you’ll need to learn it as well as Ruby and Rails. The answer is that for basic Rails applications you only need to know the very basics of MySQL. For more advanced applications you’ll need to know more, but the learning curve is relatively gentle, and you can concentrate on first understanding Ruby and Rails. In this tutorial I’ll assume that you have a basic understanding of concepts such as tables and rows, but won’t use any complex features of relational databases.
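If tables and rows are unfamiliar, it may help to picture a table in Ruby terms, as an array of hashes: one hash per row, one key per column. The data below is made up, but it matches the shape of the pages table we’re about to create:

```ruby
# A rough mental model of a database table: each row has an id,
# plus the title and body columns our wiki will use.
pages = [
  { "id" => 1, "title" => "Home",  "body" => "Welcome!" },
  { "id" => 2, "title" => "About", "body" => "A demo wiki." }
]

# Looking up a row by id, the way the database will do for us later:
puts pages.find { |row| row["id"] == 2 }["title"]  # prints "About"
```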

With all our configuration set up, let’s start a local webserver. From the command line type:

ruby script/server

Now load up http://localhost:3000/ in your browser. You should see a basic welcome page. We’ll be changing this shortly.

Let’s get back to the database for a minute. You may wonder why we need a database at all, if Ruby is an object-oriented language. Why not just use Ruby’s internal object store?

This is a good question. One reason for using MySQL is that for typical web applications we may have thousands of users accessing a site simultaneously. Ruby wasn’t designed with this sort of concurrency in mind, and problems can occur if, for example, two users try to modify the same data near-simultaneously. However, databases like MySQL are designed to deal with this sort of problem in a transparent fashion. A second reason for using MySQL is that it can often perform operations on data sets much faster than Ruby could. Thus, MySQL offers a considerable performance advantage.

Using MySQL in this way does create a problem, however. Ruby is an object-oriented programming language, and it’s designed to work with objects. If all our data is being stored in a database, how can we use Ruby’s object-orientation? Rails offers a beautiful solution to this problem, known as Object Relational Mapping (ORM). One of the core pieces of Rails is a library known as ActiveRecord, which provides a way of mapping between Ruby objects and rows in the database. The beauty of ActiveRecord is that from the programmer’s point of view it pretty much looks like the rows in the database are Ruby objects!

This is all a bit abstract. Let’s work through an example of ActiveRecord in action. The basic object type in our wiki is going to be a page. Let’s ask Rails to generate a model named Page:

ruby script/generate model Page

You should see the following:

      exists  app/models/
      exists  test/unit/
      exists  test/fixtures/
      create  app/models/page.rb
      create  test/unit/page_test.rb
      create  test/fixtures/pages.yml
      create  db/migrate
      create  db/migrate/001_create_pages.rb

For our purposes, the important files are app/models/page.rb, which contains the class definition for the Page model, and 001_create_pages.rb, which is the file that will set up the corresponding table in the database.

(You’ll notice, by the way, that 001_create_pages.rb is pluralized and in lower case, while our original model name is not. This is one of the more irritating design decisions in Rails – it automatically pluralizes model names to get the corresponding database table name, and the cases can vary a lot. It’s something to watch out for.)
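The convention is mechanical enough to sketch in a few lines of plain Ruby. This is only an approximation, written for flavour; the real ActiveSupport inflector also handles irregular plurals like “Person” to “people”:

```ruby
# Approximate sketch of how Rails turns a model name into a table name:
# CamelCase -> snake_case, then a naive pluralization.
def table_name_for(model_name)
  snake = model_name.gsub(/([a-z])([A-Z])/, '\1_\2').downcase
  snake.end_with?("s") ? snake : snake + "s"
end

puts table_name_for("Page")      # prints "pages"
puts table_name_for("WikiPage")  # prints "wiki_pages"
```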

The next step is to decide what data should be associated with the Page model. We’ll assume that every page has a title, and a body, both of which are strings. To generate this, edit the file db/migrate/001_create_pages.rb so that it looks like this:

class CreatePages < ActiveRecord::Migration
  def self.up
    create_table :pages do |t|
      t.column "title", :string
      t.column "body", :string
    end
  end

  def self.down
    drop_table :pages
  end
end
This is known as a migration. It’s a simple Ruby file that controls changes made to the database. The migration can also be reversed – that’s what the “def self.down” method definition does. By using a series of migrations, it is possible to both make and undo modifications to the database structure used by your Rails application.

Notice, incidentally, that Rails created most of the migration code for you when you asked it to generate the model. All you have to do is to fill in the details of the fields in the database table / object model.

The actual creation of the database is now done by invoking the rake command, which is the Ruby make utility:

rake db:migrate

Incidentally, when run in development mode (the default, which we’re using) the Rails webserver is really clever about reloading files as changes are made to them. This means that you can see the effect of changes as you make them. However, this doesn’t apply to changes to the structure of the database, and it’s usually a good idea to restart the webserver after using rake to run a migration. If you’re following along, do so now by hitting control C to interrupt the webserver, and then running ruby script/server again.

Now that we have a Page class set up, the next step is to add a way of interacting with the model over the web. Our wiki is going to have three basic actions that it can perform: (1) creating a page; (2) displaying a page; (3) editing a page.

To make this happen, we ask Rails to generate what is known as a controller for the Page model:

ruby script/generate controller Page

Once again, this generates a whole bunch of Ruby code. The most important file for us is app/controllers/page_controller.rb. When generated it looks like:

class PageController < ApplicationController
end

What we want is to add some Ruby methods that correspond to the three actions (displaying, creating, and editing a page) that we want to be able to do on a page. Edit the file to add the three method definitions:

class PageController < ApplicationController

  def create_page
  end

  def display_page
  end

  def edit_page
  end

end

(Incidentally, the names here are a bit cumbersome. I started with the simpler method names create, display and edit, and then wasted an hour or so, confused by various weird behaviour caused by the fact that the word display is used internally by Ruby on Rails. A definite gotcha!)

These methods don’t do anything yet. In your browser, load the URL http://localhost:3000/page/create_page. You’ll get an error message complaining that the template is missing. In fact, what has happened is that Rails parses the URL, determines from the first part (“page”) that it should load page_controller.rb, and from the second part that it should call the create_page action.
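The default route is simple enough that we can mimic the controller/action split in a couple of lines of plain Ruby (Rails’ real routing is considerably more configurable than this sketch):

```ruby
# Roughly how Rails splits /page/create_page into controller and action:
path = "/page/create_page"
controller, action = path.split("/").reject(&:empty?)

puts controller  # prints "page"
puts action      # prints "create_page"
```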

What is missing is one final file. Create the file app/views/page/create_page.rhtml, and add the following:

Hello world

Now reload the URL, and you should see “Hello world” in your browser. Let’s improve this so that it displays a form allowing us to create an instance of the Page model. Let’s re-edit the file so that it looks like this instead:

<% form_for :page, :url => {:action => :save_page} do |form| %>
  <p>Title: <%= form.text_field :title, :size => 30 %></p>
  <p>Body: <%= form.text_area :body, :rows => 15 %></p>
  <p><%= submit_tag "Create page" %></p>
<% end %>

There’s a lot going on in this code snippet. It’s not a raw html file, but rather a template which blends html and Ruby. In particular, if you want to execute Ruby code, you can do so using:

<% some ruby code %>

All Ruby code is treated as an expression, and returns a value. If you want the value of that expression to be displayed by the template, you use a slight variant of the above, with an extra equals sign near the start:

<%= some ruby expression %>
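The same templating syntax is available outside Rails, in Ruby’s standard ERB library, so you can experiment with it directly (this toy template is my own example, not part of the wiki):

```ruby
require "erb"

# <% ... %> executes Ruby; <%= ... %> inserts the expression's value.
template = ERB.new("<% 3.times do |i| %><%= i %> <% end %>")
puts template.result(binding)  # prints "0 1 2 "
```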
The first line of the snippet tells us that this is a form for objects of class Page, and that when the form is submitted, it should call the save_page action in the page controller, which we’ll add shortly. The result of the form is pretty straightforward – it does more or less what you’d expect it to do. Let’s add a save action (i.e., a method) to the page controller:

def save_page
  new_page = Page.create(params[:page])
  redirect_to :action => "display_page", :page_id => new_page.id
end

What happens is that when the user clicks the submit button, the details of the form fields are loaded into a Ruby hash called params[:page]. We then create a new Page model object using Page.create(params[:page]), which we call new_page. Finally, we redirect to the display_page action, passing it as a parameter a unique id associated with the new page we’ve just created.

Let’s now create a view for the display_page action. Start by editing the display_page action so that it looks like:

def display_page
  @page = Page.find_by_id(params[:page_id])
end

What is happening is that the :page_id we passed before arrives in the params hash, as params[:page_id], and we ask Rails to find the corresponding model object and assign it to the instance variable @page. We now create a view template that will display the corresponding data, in app/views/page/display_page.rhtml:

<h1><%= @page.title %></h1>
<%= @page.body %>

Okay, time to test things out. Let’s try loading up the URL http://localhost:3000/page/create_page. Type in a title and some body text, and hit the “Create page” button. You should see a webpage with your title and body text.

Let’s modify the page slightly, adding a link so we can create more pages. Append the following to the above code for the display_page template:

<%= link_to "Create page", :action => "create_page" %>

This calls a Rails helper method that generates the required html. Of course, in this instance it would have been almost equally easy to insert the html ourselves. However, the syntax of the above helper method generalizes to much more complex tasks as well, and so it’s worth getting used to using the Rails helpers.

In our skeleton for the page controller we had an edit_page action. This could be done along very similar lines to the create_page action we’ve already described. In fact, there’s an interesting alternative, which is to use Rails’ built-in Ajax (JavaScript) libraries to edit the fields in place. We’ll try this instead.

To do it, we need to make sure that the appropriate Javascript libraries are loaded whenever we load a page. There are many ways of achieving this, but one way is to generate a general html layout that will be applied application wide. Create a file named app/views/layouts/application.rhtml with the contents:

<%= javascript_include_tag :defaults %>
<%= yield %>

The javascript_include_tag helper ensures that the appropriate JavaScript libraries are loaded. Whenever any view from the application is displayed, the output from the view template will be inserted where the yield statement is.
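That yield behaves just like yield in ordinary Ruby: the layout is effectively a method, and the view’s output plays the role of the block it calls. Here’s a plain-Ruby analogy (the layout method and strings are invented for illustration):

```ruby
# A layout "wraps" whatever the block (the view) produces,
# just as application.rhtml wraps each view's output.
def layout
  "<header/>" + yield + "<footer/>"
end

puts layout { "page body" }  # prints "<header/>page body<footer/>"
```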

The final steps required to get this to work are to first delete the edit_page method from the page controller. Then modify the controller by inserting two lines so that the top reads:

class PageController < ApplicationController

  in_place_edit_for :page, :title
  in_place_edit_for :page, :body


Modify app/views/page/display_page.rhtml so that it reads:

<h1><%= in_place_editor_field :page, :title %></h1>
<%= in_place_editor_field :page, :body %>

Once again, we’re using Rails helpers to make something very simple. Let’s modify it a bit further, adding a div structure, adding a link to make page creation easy, and adding a list of all pages in the database, with links to those pages.

<div id="main">
  <h1><%= in_place_editor_field :page, :title %></h1>
  <%= in_place_editor_field :page, :body, {}, {:rows => 10} %>
</div>

<div id="sidebar">
  <p><%= link_to "Create a page", :action => "create_page" %></p>
  <center><h3>Existing pages</h3></center>
  <% for page in Page.find(:all) %>
    <%= link_to page.title, :action => :display_page, :page_id => page.id %>
  <% end %>
</div>

We’ll use a similar div structure for the create_page action:

<div id="main">
  <% form_for :page, :url => {:action => :save_page} do |form| %>
    <p>Title: <%= form.text_field :title, :size => 30 %></p>
    <p>Body: <%= form.text_area :body, :rows => 15 %></p>
    <p><%= submit_tag "Create page" %></p>
  <% end %>
</div>

Let’s modify the layout in app/views/layouts/application.rhtml in order to load a stylesheet, and add a shared header:

<%= stylesheet_link_tag 'application' %>
<%= javascript_include_tag :defaults %>

<div id="header">
  <center><h1>RailsNite Wiki</h1></center>
</div>

<%= yield %>

Finally, let’s drop a stylesheet in. Here’s a very simple one, that goes in public/stylesheets/application.css:

body {
  font-family: trebuchet ms, sans-serif;
  font-size: 16px;
}

#header {
  position: absolute;
  top: 0em;
  left: 0em;
  right: 0em;
  height: 5em;
  background: #ddf;
}

#main {
  position: absolute;
  top: 5em;
  left: 0em;
  right: 20em;
  padding: 1em;
}

#sidebar {
  position: absolute;
  top: 5em;
  right: 0em;
  width: 20em;
  background: #efe;
}
There you have it! A very simple Wiki in 42 lines of Rails code, with a few dozen extra lines of templates and stylesheets. Of course, it’s not much of a wiki. It really needs exception handling, version histories for pages, user authentication, and a general clean up. But it is nice to see so much added so quickly, and all those other features can be added with just a little extra effort.

Refactoring checklist

Programming has much in common with other kinds of writing. In particular, one of the goals of programming, as with writing, is to write in a way that is both beautiful and interesting. Of course, as with other kinds of writing, one’s first drafts are typically neither beautiful nor interesting. Rather, one starts by hacking something out that achieves one’s ends, without worrying too much about the quality of the code.

For a certain kind of program, this approach is sufficient. One may write a quick throwaway program to accomplish a one-off task. However, for more ambitious programs, this kind of throwaway approach is no longer feasible; the more ambitious the goals, the higher the quality of the code you must produce. And the only way to get high quality code is to start with the first draft of your code, and then gradually rewrite – refactor – that code, improving its quality.

I’m a relative novice as a programmer, and I’m still learning the art of refactoring. I try to set aside a certain amount of time for improving the quality of my code. Unfortunately, in my refactoring attempts, I’m occasionally stymied, simply going blank. I may look at a section of code and think “gosh, that’s terrible”, but that doesn’t mean that I instantly have concrete, actionable ways of improving the quality of the code.

To address this problem, I produced the following refactoring checklist as a way of stimulating my thinking when I’m refactoring. The items on the checklist are easy and mechanical to check, and so provide a nice starting point for improving things and getting more deeply into the code.

  • Start by picking a single method to refactor. Attempting to refactor on a grander scale is too intimidating.
  • If a method is more than 40 lines long, look for ways of splitting out pieces of functionality into other methods.
  • Are variables, methods, classes and files well named? This question can often be broken down into two subquestions. First, is there any way I can improve the readability of my code by changing the names? Second, is my naming consistent? So, for example, a file named “bunny” should define a class named “Bunny”, not “Rabbit”.
  • When I see a problem, do something about it, even if I can’t see a fix right away. In particular, at the very least add a comment which attempts to describe the problem. This makes future refactoring easier, and sends a clear message to my subconscious that I mean to eliminate all the bad code from my program. As a bonus, more often than not it stimulates some ideas for how to improve the code.
  • Partial solutions are okay. It’s fine to replace an ugly hack by a slightly less ugly hack.
  • How readable is the code being refactored? In particular, is it obvious what this section of code does? If not, how could I make it obvious?
  • Explain the code to someone else. I’ve never explained a piece of code to someone without having a bunch of ideas for how to improve it.
  • Is there a way of abstracting away what is being done, in a way that would enable other uses?

Announcement: RailsNite Waterloo

Announcing RailsNite Waterloo, to be held Monday, Nov 12, 6pm-9pm, at Caesar Martini’s (140 University Ave West, map).

Ruby on Rails is a powerful and easy-to-use Web development framework, based on the Ruby programming language. It’s being widely used by web startups, including Twitter, 37 Signals (Basecamp), and 43 Things.

RailsNite Waterloo is an opportunity for people interested in Rails and in the Waterloo region to meet, share knowledge, and ask questions. The format will be informal, and mostly oriented towards meeting people and discussion. Everyone is encouraged to attend, from those who are simply interested in Rails through to experienced developers.

We’ll have one or more short presentations during the evening. Please RSVP to Michael Nielsen by Nov 12 if you’re attending. Please also encourage other people to attend!

Hope to see you there!

Ilya Grigorik and Michael Nielsen (Organizers)


Is Google buying local bandwidth?

I’ve been using the Unix utility ping to check the speed I can relay information to and from various organizations. I tried a bunch of sites, and round-trip times of 30-100 ms were common for sites in North America, with 80-150 ms typical for sites outside. I’m based in Waterloo, Canada, so that distribution is not surprising.

Google, however, consistently came in with times between 5-20ms. So far as I know, they don’t have any local datacenters, so I’m wondering how they’re doing this, and, if they’re creating local infrastructure to serve results quickly, how broadly they’re doing it.

Could readers in other cities try doing the same experiment, and put the results in comments?

Update: Well, that was quick. A helpful commenter points me to, which basically automates this. Looks like I’m seeing a local anomaly – presumably Google has a data center somewhere nearby.


Refactoring Prose

One of the most interesting ideas from software development is that of refactoring code.

A famous book on refactoring identifies a set of “code smells” that indicate when something is wrong with your code (blog). The book is in part diagnostic, explaining how to identify poor code and determine what the problem is. It is also prescriptive, explaining how to fix many common problems.

I like the idea of producing a similar list of “writing smells” to help writers identify bad prose, figure out what the problem is, and improve it. Such a list would be much more useful than the laundry lists of do’s and don’ts that form the bulk of style manuals such as Strunk and White. Those laundry lists usually focus on the problem of writing clearly, a problem for which it is possible to give relatively precise rules, not on the vaguer problem of writing in a way that is interesting. For the latter problem, the concept of writing smells and refactoring prose is, I think, just what is needed.

An example of such a list of writing smells is provided by the book “Made to Stick” by Chip and Dan Heath. Summarizing the book here doesn’t do it justice, but it does suggest the following list of prose smells:

  • Is the prose unnecessarily complex?
  • Is the prose surprising? If not, why bore your reader?
  • Are abstract points illustrated by concrete examples and stories?
  • Is the prose credible enough? Where are the most significant credibility gaps?
  • Does the prose engage the reader’s emotions at all?
  • Does the prose engage issues the reader considers important?

These are all “just” common sense, of course, but I definitely find that when I take the time to apply the Heaths’ list my writing improves considerably.


Information Aggregators

One of the most interesting classes of tools that I know of is information aggregators. These are tools which pull in information from multiple sources, and consolidate it into a smaller and more easily digested number of streams.

There are many such information aggregators, and we use them all the time without thinking – things like the Encyclopedia, the newspaper, the library, and so on. I’d never thought much about them until a few years ago, when I started using RSS feedreaders like Google Reader and Bloglines to keep up with my blogreading.

Up to that point, I used to read a dozen or so blogs regularly. Every day I’d do the rounds, laboriously checking each blog for updates.

RSS feedreaders changed that. They pull information from many blogs into a single stream of information, giving a huge boost to reading efficiency. Nowadays I read 200 or so blogs regularly (see my blogroll, on the right), and it takes perhaps half an hour or so a day. This is partially because I’m pretty ruthless about skimming content, and only focus on a small fraction of what is posted. But it’s also because the feedreader makes it much easier to track a large number of sources. Even the mechanics of skimming / focused reading are made easier by the feedreaders, because they simplify all the other mechanics of reading.

The more I thought about this, the more surprising I found it. Today I process several times more information from blogs than I did a few years ago, and yet it takes me about the same amount of time. A huge fraction of my former “reading” was not really reading at all. Rather, it was being spent on switching costs caused by the heterogeneity of the information sources I was using.

Okay, that’s blogs. What about everything else? I, and, I suspect, many of my readers, spend a great deal of time dealing with heterogeneous, fine-grained sources of information – someone mentions something interesting in conversation, passes me a note, recommends a book, or whatever. I use an ad hoc personal system to integrate all this, and it’s really not a very good system. How much more can I improve my ability to deal with information? How can I better integrate all this information into my creative workflow so that I have access to information precisely when I need to know it?

Lots of people talk about information overload. But, actually, a huge fraction of the problem isn’t overload per se; it’s that (1) the problem is too open-ended, absent good tools for organizing information and integrating it into our lives; and (2) we waste a lot of our time on the switching costs associated with multiple sources of information.

This post has been pretty personal, up to now. Let’s take a few steps back, and look at things in a broader context. Over the past 15 years, we’ve seen an explosion in the number of different sources of information. Furthermore, much of that information has been pretty heterogeneous. If they’re smart, content producers don’t sit around waiting for standards committees to agree on common formats for information; they put it up on the network, and worry about standards when (if) they ever become an issue.

There’s an interesting arms race going on on the web. First, people innovate, producing new content in a multitude of new media types and formats; and then people consolidate, producing aggregators which pull in the other direction, consolidating information so that it can be more easily digested, and reducing the heterogeneity in formats. (They also produce filters, and organize the information in other ways, but that’s a whole other topic!) We saw a cycle in this arms race play out first with information about things like travel, for which it was relatively straightforward to aggregate the information. Now we’re seeing more complex types of data be aggregated – stuff like financial data, with single portals like through which we can manage all our financial transactions, across multiple institutions. This week’s announcement of OpenSocial by Google is a nice example – it’ll help aggregate and consolidate all the data produced by Social Networks like LinkedIn, MySpace, and SixApart. The result of this arms race is gradually improving our ability to manage large swathes of information.

A notable aspect of all this is that a lot of power and leverage ends up not in the hands of the people who originally produce the information, but in the hands of the people who aggregate it, especially if they add additional layers of value by organizing that information in useful ways. We can expect to see a major associated economic shift, as people move away from being content producers towards adding value through the aggregation and organization of information. It will be less visible, but I think this shift is in some ways as important as the 19th-century shift of work off farms and into factories.

The aggregation metaphor is a useful one, and at the moment lots of really successful tools do aggregation and not much else. I think, though, that in the future the best aggregators will combine with other metaphors to help us more actively engage with information, by organizing it better, and better incorporating it into our workflow.

We’re a long ways short of this at present. For example, although Bloglines has organizational tools built in, they’re not very good. Still, the overall service is good enough to be worth using. Another example is some newspapers which rely on syndicated content from multiple sources for their best news, and do relatively little reporting of their own; they still offer a useful service to their communities.

Part of the problem is that we don’t yet have very powerful metaphors for organizing information, and for better integrating it into our workflow. Files and tags are awful ways of organizing information, although they are getting better in settings where collective intelligence can be brought to bear. Search is a more powerful metaphor for organizing personal information, as Gmail and Google Desktop show, but it’s still got a long way to go. We need more and better metaphors for organizing information. And workflow management is something that no-one has figured out. Why don’t my tools tell me what I need to know, when I need to know it?

With that said, there are some useful tools out there for organizing information and integrating it into our workflow. For an aggregation service like Bloglines it’d be nice to have simple, well-designed integration with tools like Flickr and, which do provide useful ways of organizing content, or to Basecamp or Trac, which are oriented towards managing one’s workflow. Part of the success of Facebook and Gmail are that they integrate, in limited ways, information aggregation, organization, and workflow management.

To do this kind of integration one can either try to do it all, like Facebook, or else try to integrate with other products. There are a lot of problems with the latter kind of integration. Developers distrust APIs controlled by other companies: if another company can, with the flick of a switch, shut down a large part of your functionality, that’s a problem, and it might cause you to think about developing your own tagging system. Sounds great, except now your users have two sets of tags, and everyone loses. The only solution I know of to this problem is open standards, which bring problems of their own.

I’ve been talking about this from the point of view of typical users. To finish off, I want to switch to a different type of user: the tool-building programmer. It’s interesting how even advanced programming languages like Python and Ruby are not designed to deal with aggregating and organizing large quantities of information. Joel Spolsky has a great related quote:

A very senior Microsoft developer who moved to Google told me that Google works and thinks at a higher level of abstraction than Microsoft. “Google uses Bayesian filtering the way Microsoft uses the if statement,” he said. That’s true. Google also uses full-text-search-of-the-entire-Internet the way Microsoft uses little tables that list what error IDs correspond to which help text. Look at how Google does spell checking: it’s not based on dictionaries; it’s based on word usage statistics of the entire Internet, which is why Google knows how to correct my name, misspelled, and Microsoft Word doesn’t.

Where are the programming languages that have Bayesian filters, PageRank, and other types of collective intelligence as a central, core part of the language? I don’t mean libraries or plugins, I mean integrated into the core of the language in the same way the if statement is. Instead, awareness of the net is glued on through things like XML libraries, REST, and so on.
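For concreteness, here is what the seed of such a built-in might look like: a toy naive-Bayes classifier in plain Ruby. It’s an illustration only; a real filter would need careful tokenization, better priors, and far more training data.

```ruby
# Toy naive-Bayes text classifier: counts words per label during
# training, then scores new text with add-one smoothing in log space.
class TinyBayes
  def initialize
    @counts = Hash.new { |h, k| h[k] = Hash.new(0) }
    @totals = Hash.new(0)
  end

  def train(label, text)
    text.downcase.scan(/\w+/).each do |word|
      @counts[label][word] += 1
      @totals[label] += 1
    end
  end

  def classify(text)
    words = text.downcase.scan(/\w+/)
    @totals.keys.max_by do |label|
      words.reduce(Math.log(@totals[label])) do |score, word|
        # add-one smoothing keeps unseen words from zeroing the score
        score + Math.log((@counts[label][word] + 1.0) /
                         (@totals[label] + @counts[label].size))
      end
    end
  end
end

bayes = TinyBayes.new
bayes.train(:spam, "buy cheap pills now")
bayes.train(:ham,  "meeting notes for the rails demo")
puts bayes.classify("cheap pills")  # prints "spam"
```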

(I can’t resist two side remarks. First, what little I know of Prolog suggests that it has something of the flavour I’m talking about. And I also have to mention that my friend Laird Breyer taught his Bayesian spam filter to play chess.)

An example which captures part (though only part) of what I’m talking about is emacs. One of the reasons emacs is such a wonderful text editor is that it isn’t really a text editor. It’s a programming language and development platform specifically geared toward the development of text editing tools, and which happens to have a very nice text editor built into it. This is one of the reasons why people like emacs – it’s possible to do everything related to text through a single programmable interface which is specifically geared towards the processing of text. What would a language and development platform for information aggregation, organization and workflow management look like?