On scaling up the Polymath project

Tim Gowers has an interesting post on the problem of scaling up the Polymath project to involve more contributors. Here are a few comments on the start of Tim’s post. I’ll return to the remainder of the post tomorrow:

As I have already commented, the outcome of the Polymath experiment differed in one important respect from what I had envisaged: though it was larger than most mathematical collaborations, it could not really be described as massive. However, I haven’t given up all hope of a much larger collaboration, and in this post I want to think about ways that that might be achieved.

As discussed in my earlier post, I think part of the reason for the limited size was the short time-frame of the project. The history of open source software suggests that building a large community usually takes considerably more time than Polymath had available – Polymath’s community of contributors likely grew faster than the communities of projects like Linux and Wikipedia did in their early days. In that sense, Polymath’s limited scale may have been in part a consequence of its own rapid success.

With that said, it’s not clear that the Polymath community could have scaled up much further, even had it taken much longer for the problem to be solved, without significant changes to the collaborative design. The trouble with scaling conversation is that as the number of people participating goes up, the effort required to track the conversation also goes up. The result is that beyond a certain point, participants are no longer working on the problem at hand, but instead simply trying to follow the conversation (cf. Brooks’ law). My guess is that Polymath was near that limit, and, crucially, was beyond that limit for some people who would otherwise have liked to be involved. The only way to avoid this problem is to introduce new social and technical means for structuring the conversation, limiting the amount of attention participants need to pay to each other, and so increasing the scale at which conversation can take place. The trick is to do this without simultaneously destroying the effectiveness of the medium as a means of problem-solving.

(As an aside, it’s interesting to think about what properties of a technological platform make it easy to rapidly assemble and grow communities. I’ve noticed, for example, that the communities in FriendFeed rooms can grow incredibly rapidly, under the right circumstances, and this growth seems to be a result of some very particular and clever features of the way information is propagated in FriendFeed. But that’s a discussion for another day.)

First, let me say what I think is the main rather general reason for the failure of Polymath1 to be genuinely massive. I had hoped that it would be possible for many people to make small contributions, but what I had not properly thought through was the fact that even to make a small contribution one must understand the big picture. Or so it seems: that is a question I would like to consider here.

One thing that is undeniable is that it was necessary to have a good grasp of the big picture to contribute to Polymath1. But was that an essential aspect of any large mathematical collaboration, or was it just a consequence of the particular way that Polymath1 was organized? To make this question more precise, I would like to make a comparison with the production of open-source software (which was of course one of the inspirations for the Polymath idea). There, it seems, it is possible to have a large-scale collaboration in which many of the collaborators work on very small bits of code that get absorbed into a much larger piece of software. Now it has often struck me that producing an elaborate mathematical proof is rather like producing a complex piece of software (not that I have any experience of the latter): in both cases there is a clearly defined goal (in one case, to prove a theorem, and in the other, to produce a program that will perform a certain task); in both cases this is achieved by means of a sequence of strings written in a formal language (steps of the proof, or lines of code) that have to obey certain rules; in both cases the main product often splits into smaller parts (lemmas, subroutines) that can be treated as black boxes, and so on.

This makes me want to ask what it is that the producers of open software do that we did not manage to do.

Here are two immediate thoughts inspired by that question, both of which concern ways large open-source projects (a) reduce barriers to entry, and (b) limit the amount of attention required from potential contributors.

Clear separation of what is known from how it is known: In some sense, to get involved in an open source project, all you need do is understand the current source code. (In many projects, the code is modular, which means you may only need to understand a small part of the code.) You don’t need to understand all the previous versions of the code, or read through all the previous discussion that led to those versions. By contrast, it was, I think, somewhat difficult to follow the Polymath project without also following a considerable fraction of the discussion along the way.

Bugtracking: One of the common answers to the question “How can I get involved in open source?” is “Submit a bug report to your favourite open source project’s bugtracking system”. The next step up the ladder is: “Fix a bug in the bugtracking system”. Bugtracking systems are a great way of providing an entry point for new contributors, because they narrow the scope of problems down, limiting both what a new contributor needs to learn and how many other contributors they need to pay attention to. Of course, many bugs will be beyond a beginning contributor’s ability to fix. But it’s easy to browse through the bug database to find something within your ability to solve. While I don’t think bugtracking is quite the right model for doing mathematics, it’s possible that a similar system for managing problems of limited scope may help in projects like Polymath.

More tomorrow.

The Polymath project: scope of participation

As I’ve mentioned before, over the past seven weeks mathematician Tim Gowers has been running a remarkable experiment in how mathematics is done, a project he dubbed the Polymath1 project. Adopting principles similar to those employed in open source programming projects, he used blogs and a wiki to organize an open mathematical collaboration attempting to find a new proof of an important mathematical theorem known as the density Hales-Jewett (DHJ) theorem. The problem was a tough one. Gowers, an accomplished professional mathematician, initially thought that the chances of success were “substantially less than 100%”, even adopting a quite modest criterion for success.

Last week, Gowers announced that the problem was “probably solved”. In fact, if the results hold up, the project has exceeded expectations. The original goal was to find a new proof of an important special case of the DHJ theorem using a particular approach Gowers suggested (or to explain why that approach failed). This goal broadened over time, and the project appears to have found a new proof of the full theorem, using an approach different from the one Gowers originally proposed. A writeup is in progress. Inspired by the work of Polymath1, mathematician Tim Austin has released a preprint claiming another proof of DHJ, citing Polymath1 as crucial to his work.

The scope of participation in the project is remarkable. More than 1000 mathematical comments have been written on Gowers’ blog, and the blog of Terry Tao, another mathematician who has taken a leading role in the project. The Polymath wiki has approximately 59 content pages, with 11 registered contributors, and more anonymous contributors. It’s already a remarkable resource on the density Hales-Jewett theorem and related topics. The project timeline shows notable mathematical contributions being made by 23 contributors to date. This was accomplished in seven weeks.

The original hope was that the project would be a “massive collaboration”. Let’s suppose we take the number above (23) as representative of the number of people who made notable mathematical contributions, bearing in mind that there are obviously substantial limitations to using the timeline in this way. (The timeline contains some pointers to notable general comments, which I have not included in this count.) It’s certainly true that 23 people is a very large number for a mathematical collaboration – a few days into the project, Tim Gowers remarked that “this process is to normal research as driving is to pushing a car” – but it also falls well short of mass collaborations such as Linux and Wikipedia. Gowers has remarked that “I thought that there would be dozens of contributors, but instead the number settled down to a handful, all of whom I knew personally”.

These numbers take on a different flavour, however, when you note that the number of people involved compares favourably even to very successful open source collaborations at the same seven-week mark. Seven weeks after its inception, Wikipedia had approximately 150 articles. This is considerably more than the Polymath1 wiki, but keep in mind that (a) the Polymath1 wiki is only a small part of the overall project; and (b) I doubt anyone would disagree that the average quality on the Polymath1 wiki is considerably higher. Similarly, while Linux has now received contributions from several thousand people, it took years to build that community. Six months after Linus Torvalds first announced Linux there were 196 people on the Linux activists mailing list, but most were merely watching. Many had not even installed Linux (at the time, a tricky operation), much less contributed substantively. I’m willing to bet more than 196 people were following Polymath1.

A great deal can be said about scaling up future projects. I believe this can be done, and that there are potentially substantial further benefits. For now, I’ll just make one remark. Long-lived open-source collaborations sometimes start with narrowly focused goals, but they typically broaden over time and become more open-ended, allowing the community of contributors to continue to grow. That’s certainly true of Linux, whose goal – the construction of an operating system kernel – is extremely broad. At the same time, that broad goal naturally gives rise to many focused and to some extent independent problems, which can be tackled in parallel by the development community. It may be possible to broaden Polymath1’s goals in a natural way at this point, but it seems like an interesting challenge to retain, at the same time, the sharp problem-oriented focus that characterized the collaboration.

wiki_tex.py

I’ve written a simple python script called wiki_tex.py to help convert LaTeX to the wiki-tex used on MediaWikis like the Polymath1 wiki, and Wikipedia. It’s a stripped-down port of a similar script I use to put LaTeX on my blog – the original script is heavily customized, which is why I haven’t made it generally available.

A zipped version of the script can be downloaded here.

To use the script, put a copy of it in the directory where you’re writing LaTeX. You also need Python installed on your machine. Macs and most Linuxes have it pre-installed; Windows doesn’t, but you can get a one-click installer at the python site. I wrote the script under Python 2.5, but would be surprised if it doesn’t work under other Python 2.x versions. I don’t know how it runs under the recently released Python 3.

To run the script, you need a command line prompt, with the working directory set to whatever directory you’re writing LaTeX in. Once in the right directory, just run:

python wiki_tex.py filename

where filename.tex is the file you’re texing. Note that the omission of the .tex suffix is intentional. The script will extract everything between \begin{document} and \end{document}, convert it to wiki-tex, and output a file filename0.wiki whose contents you can cut and paste into the MediaWiki.
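For the curious, the core extraction step is nothing fancy. Here’s a minimal sketch of the idea in Python – an illustration only, not the actual script, and with the wiki-tex conversion itself omitted:

# Illustrative sketch only: pull out the body of a LaTeX file,
# i.e., everything between \begin{document} and \end{document}.
def extract_body(source):
    start = source.find(r'\begin{document}') + len(r'\begin{document}')
    end = source.find(r'\end{document}')
    return source[start:end]

body = extract_body(open('filename.tex').read())
# ...convert body to wiki-tex here, then write the result out...
open('filename0.wiki', 'w').write(body)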

As a convenience, if you put lines of the form “%#break” into the file (these are comments, so LaTeX ignores them) then the script breaks up the resulting output into multiple files, filename0.wiki, filename1.wiki, and so on. This is useful for compiling smaller snippets of LaTeX into wiki-tex.
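For example – a made-up file purely to illustrate – suppose notes.tex contains:

\begin{document}
First snippet of LaTeX.
%#break
Second snippet of LaTeX.
\end{document}

Running python wiki_tex.py notes then produces two files, notes0.wiki and notes1.wiki, containing the converted wiki-tex for the first and second snippets respectively.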

The script is very basic. There are many conversions it won’t handle, but it copes just fine with quite a bit of basic stuff. I’d suggest starting with some small examples, and gradually working up in complexity.

The script is also not very battle-tested. It’s undergone a little testing, but this is well and truly alpha software. Let me know in comments if you find bugs, and I’ll try to fix them.


How changing the technology of collaboration can change the nature of collaboration

Update: Ilya (who made the video) reports in comments that some fraction of the effect described below is an artifact. It’s hard to say how much. Here’s Ilya’s comment:

Michael, apparently one of the reasons you see the explosion in commits is because Git correctly attributes the changeset to the author. In [Subversion] days, the patch would be submitted by some author, but then one of the core team members would merge it in (under his/her name). Basically, same workflow with Git, but with proper attribution.

Having said that, I think seeing other people commit and get their changes merged in also encourages other developers to join in on the fray!

So it may or may not be that what’s said in the post is true. But the video shown isn’t evidence for it. A pity. It’d be nice to have a clearly visualizable demonstration of this general phenomenon.

Ilya Grigorik recently pointed me to a great example which shows vividly how even relatively modest changes in the technology of collaboration can change the nature of collaboration. The example is from an open source project called Ruby on Rails. Ruby on Rails is a web development framework famous within the web development community – it’s been used to develop well-known sites such as Twitter – but, unlike, say, Linux, it’s largely unknown outside its own community. The original developer of Ruby on Rails is a programmer named David Heinemeier Hansson, who for a long time worked on the framework on his own, before other people gradually began to join him.

The short video below shows the history of the collaboration graphically – what you see are pieces of code being virtually shuttled backward and forward between different contributors to the collaboration. There’s no need to watch the whole video, although it’s fun to do so: in the bottom right of the video you’ll see a date ticking over, and you can simply fast forward to January 2008, and watch until June 2008. Here’s the video:

(Edit: It’s better in high definition at Vimeo. As it is, it’s hard to see the dates – the relevant part of the video is roughly from 4:00 to 5:30.)

What you see, very vividly, is that in April 2008, a qualitative change occurs in the collaboration. Before April, you see a relatively slow and stately development process. After April, that process explodes with vastly more contributors and activity. What happened in April was this: the Ruby on Rails developers changed the tool they used to share code. Before April they used a tool called Subversion. In April of 2008 they switched to a new tool called Git (managed through Github). As changes go, this was similar to a group of writers changing the wiki software they use to collaborate on a shared set of documents. What’s interesting is that the effect on the collaboration was so dramatic, out of proportion to our everyday experience; it’s almost as though Ernest Hemingway had achieved a qualitative improvement in his writing by changing the kind of pen he wrote with.

I won’t say much here about the details of what caused the change. Briefly, Git and Github are a lot more social than Subversion, making it easier for people to go off and experiment with code on their own, to merge useful changes back in, and to track the activity of other people. Git was, in fact, developed by Linus Torvalds, to help Linux development scale better.

The background to all this is that I’ve been collecting some thoughts about the ongoing Polymath project, an experiment in open source mathematics, and the question of how projects like Polymath can be scaled up further. I’ll have more to say about that in future posts, but for now it seemed worth describing this striking example of how changes in technology can result in changes in the nature of collaboration.

Update on the polymath project

A few brief comments on the first iteration of the polymath project, Tim Gowers’ ongoing experiment in collaborative mathematics:

  • The project is remarkably active, with nearly 300 substantive mathematical comments in just the first week. It shows few signs of slowing down.
  • It’s perhaps not (yet) a “massively” collaborative project, but many mathematicians are contributing – a quick pass over the comments suggests that so far 14 or so people have made substantive mathematical contributions, and it seems likely that number will rise further. Unsurprisingly, that number already rises considerably if you include people who have made comments on the collaborative process.
  • Regardless of the outcome of the project, I expect that many beginning research students in mathematics will find this a great resource for understanding what research is about. It’s a way of seeing research mathematicians as they work – trying ideas out, making occasional errors, backtracking, and so on. I suspect many students will find this incredibly enlightening. To pick just one example of why this may be, my experience is that many beginning students assume that the key to research success lies in having great leaps of insight to solve difficult problems. The discussion shows something quite different: you see excellent mathematicians following up every little lead, trying out many different approaches to problems, seeing many, many ideas fail, and gradually aggregating small insights, as a bigger picture only very slowly emerges.
  • The discussion so far has been courteous and professional in the highest degree. I suspect such courteous and professional behaviour greatly increases the chances of success in such a collaboration. I’m reminded of the famous Hardy-Littlewood rules for collaboration. Tim Gowers’ rules of collaboration have something of the same flavour.
  • One might say that this courtesy and professionalism is only to be expected, given the many professional mathematicians participating. Unfortunately, it’s not difficult to find excellent blogs run by professional scientists where the comment sections are notably less courteous and professional. I’ll omit examples.
  • Initially, I wasn’t so sure about the idea of using the linear medium of blog comments to run such a project. It seemed restrictive to use anything less than a multi-threaded forum, if forum software could be found that was geared towards mathematics. (Something like Google Groups would be good, but it doesn’t provide any way to display mathematics, so far as I’m aware.) The linear format has worked much better than I thought it would. Although at times it makes the discussion difficult to follow, the linear format has the benefit of preventing the conversation (and the collaborative community) from fracturing too much. This may be something to think about for future projects.
  • Many large-scale collaborative projects make it easy for late entrants to make a contribution. For example, in the Kasparov versus the World chess game, new participants could enter late in the game and come up to speed quickly. This was in part because of the nature of chess (only the current board matters, not past positions), but it was also partially because of the public analysis tree maintained for much of the game by Irina Krush. This acted as a key reference point for World Team decisions, and summarized much of the then-current best thinking about the game. In a similar way, many open source projects encourage late entry, with new participants able to jump in after looking at the existing code base (analogous to the state of the chess board), and the project wiki (analogous to the analysis tree). As the polymath project continues, I hope similar points of entry will enable outsiders to follow what is happening, and to contribute, without necessarily having to follow the entire discussion to that point.

The polymath project

Tim Gowers’ experiment in massively collaborative mathematics is now underway. He’s dubbed it the “polymath project” – if you want to see posts related to the project, I suggest looking here.

The problem to be attacked can be understood (though probably not solved) with only a little undergraduate mathematics. It concerns a result known as the Density Hales-Jewett theorem. This theorem asks us to consider the set [tex][3]^n[/tex] of all length [tex]n[/tex] strings over the alphabet [tex]\{1,2,3\}[/tex]. So, for example, [tex]11321[/tex] is in [tex][3]^5[/tex]. The theorem concerns the existence of combinatorial lines in subsets of [tex][3]^n[/tex]. A combinatorial line is a set of three points in [tex][3]^n[/tex], formed by taking a string with one or more wildcards in it, e.g., [tex]112*1**3\ldots[/tex], and replacing all the wildcards by [tex]1[/tex] to get the first point, by [tex]2[/tex] to get the second, and by [tex]3[/tex] to get the third. In the example I’ve given, the resulting combinatorial line is:

[tex] \{ 11211113\ldots, 11221223\ldots, 11231333\ldots \} [/tex]
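If it helps to see the definition concretely, here’s a little Python snippet (my own illustration, nothing to do with the project itself) that generates the combinatorial line determined by a wildcard string:

# Replace every wildcard '*' by 1, 2 and 3 in turn, giving the
# three points of the combinatorial line.
def combinatorial_line(template):
    return [template.replace('*', c) for c in '123']

print(combinatorial_line('112*1**3'))
# ['11211113', '11221223', '11231333']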

The Density Hales-Jewett theorem asserts that for any [tex]\delta > 0[/tex] and all sufficiently large [tex]n = n(\delta)[/tex], every subset of [tex][3]^n[/tex] of size at least [tex]\delta 3^n[/tex] contains a combinatorial line.

Apparently, the original proof of the Density Hales-Jewett theorem used ergodic theory; Gowers’ challenge is to find a purely combinatorial proof of the theorem. More background can be found here. Serious discussion of the problem starts here.
