Architecture is politics: community building and the success of Wikipedia

I find it surprising, almost shocking, that a system as apparently bottom up as Wikipedia works as well as it does. Anybody really can contribute, even anonymously, and without fear of damaging their reputation; while people can be banned, only a tiny fraction of users ever are; it’s easy and very common for people to vandalize pages. A priori it seems like a recipe for disaster, yet for the most part it works quite well in practice.

I used to think the explanation for Wikipedia’s success was excellent leadership focused strongly on building a healthy and vibrant community. This certainly does play a significant role – check out this inspiring podcast from Wikipedia founder Jimmy Wales, who has thought long and hard about building great communities.

But great leadership and community building are only a partial explanation for Wikipedia’s success. It is clearly also the case that the design of the MediaWiki software underlying Wikipedia contributes in a major way to the success of the project.

This thought crystallized out for me in a particularly nice way this morning in the form of two fantastic slogans, Mitch Kapor’s “architecture is politics” and Lawrence Lessig’s “code is law” (book, free online).

Kapor’s observation, in particular, explains and neatly summarizes a great deal about community-building projects like Wikipedia. Tweak the software even a little, and one can cause enormous changes in how community interaction is mediated, and thus in how the community functions. Wikipedia is less bottom up than it appears, for it is a relatively small group of people who are effectively responsible for the design and development of the underlying software. Provided those people are competent and committed (I have no reason to doubt this), they can exert an enormous positive influence over the project as a whole, dwarfing the impact of random vandals.

An interesting consequence of this is that, in my opinion, other large projects like Wikipedia are going to need their own independent forks of the underlying software. Different communities have different needs, even during their own evolution, and the ability to change the underlying architecture of the system, and thus affect the politics, is an enormously powerful lever to have in forming a healthy community. In practice, I think this means that for ambitious projects the code used will need to be forked from an existing codebase (like MediaWiki), and developed alongside the community. Successful projects will require not just a healthy community, but a healthy co-evolution of community and codebase. To date this seems to have happened relatively rarely, and I wonder if this isn’t part of the reason why more Wiki projects haven’t succeeded on a large scale – maybe the architecture is mismatched to the needs of the community?

**

Related question: Mel Conway has observed that creative artifacts produced by organizations inevitably reflect the channels of communication and control in those organizations. What does this principle imply about Wikipedia?

Related posts: Kevin Kelly has recently written some great posts (here and here) about Wikipedia. Teresa Nielsen Hayden has written a nice post about building online communities. Clay Shirky also has some very interesting thoughts about the dynamics of online groups.

US Presidential Science Debate

I didn’t think the proposed Science Debate would ever happen, but it’s starting to look like a serious possibility. See here for more details.

Published

Changing my routine: a 4-week trial

I’m partway through a pretty major change in my creative focus at the moment. As part of this, I’m trialling some changes in my routine over the four weeks from Mon Feb 11 through to Sun Mar 9, partly to test how these new elements of my routine will go, and partly to help build new habits. I’m posting about it here in order to up my commitment to the trial; I’ll also post an evaluation once I’m done. I’d be very interested to hear about other people’s experiences in changing their routines.

Note that all quantities are per week, and, of course, this list is far from all I’m doing each week – part of the point is to see how these things integrate with everything else I’m doing.

  • 3000 words published on my blog, including at least one substantive essay.
  • 500 lines changed (either added, deleted, or edited) in my draft book “The Future of Science”.
  • 10 hours work on Academic Reader.
  • 3 hours learning new APIs.
  • 4 hours of vigorous exercise.

Tagging SciBarCamp

I notice some people are using “SciBarCamp” as a blog category for blog posts about SciBarCamp. I’m going to start tagging my posts in the same way, and strongly encourage others to do the same, to make it easy to find all the relevant posts at Technorati:

http://technorati.com/tag/SciBarCamp
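For anyone adding the tag by hand rather than through their blogging software, Technorati also recognizes the rel-tag microformat: an ordinary link to the tag page, marked with rel="tag". A minimal snippet (the visible link text is up to you; it’s the last segment of the URL that carries the tag):

```html
<!-- A rel="tag" link; Technorati treats the final path segment as the tag. -->
<a href="http://technorati.com/tag/SciBarCamp" rel="tag">SciBarCamp</a>
```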

I’m going to blog some of the things I’m particularly interested in hearing about, and I’d love it if other participants were to blog their ideas as well.

Published
Categorized as SciBarCamp

How we work

Here’s a wonderful site if, like me, you’re fascinated by how people do their work: rodcorp (via Kevin Kelly).

I love this kind of thing. Whenever I can get a person talking about their work, I have yet to find a job whose ins and outs weren’t fascinating.

My personal favourite was perhaps the Newcastle cabdriver who, it turned out, was a significant player in the local cab registration market, an artifact of fantastic (and fascinating) human and legal complexity. He was planning to shortly retire on the proceeds of a small collection of license plates that he’d purchased at opportune times; at the time the going rate was about $150,000, if I recall correctly. He informed me that driving a cab was for the birds; the real money was in buying and selling the plates.

Published

Intellectual property, automated contracts, and the free flow of information

How should society reward people who write books, paint pictures, make music, write programs, or who are otherwise creative? We have an established intellectual property tradition ensuring such people’s efforts are rewarded. That tradition is now breaking down, as the internet enables creative content to be duplicated and redistributed to anyone at near zero cost. As a result, there’s a lot of ferment in people’s thinking about intellectual property, resulting in rapid technological and commercial changes that will determine both the future incentives for people to create, and the extent to which we’ll take full advantage of digital technology.

This essay on intellectual property has been influenced by many people, especially Lawrence Lessig’s excellent books The Future of Ideas and Free Culture, the Creative Commons organization, and Cory Doctorow’s writing. I believe there are also some new ideas here, especially my explanation of how automated contracts may help lead to a world which balances the goal of providing people with incentives to create, and the goal of fully developing new digital technologies.

The conventional narrative

There’s a conventional narrative about intellectual property and the internet that exerts a magnetic pull on discussion about these issues. I’m going to set this narrative out explicitly here, in part so we can set it aside, and head off in an apparently quite different direction, before circling around to address these issues.

What the conventional narrative says is that there’s a war going on between the pirates and the content creators, or, more precisely, the content distributors, who, at present, typically represent the (commercial) content creators. (The content creators and content distributors often work together, at least at present, and so I’ll use the catch-all term content producers for both groups.) Because the internet enables information to be copied at near-zero cost, the pirates are putting everything online willy-nilly, available to be freely downloaded by anyone. The content producers are fighting a rearguard action, lobbying for strengthened copyright laws, imposing digital rights management technologies designed to make it difficult to manipulate digital media, and aggressively pursuing people they suspect of violating their copyrights.

Along with this conventional narrative is a conventional set of arguments associated with each band of participants. I won’t go into the details of these arguments, except to note that on both sides it seems to me the arguments stem mostly from self-interest. This is transparently so in the case of content producers such as the RIAA and MPAA. It also seems to me that much of the rhetoric from those who download “free” music and video is ultimately rooted in self-interest. For the most part, we’re going to ignore these arguments, and focus
instead on the question of what’s best for society as a whole, a question whose answer must accommodate both the needs of the content creators, and also the full development of digital technology.

The creative commons

Rather than pick the conventional narrative and its protagonists’ arguments apart, let’s broaden our field of vision. In particular, let’s look at an important concept dubbed the creative commons by Lawrence Lessig. The creative commons is the set of creative ideas that are available for all of humanity to reuse. Examples of items in the creative commons are things like scientific formulae, open-source software (sometimes with some restrictions), out-of-copyright books and music, myths, and many other cultural traditions. Collectively, these items form a creative commons of ideas that can be drawn on and used to create new things.

What is the impact of the creative commons? On the scientific side, results in the creative commons have laid the foundation for much of the modern economy. An example is Maxwell’s equations, which govern electrical and magnetic phenomena, and which are at the heart of all uses of electricity. Another example is the invention of the periodic table, which is at the foundation of all applications of chemistry. Imagine how much benefit would have been lost to the world if the use of Maxwell’s equations or the periodic table had needed to be licensed under an intellectual property scheme! Fortunately, basic science has a tradition that results are put in the public domain where anyone can use them. Legally, it’s not possible to patent or copyright a mathematical theorem, or a formula in physics or chemistry. This tradition of putting things in the public domain has been eroded in recent years, as Universities in many countries are urged to be more corporate in their approach, but by and large the outcomes of pure research may be freely reused and built upon by others.

In the software world, the creative commons is also flourishing, in large part because of the free and open source software movement pioneered by Richard Stallman, and now being carried forward by projects such as GNU, Linux, Apache, Firefox, MySQL, and thousands of smaller projects. The impact of this software is enormous. Huge companies such as Google, Yahoo, eBay and Amazon run large parts of their operation using open source software such as Linux, Apache, MySQL, Perl, Python and PHP, the famous LAMP stack. The LAMP stack of software is ubiquitous in web startups; the combination of free open source software and cheap commodity hardware means that anyone with some programming talent, a few weeks or months to spare, and a few thousand dollars can launch a web startup. The result is an extraordinary explosion of innovation, built off this open source software.

On the cultural and entertainment side, the creative commons has not fared so well. Over the last fifty years copyright terms in many countries have been greatly extended, and made far more restrictive. This continues as content producers lobby for ever tighter restrictions on reuse, in part as a way of defeating the pirates. People making movies and documentaries routinely spend enormous amounts of time and money tracking down the rights for every object that appears in their production. Even when a case can be made for “fair use”, a slow and expensive legal system means that most creators are not willing to risk reusing someone else’s intellectual property without their explicit permission. The result is that it is no longer possible to freely build upon and extend past cultural products. In the short run, this benefits content producers. Over the long run, this contraction of the creative commons hurts everyone.

The conflict between content producers and content organizers

The conventional narrative I described earlier emphasizes the battle between pirates and content producers. There is a much less visible conflict that is also going on between content producers and content organizers, companies like Google, Technorati and Apple (through iTunes) who aggregate and structure information. In fact, these two conflicts are closely related, and, as we’ll see, understanding the producer-organizer conflict sheds light on the producer-pirate conflict.

As an example of the producer-organizer conflict, consider that in 2006 a group of Belgian newspapers sued Google, ostensibly to get snippets of their news stories removed from Google News (full story). In fact, the newspapers were well aware that this could be easily achieved by putting a simple file on their webservers that would instruct Google’s web crawler to ignore their site. It’s difficult to know what the real purpose of their lawsuit was, but it seems likely that it was part of a ploy to pressure Google into paying the newspapers for permission to reuse the newspapers’ content.
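The “simple file” mentioned here is the robots exclusion standard: a plain-text file named robots.txt served from a site’s root, which well-behaved crawlers consult before indexing. A minimal example of the kind of rules the newspapers could have deployed (Googlebot is the actual name of Google’s crawler; the site address is invented):

```
# robots.txt, served at http://example-newspaper.be/robots.txt
User-agent: Googlebot
Disallow: /
```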

This story is only one of many examples of a growing tension between content producers and content organizers. Many producers view organizers as essentially stealing their content, in some cases regarding it as not dissimilar to file-sharing services for music and video. Furthermore, the tension is mounting sharply as people develop more services for organizing information, and profits increasingly flow toward the organizers rather than the producers.

As another example, in 2007 Google had advertising revenues of approximately 16 billion dollars(!), most of it from search. Yet, according to one study, approximately twenty-five percent of the number one search results on Google led to Wikipedia. Wikipedia, of course, does not directly benefit from Google’s advertising profits. I don’t know what Wikipedia thinks of this situation, but I’ll bet that at least some of Google’s top content sources are not happy that Google reaps what may seem a disproportionately large share of the advertising dollar.

What can we learn from these examples? As content moves online, additional layers of organizing services are being built on top of that fundamental content layer. Although it’s still early days, with the present architecture of the internet most of the financial benefit is flowing to the higher level services. Understandably, this is making many of the content producers unhappy. Indeed, it’s not hard to imagine that if some different design decisions had been made in the early days of the web, those decisions would quite possibly have
changed the business models used on the web, and yielded quite different financial outcomes for the different parties.

These higher level organizing services are booming. Aside from Google, other examples of new niches in the organization of content include RSS readers (Bloglines, Netvibes); social news sites (Digg, Reddit); even my own Academic Reader is an example. Of course, these services are only the tip of the iceberg. There is so much unmet need for information organization that I expect organizing services will be the single largest growth area in the world’s economy for the next decade or two. This growth will only exacerbate the tension between the content producers and the content organizers.

At the moment, in the conflict between content producers and content organizers, the organizers are winning. The content producers don’t yet have much footing to fight the content organizers. Think about what Google’s search engine does: it copies pretty much the entire web to Google’s servers, then processes that information in a sophisticated way, and then, in response to user queries, produces a list of relevant links. In short, it’s making quite sophisticated use of other people’s content in order to derive commercial benefit. But copyright law wasn’t developed with vast data mining operations in mind, and so Google is immune from prosecution under current copyright law.

I described the producer-organizer and producer-pirate conflicts as separate, but in fact there’s a continuum between pirate file-sharing services and content organizers like Google. The pirates add much less value than Google, and make more explicit use of other people’s content, but both services are still fundamentally about offering an organizing service to the consumer that sits on top of a fundamental content layer. Other services are intermediate between the two. As an example, YouTube contains many videos which remix content from dozens or hundreds of other sources. In many instances, the original sources are transformed almost beyond recognition. Here again, as with Google, value at the organizing layer does not flow to the creators of the underlying content. Instead, all the value flows towards the higher level services.

The lesson here is that the conventional framing described at the beginning of this article is only a small part of a much larger issue. The question isn’t about pirates versus content creators or content distributors. The larger question is how content can be shared in a way that both provides incentives to the original creators of the content and enables people to add further value to that content, by organizing it, making it accessible, and so on. How should we, as a society, best answer this question?

It is, of course, greatly to the public benefit for the information organizers to thrive. However, for this to happen a great deal of information must be made publicly available, preferably in a machine-readable format like RSS or OAI. If the information is partially or completely locked up (think, e.g., of Facebook’s friendship graph), then that enormously limits the web of value that can be built on top of the information. Yet organizations like Facebook are understandably wary of opening that information up, fearing that it would harm their business.
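As a concrete illustration of what “machine-readable” buys the organizers, here is a short Python sketch, using only the standard library, that extracts structured items from an RSS 2.0 document. The feed contents are invented for the example:

```python
# A small illustration of why machine-readable formats matter to content
# organizers: given an RSS feed, a few lines of standard-library code
# suffice to pull out structured items for indexing or aggregation.
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel>
  <title>Example Feed</title>
  <item><title>First post</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

def feed_items(feed_xml):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

print(feed_items(FEED))
```

The same few lines work against any feed that follows the format, which is exactly the point: a format shared across producers lets a single organizer aggregate all of them.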

The situation is complicated by the fact that the best people to organize and add value to information are often not the original creators of that information. They may lack the expertise – think of all those terrible in-house search functions that used to appear on websites. Or they may have conflicts of interest – the New York Times would have a tough time running something like Google News, since other news organizations would be reluctant to co-operate with them.

There are two nightmare outcomes that might occur as the result of current trends. The first is where content is, by default, locked up, and can only be painstakingly unlocked. This is a world where both piracy and Google are impossible. Many content producers are keen on such a world, preferring that they maintain their portion of the pie, without regard for the growth of the pie as a whole. They have lobbied hard to achieve such a world, and the past few decades have seen many notable extensions of copyright and other intellectual property law, aimed at locking content up by default.

The second nightmare outcome is where content is by default freely available for anyone to reuse in arbitrary ways. This is a world in which both piracy and Google are possible, and it would also be a disaster, as it would become much harder for content creators to make a living, and the quality of the content being produced would drop.

What we have at present is an intermediate regime, where we’re seeing a blend of these two scenarios. Outright content sharing is banned, but the present regulatory and technical framework is insufficient to close down the pirates. Content organization is still okay. What we’re seeing as a result is a migration of value up the chain from content creators like the New York Times to content organizers, like Google. This, in turn, is causing the content creators to erect fences around their data. The net result is not in anybody’s best interest.

I think we can do much better than in either of these scenarios. In particular, I think that with the right tools in place, we can ensure that content creators and content organizers are both adequately rewarded, and the public gets the full benefit of digital technology. In the remainder of this essay, I’ll describe how I think this can be achieved, and the consequences for the different groups involved – the pirates, the content creators, content distributors, and the content organizers.

The confluence of digital rights management, contracts, and digital money

How can we reach a situation in which content creators have incentives to produce content, yet information is freely available for other people to organize and add value to? I don’t have a complete answer to this question. However, I think an outline of an answer can be given, which combines legal, technical, and financial innovation, as well as the development of appropriate community norms.

Here’s what I think we’ll eventually see: automated contracts negotiated and carried out machine-to-machine, allowing people to share and reuse information. The broad overall terms that may be set in such contracts will be governed by law, and will be validated by machine; many of the terms will be set by statute. The contracts will be enforced in large part by the design of the underlying technical protocols, using ideas from cryptography and digital rights management. We will see the emergence of an information market, in which these automated contracts play a key role in mediating transactions; I think it likely that we will also see the introduction of new financial instruments to assist in the functioning of this market.

(Digital rights management tools get a bad rap from many people, largely because many of the companies now using these tools do so with asinine intent, preventing people from doing perfectly reasonable things – I was pretty annoyed the first (and only) time I bought a pdf from amazon.com, and discovered that I couldn’t mark it up using my tablet PC, simply because of digital rights software. However, many of the technical ideas underlying digital rights management are quite powerful, and potentially useful in enforcing contracts, if deployed within a sensible regulatory framework, and within a set of sensible community norms.)

The move to automated contracts won’t happen in one step. What I expect is that over the coming years we’ll see this slowly happen in many tiny steps. A primitive example already in use is the automated payment option you can use at online stores like iTunes, so if a purchase is below some threshold cost, you don’t need to explicitly authorize it. Another existing example is the practice some companies have of offering a tiered way of accessing their data. For low usage, access is free through an open API, but for higher usage, one has to pay.
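The tiered-access pattern described in this paragraph is easy to sketch in code. The quota, price, and class below are all made-up illustrations of the idea, not any real store’s or API provider’s interface:

```python
# A toy sketch of tiered access: requests under a free monthly quota are
# served at no charge, and calls beyond the quota accrue a small fee.
# All names and numbers here are illustrative.

FREE_CALLS_PER_MONTH = 1000    # hypothetical free tier
PRICE_PER_EXTRA_CALL = 0.002   # hypothetical per-call fee, in dollars

class MeteredApi:
    def __init__(self):
        self.calls = {}  # client id -> number of calls made this month

    def request(self, client_id):
        """Serve one request and return the fee charged for it."""
        n = self.calls.get(client_id, 0)
        self.calls[client_id] = n + 1
        # Calls within the free tier cost nothing; later calls are billed.
        return 0.0 if n < FREE_CALLS_PER_MONTH else PRICE_PER_EXTRA_CALL

api = MeteredApi()
fees = [api.request("alice") for _ in range(1002)]
print(sum(fees))  # only the calls past the free tier are billed
```

The interesting part isn’t the arithmetic, which is trivial, but that the whole negotiation happens machine-to-machine: the client never has to explicitly authorize each small payment.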

I’ve only described the barest outlines of a system that will balance the interests of content creators and people who would reuse and add extra value to that content. Yet I see reason for optimism that we will eventually arrive at such a system. In particular, I am hopeful that such a system will emerge from the balance of interests between the content producers, the newly emerging (but already very powerful) content organizers, and independent voices speaking for the public interest, such as Lawrence Lessig, Cory Doctorow, the Creative Commons, and the Free Software Foundation.

In my optimism, I am distinctly at odds with Lawrence Lessig’s more pessimistic analysis. Lessig sees the content producers as having so much power that they’ll inevitably enforce massive copyright restrictions on virtually all forms of content. This will result in the creative commons stagnating, which will greatly diminish our collective creativity. I don’t see it this way. In the short run, I think Lessig is right: the content producers will have some victories over the pirates, mostly Pyrrhic victories whose main effect will be to stifle innovation. But I think that powerful content organizers such as Google and voices such as the Creative Commons will counteract this short-term effect, and over the long run it’s reasonably likely we’ll see a sensible and sane copyright system emerge out of the resulting balance of interests.

Consequences

What will happen to the pirates in a world of automatically negotiated contracts? They won’t be shut down completely — it’s too easy to put content online, so I expect they’ll remain a part of the ecosystem. However, a combination of three factors convinces me that the importance of the pirates is going to greatly diminish in the near future.

First, there are always going to be better and better ways of organizing information; Google, eBay, Wikipedia, Amazon and the rest are just the beginning – tools like these are going to get far better, and many entirely new classes of tools will be developed. Second, organizing information well is hard. Google works far better than earlier search engines because the founders of Google had some very clever ideas about how to do it better; developing such ideas requires both brains and lots of hard work. Third, the fact that organizing information well is hard means that the best services will have a substantial lead over their nearest competitor (consider Google versus Yahoo); they’ll have an effective monopoly over a scarce resource. No matter what your goal (finding music, finding learning materials, whatever), people are going to be willing to pay to use the service that does the best job at making information more useful.

An example of this is the music recommendation engine last.fm, which does a great job of understanding its users’ musical tastes, and recommending music based on those tastes. People are willing to pay to use last.fm (indirectly, through advertising, or through their subscription service) in preference to other services, because it does a better job recommending music to them.

So what I expect to see is an arms race of people creating better and better services to organize information. People will tend to use those services in preference to the pirates, precisely because organization is valuable to them; witness the success of last.fm, which as of January 2008 had more than 15 million active users. Furthermore, the creators of those services will have to work hard, and most will hope to be paid for their labours; they thus have a vested interest in being legitimate. If the appropriate legal and technical framework of automated contracts is in place, the result will be that everyone gets well rewarded for their role – the content creators get paid, the content organizers get paid, and the public gets great content in an organized way.

As an aside, the cumulative nature of open source software means that there will be a gradual drive toward free services. We’re already seeing this today: a service which people are willing to pay for today often quickly becomes something that can be easily duplicated for free by building on open source software. This will produce a continual drive for people to innovate, in order to stay profitable. While this is good, it is possible that in some markets it may lead to an unfortunate situation where the capabilities of the systems effectively saturate, and pirates will be able to duplicate the features of for-profit systems. We’re a long way from this situation at present.

I said above that the content creators will get well rewarded for their labours in this world of automatically negotiated contracts. What about the content distributors? My opinion is that with their old business model, the current content distributors are toast. They either need to change their business model, or they will be replaced entirely by content organizers who actually add value to the content.

To pick an example, in the music industry the main advantage the big recording companies had fifteen years ago was their stable of artists and their distribution power. The latter advantage has evaporated with the advent of the internet. Meanwhile, my analysis above suggests that the main advantage the winners of the future will have is technical superiority in organizing and adding value to information, in large part driven by data they get from their users and other sources. At the moment, the RIAA and the recording companies are doing very little to build technical expertise, and they are more interested in suing their users than getting data from them.

The big remaining advantage the recording companies have is their existing stable of artists. Unfortunately, those artists are increasingly likely to leave the major recording companies, as the companies can offer them less and less. One possible option, which one or more of the recording companies could take, is to reinvent themselves as a kind of union or guild for the artists, essentially an organization for negotiating en masse with the content organizers.

The big winners in all of this will be the people and companies who are organizing and adding value to information. Humanity is well on its way to putting its collective wisdom online; now is the time to start organizing and connecting it in meaningful ways.

***

If you enjoyed this post, you might also enjoy my posts Open Source Google and The tension between information creators and information organizers. The latter post overlaps somewhat in content with the current post, but it has a different perspective.

Published

Google made my day

This made my day when I found it a few weeks back, so I hope you’ll forgive my sharing my enjoyment. According to Google Scholar, my book with Ike Chuang is one of the ten most cited physics books of all time. Here’s Google’s list (I omit one book, the math text by Kato, which has unaccountably escaped into the physics category):

  • J. D. Jackson, Classical Electrodynamics
  • Benoit Mandelbrot, The Fractal Geometry of Nature
  • S. M. Sze, Physics of Semiconductor Devices
  • Charles Kittel, Introduction to Solid State Physics
  • C. W. Allen, Astrophysical Quantities
  • H. S. Carslaw, Introduction to the Mathematical Theory of the Conduction of Heat in Solids
  • P. R. Bevington, D. K. Robinson, and G. Bunce, Data Reduction and Error Analysis for the Physical Sciences
  • H. Schlichting and K. Gersten, Boundary-Layer Theory
  • P. M. Morse and H. Feshbach, Methods of Theoretical Physics
  • Michael A. Nielsen and Isaac L. Chuang, Quantum Computation and Quantum Information

There are many caveats. Google’s citation coverage is incomplete, they seem to double count some citations, and so on. And, of course, it’s not even clear what citation counts mean. Still, this brought a smile to my face; it’s enduringly pleasing to have done something that other people have apparently found quite useful.

Published

Organizing notes

If you’re working on a big project, how do you organize notes related to that project?

Certain outstanding writers – Steven Pinker, Richard Rhodes, or Malcolm Gladwell come to mind – pepper their writing with apposite stories drawn from thousands of sources. I’ve often wondered how they do it. Is it all from memory? Or do they have some way of filing interesting stories and facts as they read in such a way as to assist in their later use?

Published

SciBarCamp

Inspired by the wonderful SciFooCamp and BarCamps, I’m helping organize SciBarCamp, in Toronto, starting the evening of Friday, March 14, and continuing all day Saturday and Sunday, March 15 and 16.

The idea is to have a gathering of scientists, artists, and technologists for a weekend of talks and discussions. The goal is to create connections among science, entrepreneurship, arts, and culture.

The themes are:

  • The edge of science (eg, synthetic biology, quantum gravity, cognitive science)
  • The edge of technology (eg, mobile web, ambient computing, nanotechnology, web 2.0)
  • Science 2.0 (eg, open access, changing models of publication and collaboration)
  • Scientific literacy and public engagement (eg, one laptop per child project, policy and science, technology as legislation, science as culture, enfranchising the poor, the young, the old)
  • The interactions of science, art and culture: Scientists and artists as partners in the continuing evolution of the culture

The program will be decided by the participants at the beginning of the meeting, in the opening reception. Presentations and discussion topics can be proposed at the SciBarCamp website or on the opening night.

The talks will be informal and interactive; to encourage this, speakers who wish to give PowerPoint presentations will have ten minutes to present, while those without will have twenty minutes. Around half of the time will be dedicated to small group discussions on topics suggested by the participants. The social events and meals will make it easy to meet people from different fields and industries. The venue, Hart House, is a beautiful space in downtown Toronto, with plenty of informal areas to work or talk. There will be free wireless access throughout.

Our goals are:

  • Igniting new projects, collaborations, business opportunities, and further events.
  • Intellectual stimulation and good conversation.
  • Integrating science into Toronto’s cultural, entrepreneurial, and intellectual activities.
  • Prototyping a model that can be easily duplicated elsewhere.

Attendance is free, but there is only space for around 100 people, so please register by sending an email to Jen Dodd (dodd.jen@gmail.com) with your name and contact details. Please include a link to your blog or your organization’s webpage that we can display with your name on the participants list at www.SciBarCamp.org.

SciBarCamp is being organized by Eva Amsen, Jennifer Dodd, Jamie McQuay, Michael Nielsen, Karl Schroeder, and Lee Smolin. More information about the event can be found at www.SciBarCamp.org.

Published
Categorized as SciBarCamp

Science in the 21st Century: Science, Society, and Information Technology

Together with Sabine Hossenfelder and Lee Smolin I’m helping organize a conference at Perimeter Institute on Science in the 21st Century: Science, Society, and Information Technology. A preliminary (and rapidly growing) list of invited participants is here, and it looks like it’s going to be a very exciting event. Registration is not yet open, but we’ll be opening it up shortly, so if you think you might be interested, keep Sep 8-12th free!

Published