Expander graphs I: Introduction

Note: This post is one in a series introducing one of the deepest ideas in modern computer science, expander graphs. Expanders are one of those powerful ideas that crop up in many apparently unrelated contexts, and that have a phenomenally wide range of uses. The goal of the posts is to explain what an expander is, and to learn just enough about them that we can start to understand some of their amazing uses. The posts require a little background in graph theory, computer science, linear algebra and Markov chains (all at about the level of a first course) to be comprehensible. I am not an expert on expanders, and the posts are just an introduction. They are mostly based on some very nice 2003 lecture notes by Nati Linial and Avi Wigderson, available on the web at http://www.math.ias.edu/~boaz/ExpanderCourse/.

Introduction to expanders

Expander graphs are one of the deepest tools of theoretical computer science and discrete mathematics, popping up in all sorts of contexts since their introduction in the 1970s. Here’s a list of some of the things that expander graphs can be used to do. Don’t worry if not all the items on the list make sense: the main thing to take away is the sheer range of areas in which expanders can be applied.

  • Reduce the need for randomness: That is, expanders can be used to reduce the number of random bits needed to make a probabilistic algorithm work with some desired probability.
  • Find good error-correcting codes: Expanders can be used to construct error-correcting codes for protecting information against noise. Most astonishingly for information theorists, expanders can be used to find error-correcting codes which are efficiently encodable and decodable, with a non-zero rate of transmission. This is astonishing because finding codes with these properties was one of the holy grails of coding theory for decades after Shannon’s pioneering work on coding and information theory back in the 1940s.
  • A new proof of PCP: One of the deepest results in computer science is the PCP theorem, which tells us that for all languages [tex]L[/tex] in NP there is a randomized polynomial-time proof verifier which need only check a constant number of bits in a purported proof that [tex]x \in L[/tex] or [tex]x \not \in L[/tex], in order to determine (with high probability of success) whether the proof is correct or not. This result, originally established in the early 1990s, has recently been given a new proof based on expanders.

What’s remarkable is that none of the topics on this list appear to be related, a priori, to any of the other topics, nor do they appear to be related to graph theory. Expander graphs are one of these powerful unifying tools, surprisingly common in science, that can be used to gain insight into an astonishing range of apparently disparate phenomena.

I’m not an expert on expanders. I’m writing these notes to help myself (and hopefully others) to understand a little bit about expanders and how they can be applied. I’m not learning about expanders with any specific intended application in mind, but rather because they seem to be behind some of the deepest insights we’ve had in recent years into information and computation.

What is an expander graph? Informally, it’s a graph [tex]G = (V,E)[/tex] in which every subset [tex]S[/tex] of vertices expands quickly, in the sense that it is connected to many vertices in the set [tex]\overline S[/tex] of complementary vertices. Making this definition precise is the main goal of the remainder of this post.

Suppose [tex]G = (V,E)[/tex] has [tex]n[/tex] vertices. For a subset [tex]S[/tex] of [tex]V[/tex] we define the edge boundary of [tex]S[/tex], [tex]\partial S[/tex], to be the set of edges connecting [tex]S[/tex] to its complement, [tex]\overline S[/tex]. That is, [tex]\partial S[/tex] consists of all those edges [tex](v,w)[/tex] such that [tex]v \in S[/tex] and [tex]w \not \in S[/tex]. The expansion parameter for [tex]G[/tex] is defined by

[tex] h(G) \equiv \min_{S: |S| \leq n/2} \frac{|\partial S|}{|S|}, [/tex]

where [tex]|X |[/tex] denotes the size of a set [tex]X[/tex].
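Just to make the definition concrete, here’s a little Python sketch (my own, nothing canonical about it) which computes [tex]h(G)[/tex] by brute force, minimising [tex]|\partial S|/|S|[/tex] over all subsets [tex]S[/tex] with at most [tex]n/2[/tex] vertices. It’s exponential in the number of vertices, so it’s only good for toy graphs, but it’s a handy way to check the examples below.

```python
from itertools import combinations

def expansion_parameter(adj):
    """Compute h(G) = min over |S| <= n/2 of |boundary(S)| / |S|, by brute force.

    `adj` maps each vertex to the set of its neighbours.  Exponential in the
    number of vertices, so suitable for toy graphs only.
    """
    vertices = list(adj)
    n = len(vertices)
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for subset in combinations(vertices, size):
            S = set(subset)
            # Each boundary edge is counted once, from its endpoint inside S.
            boundary = sum(1 for v in S for w in adj[v] if w not in S)
            best = min(best, boundary / len(S))
    return best

# Quick check on the cycle of 6 vertices: the minimising S is an arc of 3
# consecutive vertices with 2 boundary edges, so h = 2/3.
cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(expansion_parameter(cycle6))  # 0.666...
```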

One standard condition to impose on expander graphs is that they be [tex]d[/tex]-regular graphs, for some constant [tex]d[/tex], i.e., they are graphs in which every vertex has the same degree, [tex]d[/tex]. I must admit that I’m not entirely sure why this [tex]d[/tex]-regularity condition is imposed. One possible reason is that doing this simplifies a remarkable result which we’ll discuss later, relating the expansion parameter [tex]h(G)[/tex] to the eigenvalues of the adjacency matrix of [tex]G[/tex]. (If you don’t know what the adjacency matrix is, we’ll give a definition later.)

Example: Suppose [tex]G[/tex] is the complete graph on [tex]n[/tex] vertices, i.e., the graph in which every vertex is connected to every other vertex. Then each vertex in [tex]S[/tex] is connected to all the vertices in [tex]\overline S[/tex], and thus [tex]|\partial S| = |S| \times |\overline S| = |S|(n-|S|)[/tex]. It follows that the expansion parameter is given by

[tex] h(G) = \min_{S: |S|\leq n/2} n-|S| = \left\lceil \frac{n}{2} \right\rceil. [/tex]

For reasons I don’t entirely understand, computer scientists are most interested in the case when the degree, [tex]d[/tex], is a small constant, like [tex]d = 2,3[/tex] or [tex]4[/tex], not [tex]d=n-1[/tex], as is the case for the complete graph. Here’s an example with constant degree.

Example: Suppose [tex]G[/tex] is an [tex]n \times n[/tex] square lattice in [tex]2[/tex] dimensions, with periodic boundary conditions (so as to make the graph [tex]4[/tex]-regular). Then if we consider a large connected subset of the vertices, [tex]S[/tex], it ought to be plausible that the edge boundary set [tex]\partial S[/tex] contains roughly one edge for each vertex on the perimeter of the region [tex]S[/tex]. We expect there to be roughly [tex]\sqrt{|S|}[/tex] such vertices, since we are in two dimensions, and so [tex]|\partial S|/|S| \approx 1/\sqrt{|S|}[/tex]. Since the graph can contain regions [tex]S[/tex] with up to [tex]O(n^2)[/tex] vertices, we expect

[tex] h(G) = O\left( \frac{1}{n} \right) [/tex]

for this graph. I do not know the exact result, but am confident that this expression is correct, up to constant factors and higher-order corrections. It’d be a good exercise to figure out exactly what [tex]h(G)[/tex] is. Note that as the lattice size is increased, the expansion parameter decreases, tending toward [tex]0[/tex] as [tex]n\rightarrow \infty[/tex].
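One cheap way to see the [tex]1/n[/tex] scaling numerically, short of doing the exercise properly, is to look at a single family of subsets: take [tex]S[/tex] to be the vertices in the first [tex]\lfloor n/2 \rfloor[/tex] columns of the torus. The boundary then consists of just the horizontal edges crossing the two seams, so [tex]|\partial S| = 2n[/tex] and [tex]|\partial S|/|S| \approx 4/n[/tex], which bounds [tex]h(G)[/tex] from above. Here’s a little sketch checking that ratio directly; it only looks at this one choice of [tex]S[/tex], so it says nothing about the minimum over all subsets.

```python
def strip_cut_ratio(n):
    """|boundary(S)| / |S| for S = the first n // 2 columns of the n x n torus.

    Each vertex (i, j) is joined to its four neighbours, with wrap-around
    (periodic) boundary conditions, so the lattice is 4-regular.
    """
    S = {(i, j) for i in range(n) for j in range(n // 2)}
    boundary = 0
    for (i, j) in S:
        for (di, dj) in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if ((i + di) % n, (j + dj) % n) not in S:
                boundary += 1
    return boundary / len(S)

for n in (4, 8, 16, 32):
    print(n, strip_cut_ratio(n))  # 1.0, 0.5, 0.25, 0.125 -- i.e. 4/n
```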

Example: Consider a random [tex]d[/tex]-regular graph, in which each of [tex]n[/tex] vertices is connected to [tex]d[/tex] other vertices, chosen at random. Let [tex]S[/tex] be a subset of at most [tex]n/2[/tex] vertices. Then a typical vertex in [tex]S[/tex] will be connected to roughly [tex]d \times |\overline S|/n[/tex] vertices in [tex]\overline S[/tex], and thus we expect [tex]|\partial S| \approx d \times |S| |\overline S|/n[/tex], and so

[tex] \frac{|\partial S|}{|S|} \approx d \frac{|\overline S|}{n}. [/tex]

Since [tex]|\overline S|[/tex] has its minimum at approximately [tex]n/2[/tex] it follows that [tex]h(G) \approx d/ 2[/tex], independent of the size [tex]n[/tex].
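Here’s a quick Monte Carlo check of that heuristic, assuming the networkx package is available (I use its random_regular_graph routine to generate the graph; everything else is done by hand). It samples random subsets of size [tex]n/2[/tex] and prints [tex]|\partial S|/|S|[/tex]; for [tex]d = 3[/tex] the ratios cluster around [tex]d/2 = 1.5[/tex]. Of course this only probes typical subsets, not the worst-case subset that defines [tex]h(G)[/tex], so it illustrates the heuristic rather than computing the expansion parameter.

```python
import random

import networkx as nx  # assumed to be installed; used only to build the graph

def boundary_ratio(G, S):
    """|boundary(S)| / |S| for a subset S of the vertices of G."""
    S = set(S)
    boundary = sum(1 for v in S for w in G[v] if w not in S)
    return boundary / len(S)

n, d = 1000, 3
G = nx.random_regular_graph(d, n, seed=0)
nodes = list(G.nodes())

# The heuristic above predicts ratios near d * |complement of S| / n = d/2
# for typical subsets S of size n/2.
for _ in range(5):
    S = random.sample(nodes, n // 2)
    print(round(boundary_ratio(G, S), 3))
```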

Exercise: Show that a disconnected graph always has expansion parameter [tex]0[/tex].

In each of our examples, we haven’t constructed just a single graph, but rather an entire family of graphs, indexed by some parameter [tex]n[/tex], with the property that as [tex]n[/tex] gets larger, so too does the number of vertices in the graph. Having access to an entire family in this way turns out to be much more useful than having just a single graph, a fact which motivates the definition of expander graphs, which we now give.

Suppose we have a family [tex]G_j = (V_j,E_j)[/tex] of [tex]d[/tex]-regular graphs, indexed by [tex]j[/tex], and such that [tex]|V_j| = n_j[/tex] for some increasing function [tex]n_j[/tex]. Then we say that the family [tex]\{ G_j \}[/tex] is a family of expander graphs if the expansion parameter is bounded strictly away from [tex]0[/tex], i.e., there is some small constant [tex]c[/tex] such that [tex]h(G_j) \geq c > 0[/tex] for all [tex]G_j[/tex] in the family. We’ll often abuse nomenclature slightly, and just refer to the expander [tex]\{ G_j \}[/tex], or even just [tex]G[/tex], omitting explicit mention of the entire family of graphs.

In this post we’ve defined expanders, said a little about what they’re useful for, and given an example – [tex]d[/tex]-regular random graphs – of an expander. In the next post, I’ll describe some more explicit and much more useful examples of expander graphs.


Experimental blogging

Over the next few weeks I’m going to try out an experimental blogging style, posting a series of rather more technical expository notes than has been my wont in the past.

I’m going to try to keep the tone light, but inevitably these posts will be more technical and demanding than most – though not all – of what I’ve posted in the past. I’ll also continue to post less technical stuff.

I expect it’ll take me a little while to settle into a style that works well. I’m aiming to keep things light and personal, but must admit that after drafting material for the past couple of weeks, I’m finding this more difficult than I imagined it would be.

I don’t think this difficulty is because it’s intrinsically especially difficult to do, but rather a reflection of habit: I’m most used to writing technical material – even expository material – in the impersonal manner common to journals. Getting out of that habit is going to take some work to get right. Feedback will help me figure out what works and what doesn’t, so keep the comments coming!

The initial topics will be: (1) a series on expander graphs, which are one of the most useful and beautiful tools in computer science, and (2) a series on Fermi algebras and the Jordan-Wigner transform, which are extremely useful tools in condensed matter physics.

These are, obviously, pretty specialized topics. I’ve tried to choose topics that are simultaneously: (1) subjects I wish to understand better; and (2) contain deep ideas of reasonably broad and current interest. I don’t expect that they’d be of much interest to, say, philosophers, but expect that the central ideas are of interest to a wide cross-section of physicists, mathematicians, and computer scientists. Furthermore, I’ll try to focus on big ideas, and their relationship to the rest of science. Once again, learning to do this well is partially a question of style, and I expect I’ll get better at making these connections as I go along.

Depending on how things go, I may post further expository series in the future. I expect they’ll cover topics in some mixture of physics, computer science, mathematics, and self-development, with occasional digressions into other areas. I expect to range around a lot, through whatever interests me.

On a personal note, I’ve occasionally gotten queries from people about my blog, wondering if it “takes time away from your research”.

With this experiment I’m trying to integrate my blog more thoroughly with my research. Certainly, I find that writing expository notes like these is a good way of building up my own understanding of new subjects, and of preparing the way for new research projects. Indeed, I expect that some of these pieces will be useful warm-ups for some expository journal articles (and possibly books) that I hope to write in the future. I also feel very strongly about the value of synthesis and exposition for Science as a whole, especially the value of trying to explain things as simply as possible.

Finally, I must admit that I’m a little nervous about how this goes. These subjects are more specialized than much of what I’ve posted in the past, and I do enjoy having at least some regular readers! Here’s hoping I don’t drive you all away!


Optimal photons

Optimal photons for quantum information processing, joint with Peter Rohde and Tim Ralph.

Producing single photons is hard work: there are no really good single-photon sources available, though there is a plethora of theoretical proposals for sources.

Perhaps somewhat surprisingly, not all photons are created equal. In particular, getting the kind of interference effects necessary for quantum information processing depends a lot on the shape of the wavepacket produced by the source: if you have the wrong wavepacket shape, even a tiny amount of noise may destroy the interference effects. For this reason, it’s important to understand which sources produce photons for which interference is stable against noise.

That’s the subject of this paper. In particular, the main result is to show that for a wide variety of possible applications, Gaussian wavepackets produce the most stable interference effects, with the implication that people designing sources should look for sources which produce Gaussian or near Gaussian wavepackets.

Why blog?

An email correspondent asks:

Is it valuable to have an “internal” blog area accessible only by members of your research group, for the sake of internal communication?

I’m about to set this up for my group, so I can’t give a definite yes or no from personal experience. I certainly think it’s worth trying, and I’ve heard from others who’ve tried it that it can be an effective way of helping keep people within a research group aware of what everyone else is doing. Wikis may also be useful for this purpose.

Aside from the warm feeling from having made a positive contribution to the community, has running a public blog been helpful to you?

I’ve spent quite a bit of time drafting and redrafting responses to that. It was more difficult to answer than I expected, because while it seems like a simple question, it’s really a whole bundle of different questions. Blogging combines: (1) the discipline, clarity of thought, and creative impulse that comes from writing things down; (2) interaction with friends and colleagues; (3) interaction with a wider audience; (4) education and outreach; (5) publicity for one’s work, field, organization and profession; (6) the excitement of being part of a new medium, which is evolving quickly; (7) the opportunity to pontificate; (8) a bunch of other things that I’ve either forgotten or am ignorant of.

When you look at it this way, blogging combines a very wide range of activities, and all of these in some sense benefit various people (not always, but often, including me). So, yes, there is definitely a considerable benefit. Whether it is worth the time and effort is difficult to say. Personally, I think I’m only just now really figuring out how to blog in a way that I feel is effective, and fully justifies the investment of time.


Cognitive dissonance

My partner and I took up dancing recently (rhumba, waltz, jive, etc), and it’s been great fun. The music varies quite a bit, but never more so than in last night’s first choice of song, that well-known piece of music for progressive dance, AC/DC’s “Thunderstruck”.


Zeroth order approximation

My friend Ben Schumacher has started a blog, Zeroth order approximation. Actually, Ben started it a few months back, and has been merrily posting away, and I simply discovered his blog yesterday. This was great for me, because it meant I got to read several months of back posts.

Here’s a little sample:

But you know, so much of academic writing is bad. It is banal, orotund, unmusical, and stuffed with wads of unnecessary jargon. It is the sort of writing that does more to obscure meaning than to convey it. I see this stuff almost every day. I swim in it. OK, maybe I do exaggerate, a little. After all, I teach at a liberal arts college that is moderately well-known for teaching people how to write. Our faculty is full of novelists and poets and whatnot. But let me tell you, it’s here, too. It’s everywhere. It is like a fungus growing over all things, blurring their shapes — the verbal equivalent, maybe, of the ivy on academic buildings. And like the ivy, I guess, its main purpose is to conceal the shabby edifices beneath.

I live in fear that one day I will write ponderous, weedy, soporific academic prose. And worse, I fear that a day will come when I will simply not know the difference.


Changing times

What’s the greatest single human artifact in the world?

At different points in history I think my answer would probably be objects like the Pyramids of Egypt, the Great Wall, the Suez Canal, or the Hoover Dam.

Nowadays, though, it’s not even an object, exactly. I can’t think of anything that tops the Google database.


The Solovay-Kitaev algorithm

New paper: “The Solovay-Kitaev algorithm”, jointly with Chris Dawson, submitted as a tutorial / review article to the journal “Quantum Information and Computation”. As I write, I expect the paper to appear on the preprint arXiv this coming Monday.

The Solovay-Kitaev theorem is an important result in quantum computing. In a classical computer, it’s well-known that the same computation can be built up out of many different basic gate sets. For example, you can build up a circuit to add two numbers either using OR and NOT gates, or out of NAND gates alone. Furthermore, you can use OR and NOT gates to simulate NAND gates (or vice versa), so, roughly speaking, these two different circuits can have the same size, up to some constant of proportionality.

In the case of quantum computers, things are more complicated. The set of possible quantum gates forms a continuum, and it’s not necessarily possible to use one gate set to simulate another exactly. Instead, some approximation may be necessary.

The Solovay-Kitaev theorem guarantees that different universal gate sets can simulate one another exceedingly quickly, to a very good approximation. From a practical point of view, this is important in compiling quantum algorithms (like Shor’s) into a form that can be implemented fault-tolerantly. From a more mathematical point of view, the Solovay-Kitaev theorem is a remarkable general statement about how quickly the group SU(d) is “filled in” by a universal set of gates.

This paper aims to provide a simple exposition of the Solovay-Kitaev theorem, which we hope is accessible to people who’ve just started working on quantum computing. We explain the proof in the form of an algorithm (just nine simple lines of pseudocode!) which, given a goal unitary operation U, builds up a good approximation to U using some prespecified set of universal gates. Chris has also prepared an online implementation of the algorithm.
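To give a flavour of the recursion, here’s a schematic Python sketch of the algorithm’s skeleton, as I’d summarise it. The two subroutines that do the real work, which I’ve called basic_approximation (a lookup over a precomputed table of short gate sequences) and gc_decompose (the balanced group-commutator decomposition), are only stubbed out, and in the real algorithm one tracks gate sequences rather than bare matrices, so treat this as a sketch of the structure rather than a working implementation.

```python
import numpy as np

def dagger(U):
    """Hermitian conjugate of a unitary matrix (a numpy array)."""
    return np.conjugate(U).T

def basic_approximation(U):
    # Stub: return the closest match to U from a precomputed table of short
    # products of gates from the universal gate set.
    raise NotImplementedError

def gc_decompose(Delta):
    # Stub: find unitaries V, W whose group commutator V W V^dag W^dag is a
    # good approximation to Delta.
    raise NotImplementedError

def solovay_kitaev(U, n):
    """Approximate the unitary U using the gate set, to recursion depth n.

    The accuracy improves extremely rapidly with n, which is the content of
    the Solovay-Kitaev theorem.
    """
    if n == 0:
        return basic_approximation(U)
    U_prev = solovay_kitaev(U, n - 1)
    V, W = gc_decompose(U @ dagger(U_prev))
    V_prev = solovay_kitaev(V, n - 1)
    W_prev = solovay_kitaev(W, n - 1)
    return V_prev @ W_prev @ dagger(V_prev) @ dagger(W_prev) @ U_prev
```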

Small world networks

Attention conservation notice: Just over 2000 words, requires a little graph theory.

Just for fun, I’ve been doing a little reading on small world networks, a topic that started to attract a lot of interest after a 1998 paper in Nature, by Duncan Watts and Stephen Strogatz.

The idea of small world networks goes back to (among other places) the well-known experiments demonstrating “six degrees of separation”, conducted in the 1960s by the astonishing psychologist Stanley Milgram. This is the idea that while you personally only know a relatively small sample of the total population, your extended friendship network – the people who are “friends of a friend of a…” – grows extremely rapidly as the number of links in the chain increases. Milgram’s results suggested that most of the US population is within six degrees of separation from one another, i.e., that it’s truly a small world.

(It is worth noting that doubts have been cast recently on Milgram’s results, notably by Judith Kleinfeld. See the account in Duncan Watts’ recent book “Six Degrees”.)

The use of “networks” in the term “small world networks” seems to me a bit peculiar, since, at least among the people I know, the mathematical object being referred to as a network is what we’d call a graph, consisting of a set of vertices, V, and a set of edges, E, which are just pairs of vertices. I’ll use the terms “small world network” and “small world graph” interchangeably, usually preferring the former in applications, and the latter in describing mathematical objects.

What is a small world network? Stated another way, what does it mean to say that a graph has the small world property?

It’s a graph G = (V,E) with two properties that a priori might appear to be contradictory: (1) typical pairs of vertices in the graph have short paths between them, and (2) the graph is highly clustered.

To make these ideas precise, Watts and Strogatz introduce mathematical definitions for the average path length and clustering in a graph.

The average path length is defined as you would expect: define the length [tex]l(v,w)[/tex] between two vertices, v and w, as the minimum number of edges that need to be traversed to pass from v to w (or vice versa). Then the average path length L is simply the average value of [tex]l(v,w)[/tex], with v and w chosen uniformly at random,

[tex]L \equiv \frac{\sum_{vw} l(v,w)}{{n \choose 2}}, [/tex]

where the sum is over all distinct pairs of vertices, v and w, and n is the number of vertices in G. Note that for these definitions to make sense we require that the graph be connected.
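To make this concrete, here’s a short Python sketch (my own) that computes [tex]L[/tex] for a connected graph, given as a dictionary of neighbour sets, by running a breadth-first search from each vertex.

```python
from collections import deque

def average_path_length(adj):
    """Average of l(v, w) over all distinct pairs of vertices, via BFS.

    `adj` maps each vertex to the set of its neighbours; the graph must be
    connected for the average to make sense.
    """
    vertices = list(adj)
    n = len(vertices)
    total = 0
    for source in vertices:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        total += sum(dist.values())
    # Each pair {v, w} has been counted twice, once from each endpoint.
    return total / (n * (n - 1))

# Example: a ring of 8 vertices, each joined to its two nearest neighbours.
ring = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}
print(average_path_length(ring))  # 16/7, roughly 2.29
```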

Watts and Strogatz’ clustering coefficient is defined in a less intuitive fashion. The goal is to define a quantity C which varies between 0 (unclustered) and 1 (highly clustered).

To do this, first fix a vertex, v. Suppose we consider the set of vertices S which are neighbours of v. Suppose S has k elements. How connected are the vertices in S to one another? If every vertex in S is connected to every other vertex in S, then there are [tex]{k \choose 2}[/tex] edges between the elements of S. In such a case we would say that the graph is highly clustered about the vertex v. We capture this idea by defining the clustering coefficient near v, [tex]C_v[/tex], to be the number of edges between vertices in S, divided by the maximum possible number, [tex]{k \choose 2}[/tex]. The clustering coefficient for the entire graph is then defined to be the average of [tex]C_v[/tex] over all vertices in the graph.

Obviously, the clustering coefficient varies between 0 and 1, with a value near 0 meaning that most of the vertices connected to any given vertex v are not connected to each other; conversely, a value near 1 means that most of the neighbours of any given vertex v will tend to be connected to one another.
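Here’s the matching sketch for the clustering coefficient, following the definition above. (When a vertex has fewer than two neighbours the definition doesn’t say what [tex]C_v[/tex] should be; I’ve taken it to be 0, which seems to be the usual convention.)

```python
def clustering_coefficient(adj):
    """Average over vertices v of (edges among v's neighbours) / (k choose 2),
    where k is the degree of v.  `adj` maps vertices to neighbour sets."""
    total = 0.0
    for v, neighbours in adj.items():
        k = len(neighbours)
        if k < 2:
            continue  # convention: C_v = 0 when no edges are possible
        # Each edge among the neighbours is seen twice in this double count.
        links = sum(1 for w in neighbours for x in adj[w] if x in neighbours)
        total += (links / 2) / (k * (k - 1) / 2)
    return total / len(adj)

# A triangle with a pendant vertex attached: C_v is 1, 1, 1/3 and 0, so the
# clustering coefficient is 7/12.
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(clustering_coefficient(graph))  # 0.5833...
```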

I must admit, I harbour doubts both about how mathematically natural the clustering coefficient is, and about how well it captures the concept of the graph being highly clustered. In particular, I would not be surprised to discover that there are other quantities which capture the same intuition, but which are far more powerful mathematically as tools for investigating small world graphs. Nonetheless, this was the definition that Watts and Strogatz used, and I will follow their lead here.

Several questions about small world graphs naturally suggest themselves:

  1. Do graphs with the small world property exist? The answer is yes, or I wouldn’t be writing these notes, but it’s not a priori obvious that this ought to be the case. How can we construct examples of graphs with the small world property?
  2. How commonly do small world networks arise in nature? Might they arise in collaboration networks for movie actors, musicians, or scientists, in friendship networks, or link patterns in the internet? Under what general circumstances can small world networks arise?
  3. What, if any, consequences follow from a network being a small world network? E.g., if people’s network of acquaintances forms a small world network, does this have any implications for the spread of disease, or of memes?
  4. Can we formulate a mathematically interesting theory of small world graphs?

Constructing small world graphs

Watts and Strogatz suggest a simple method for constructing graphs with the small world property. The method is as follows.

We start by imagining a set of n vertices arranged in a ring in the plane.

Now connect each vertex to its k nearest-neighbours, in both the clockwise and anti-clockwise directions. Here, k is just some small constant, e.g., k = 2 or k = 3, whose exact value doesn’t much matter.

We complete the construction of the graph by adding a small amount of disorder. More precisely, let p > 0 be a small parameter. For every single edge in the lattice, with probability 1-p we leave it alone, and with probability p we “rewire” the edge, changing one of the end vertices to a randomly chosen vertex.
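Here’s a little Python sketch of that prescription, just to pin it down. It builds the ring lattice and then rewires each edge with probability p by moving one endpoint to a uniformly random vertex; the occasional self-loop or duplicate edge this produces is simply skipped, which is close to, though not exactly the same as, the procedure Watts and Strogatz actually use.

```python
import random

def watts_strogatz(n, k, p, seed=None):
    """Ring of n vertices, each joined to its k nearest neighbours on either
    side, with every edge independently rewired with probability p.

    Returns the graph as a dict mapping vertices to sets of neighbours.
    """
    rng = random.Random(seed)
    edges = [(i, (i + j) % n) for i in range(n) for j in range(1, k + 1)]
    adj = {i: set() for i in range(n)}
    for (u, v) in edges:
        if rng.random() < p:
            v = rng.randrange(n)  # rewire one endpoint to a random vertex
        if v != u and v not in adj[u]:  # skip self-loops and duplicates
            adj[u].add(v)
            adj[v].add(u)
    return adj

graph = watts_strogatz(n=1000, k=3, p=0.05, seed=1)
```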

What happens to the average length and clustering coefficient in this graph as p is increased away from zero?

At p = 0 we see that the average path length between two vertices is roughly n / 4k, where n is the number of vertices in the graph. The reason is that if we choose two vertices at random on the ring, then typically they’ll be about a quarter of the way around the ring from one another, and we can get from one to the other in roughly n / 4k hops, hopping k vertices at a time.

As p increases away from zero, however, the average path length drops extremely quickly.

One way of seeing this is to do a computer simulation. Watts and Strogatz do this, finding that the average path length plummets, quickly becoming logarithmic in n, rather than linear in n.

Another way of seeing that path lengths should drop very quickly when disorder is introduced is the following heuristic argument, inspired by the renormalization group technique from statistical physics.

Suppose we subdivide the vertices in the graph up into groups of [tex]m \approx 2 / p[/tex] vertices. We will treat those groups as single vertices in a new graph, G’, and define two vertices in G’ to be connected if the corresponding groups of vertices in G are connected through at least one edge.

A little thought about the “rewiring” prescription above shows that vertices in G’ are typically connected to approximately 2 other randomly chosen vertices in G’. There will also be some additional edges between neighbouring elements of G’, but these won’t make any difference in the subsequent analysis, and we’ll simply ignore them.

It follows from the presence of these random edges in G’ that the average path length in G’ scales roughly as [tex]\log_2(q)[/tex], where [tex]q = n / m[/tex] is the number of vertices in G’. It follows that average path lengths in G must scale roughly as [tex]m \log_2 (n/m) \approx 2 \log_2(np/2) / p[/tex].

Define [tex]N \equiv np[/tex], so, up to a small constant of proportionality, N scales as the average number of rewirings in the graph. When [tex]N > 2[/tex] we see that the average path length scales as:

[tex]\frac{2 \log N }{N} n. [/tex]

Thus, as N rises, we see that the linear term in the average path length is very rapidly suppressed. This occurs for any value of p much larger than 1/n.

It ought to be noted that this analysis is quite general: we didn’t make use of any special properties of the initial graph. In general, then, it follows that adding a small amount of disorder to any graph will strongly suppress any linear component in the average path length. In the event it hasn’t already been done, it’d be fun to extend this analysis using a more detailed renormalization group style analysis.

The behaviour of the clustering coefficient is quite different. At p = 0 the clustering coefficient is obviously quite close to one. But as disorder is added, and p is increased, the clustering coefficient remains close to one, changing only very slowly.

Watts and Strogatz show this using computer simulations, but the fundamental reason isn’t difficult to understand: even when a little bit of disorder is added, most edges in the graph are not changed, and so most clusters in the graph are not affected.

In other words, when p is small but non-zero, the resulting graph is highly clustered, but also has short average path lengths. In short, it has the small world property.
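This is easy to see numerically. The sketch below assumes the networkx package, whose connected_watts_strogatz_graph, average_shortest_path_length and average_clustering routines provide ready-made versions of the construction and of the two quantities defined earlier (note that networkx’s k counts neighbours on both sides of a vertex, so it corresponds to 2k in the notation above). As p is increased, the printed path lengths collapse while the clustering coefficient barely moves.

```python
import networkx as nx  # assumed to be installed

n, k = 1000, 6  # networkx's k is the total number of nearest neighbours
for p in (0.0, 0.01, 0.05, 0.2):
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=0)
    L = nx.average_shortest_path_length(G)
    C = nx.average_clustering(G)
    print(f"p = {p:<4}  L = {L:7.2f}  C = {C:.3f}")
```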

Why is this interesting?

Why is this type of graph interesting? Sure, the average path length drops rapidly while the clustering coefficient remains high. But so what? It’s surely not that difficult to come up with other graph properties, some of which change precipitously when disorder is added to a regular graph, and others of which change only very slightly.

There are a couple of reasons to care.

First, Watts and Strogatz (and many others) have shown that there are a lot of small world networks which arise in nature. Many different types of human (and non-human) relationship network have the small world property. So too do many artificial networks, like the power grid, or the pattern of links on the internet.

This is not surprising: based on the argument above we’d expect that any network which starts out highly clustered, and which has a little disorder added, will have the small world property. Most natural systems are noisy, so we’d expect that pretty much any highly clustered network arising in nature would possess the small world property.

Of course, the fact that small world networks are common doesn’t necessarily make them interesting. Are there any interesting consequences which follow from a network having the small world property?

As an example where this is the case, Watts and Strogatz present a simple model of the spread of infectious diseases through a network, and argue that diseases ought to spread much more rapidly in a small world network than they would through a network that does not have the small world property. It’s not difficult to think up other potential consequences of a network having the small world property.

More generally, there’s an extremely interesting idea here: the idea that we can understand things about the function of a network (i.e., its dynamics), simply by looking at its static structure. This is, I gather, quite an old idea, in certain specific contexts – epidemiology, social networks – but the Watts and Strogatz paper emphasizes that it may be possible to study this connection from a more abstract point of view, and reach more general conclusions, while at the same time making the connection to concrete examples.

Analogous ideas are, of course, used in physics, where simple static properties of a system (e.g., its dimensionality) can be used to make predictions about its dynamical properties (e.g., whether it can undergo a phase transition at nonzero temperature). But it’s by no means so clear a priori that the same idea can be successfully applied to more complex systems, and I think that one nice thing about the Watts and Strogatz paper is that it suggests a wide class of examples where this can potentially be done.

Indeed, these ideas suggest an entire broad research program, with several interlocking tasks:

  • Building a taxonomy of static and dynamic properties whose presence or absence in a network can be easily checked. Some of these properties might be rather specialized, while others might be quite generally applicable in all networks.
  • Collecting data about real networks.
  • Guessing at possible connections between static and dynamic properties.
  • Checking whether these connections actually hold, both through empirical work, and through theoretical argument.
  • Ideally, this will then allow us to make predictions about the properties of other networks, and perhaps exert some kind of control over those properties.

For me personally, I can see a lot of interesting avenues over on the theoretical side, developing the appropriate mathematical tools to illuminate these kinds of questions. Of course, I’m very ignorant of what has already been done in this vein, and it would be useful to spend some time (a) figuring out what the important problems to address are, and (b) learning a bit about what has already been done.

RSS feed

One unanticipated effect of my move from Movable Type to WordPress was that the old RSS feed stopped working. There is a new feed at:

http://www.qinfo.org/people/nielsen/blog/wp-rss2.php

that seems to function correctly, at least so far as bloglines is concerned.

Comments or technical assistance from anyone who uses RSS would be very greatly appreciated! It’d be good to know if other people see the feed working properly, or not.
