
Expander graphs I: Introduction

by Michael Nielsen on May 24, 2005

Note: This post is one in a series introducing one of the deepest ideas in modern computer science, expander graphs. Expanders are one of those powerful ideas that crop up in many apparently unrelated contexts, and that have a phenomenally wide range of uses. The goal of the posts is to explain what an expander is, and to learn just enough about them that we can start to understand some of their amazing uses. The posts require a little background in graph theory, computer science, linear algebra and Markov chains (all at about the level of a first course) to be comprehensible. I am not an expert on expanders, and the posts are just an introduction. They are mostly based on some very nice 2003 lecture notes by Nati Linial and Avi Wigderson, available on the web at http://www.math.ias.edu/~boaz/ExpanderCourse/.

Introduction to expanders

Expander graphs are one of the deepest tools of theoretical computer science and discrete mathematics, popping up in all sorts of contexts since their introduction in the 1970s. Here’s a list of some of the things that expander graphs can be used to do. Don’t worry if not all the items on the list make sense: the main thing to take away is the sheer range of areas in which expanders can be applied.

  • Reduce the need for randomness: That is, expanders can be used to reduce the number of random bits needed to make a probabilistic algorithm work with some desired probability.
  • Find good error-correcting codes: Expanders can be used to construct error-correcting codes for protecting information against noise. Most astonishingly for information theorists, expanders can be used to find error-correcting codes which are efficiently encodable and decodable, with a non-zero rate of transmission. This is astonishing because finding codes with these properties was one of the holy grails of coding theory for decades after Shannon’s pioneering work on coding and information theory back in the 1940s.
  • A new proof of PCP: One of the deepest results in computer science is the PCP theorem, which tells us that for all languages L in NP there is a randomized polynomial-time proof verifier which need only check a constant number of bits in a purported proof that x \in L or x \not \in L, in order to determine (with high probability of success) whether the proof is correct or not. This result, originally established in the early 1990s, has recently been given a new proof based on expanders.

What’s remarkable is that none of the topics on this list appear to be related, a priori, to any of the other topics, nor do they appear to be related to graph theory. Expander graphs are one of these powerful unifying tools, surprisingly common in science, that can be used to gain insight into an astonishing range of apparently disparate phenomena.

I’m not an expert on expanders. I’m writing these notes to help myself (and hopefully others) to understand a little bit about expanders and how they can be applied. I’m not learning about expanders with any specific intended application in mind, but rather because they seem to be behind some of the deepest insights we’ve had in recent years into information and computation.

What is an expander graph? Informally, it’s a graph G = (V,E) in which every subset S of vertices expands quickly, in the sense that it is connected to many vertices in the set \overline S of complementary vertices. Making this definition precise is the main goal of the remainder of this post.

Suppose G = (V,E) has n vertices. For a subset S of V we define the edge boundary of S, \partial S, to be the set of edges connecting S to its complement, \overline S. That is, \partial S consists of all those edges (v,w) such that v \in S and w \not \in S. The expansion parameter for G is defined by

   h(G) \equiv \min_{S: |S| \leq n/2} \frac{|\partial S|}{|S|},

where |X | denotes the size of a set X.
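To make the definition concrete, here is a brute-force computation of h(G) (a minimal sketch of my own: the adjacency-list representation and the function name are assumptions, and since it enumerates every subset S with |S| \leq n/2 it is only practical for very small graphs):

```python
from itertools import combinations

def expansion_parameter(adj):
    """Compute h(G) = min over nonempty S with |S| <= n/2 of |dS| / |S|.

    adj maps each vertex to the set of its neighbours.
    """
    vertices = list(adj)
    n = len(vertices)
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in combinations(vertices, k):
            S_set = set(S)
            # |dS|: count edges with exactly one endpoint in S.
            boundary = sum(1 for v in S_set for w in adj[v] if w not in S_set)
            best = min(best, boundary / k)
    return best

# Example: the 4-cycle. The worst set is a pair of adjacent vertices,
# with 2 boundary edges, so h = 2/2 = 1.
cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(expansion_parameter(cycle4))  # → 1.0
```

Cycles make poor expanders: splitting a 2m-cycle into two arcs of m vertices gives only 2 boundary edges, so h = 2/m, which tends to 0 as the cycle grows.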

One standard condition to impose on expander graphs is that they be d-regular graphs, for some constant d, i.e., they are graphs in which every vertex has the same degree, d. I must admit that I’m not entirely sure why this d-regularity condition is imposed. One possible reason is that doing this simplifies a remarkable result which we’ll discuss later, relating the expansion parameter h(G) to the eigenvalues of the adjacency matrix of G. (If you don’t know what the adjacency matrix is, we’ll give a definition later.)

Example: Suppose G is the complete graph on n vertices, i.e., the graph in which every vertex is connected to every other vertex. Then each vertex in S is connected to all the vertices in \overline S, and thus |\partial S| = |S| \times |\overline S| = |S|(n-|S|). It follows that the expansion parameter is given by

   h(G) = \min_{S: |S|\leq n/2} n-|S| = \left\lceil \frac{n}{2} \right\rceil.
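As a quick numerical sanity check (this loop is my own, not from the post), the closed form \lceil n/2 \rceil can be verified against a direct minimization for small n:

```python
import math

# The minimum of n - |S| over 1 <= |S| <= floor(n/2) is attained at the
# largest allowed |S| = floor(n/2), giving n - floor(n/2) = ceil(n/2).
for n in range(2, 10):
    h = min(n - s for s in range(1, n // 2 + 1))
    assert h == math.ceil(n / 2)
print("h(K_n) = ceil(n/2) confirmed for n = 2..9")
```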

For reasons I don’t entirely understand, computer scientists are most interested in the case when the degree, d, is a small constant, like d = 2,3 or 4, not d=n-1, as is the case for the complete graph. Here’s an example with constant degree.

Example: Suppose G is an n \times n square lattice in 2 dimensions, with periodic boundary conditions (so as to make the graph 4-regular). Then if we consider a large connected subset of the vertices, S, it ought to be plausible that the edge boundary set \partial S contains roughly one edge for each vertex on the perimeter of the region S. We expect there to be roughly \sqrt{|S|} such vertices, since we are in two dimensions, and so |\partial S|/|S| \approx 1/\sqrt{|S|}. Since the graph can contain regions S with up to O(n^2) vertices, we expect

   h(G) = O\left( \frac{1}{n} \right)

for this graph. I do not know the exact result, but am confident that this expression is correct, up to constant factors and higher-order corrections. It’d be a good exercise to figure out exactly what h(G) is. Note that as the lattice size is increased, the expansion parameter decreases, tending toward 0 as n\rightarrow \infty.
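For very small lattices we can compute h(G) exactly by brute force and watch it shrink as n grows (a sketch of my own; the enumeration is exponential in n^2, so n = 3 or 4 is about the practical limit):

```python
from itertools import combinations

def torus_adjacency(n):
    """n x n square lattice with periodic boundary conditions (4-regular)."""
    return {(i, j): {((i + 1) % n, j), ((i - 1) % n, j),
                     (i, (j + 1) % n), (i, (j - 1) % n)}
            for i in range(n) for j in range(n)}

def expansion_parameter(adj):
    """Brute-force h(G) over all subsets S with |S| <= n/2."""
    vertices = list(adj)
    n = len(vertices)
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in combinations(vertices, k):
            S_set = set(S)
            boundary = sum(1 for v in S_set for w in adj[v] if w not in S_set)
            best = min(best, boundary / k)
    return best

for n in (3, 4):
    print(n, expansion_parameter(torus_adjacency(n)))
```

The printed values decrease with n, consistent with the 1/n scaling argued above.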

Example: Consider a random d-regular graph, in which each of n vertices is connected to d other vertices, chosen at random. Let S be a subset of at most n/2 vertices. Then a typical vertex in S will be connected to roughly d \times |\overline S|/n vertices in \overline S, and thus we expect |\partial S| \approx d \times |S| |\overline S|/n, and so

   \frac{|\partial S|}{|S|} \approx d \frac{|\overline S|}{n}.

Since |S| \leq n/2, the complement satisfies |\overline S| \geq n/2, and this bound is approximately attained, so it follows that h(G) \approx d/2, independent of the size n.
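The heuristic estimate h(G) \approx d/2 can be probed numerically. The sketch below (the pairing-model sampler and function names are my own) draws a random 3-regular graph on 10 vertices and computes h(G) exactly by brute force:

```python
import random
from itertools import combinations

def random_regular_graph(n, d, seed=1):
    """Sample a simple d-regular graph via the pairing (configuration) model:
    pair up n*d vertex 'stubs' at random, retrying until there are no
    self-loops or repeated edges. Requires n*d even."""
    rng = random.Random(seed)
    while True:
        stubs = [v for v in range(n) for _ in range(d)]
        rng.shuffle(stubs)
        edges = set()
        simple = True
        for i in range(0, len(stubs), 2):
            u, w = stubs[i], stubs[i + 1]
            if u == w or (u, w) in edges or (w, u) in edges:
                simple = False
                break
            edges.add((u, w))
        if simple:
            adj = {v: set() for v in range(n)}
            for u, w in edges:
                adj[u].add(w)
                adj[w].add(u)
            return adj

def expansion_parameter(adj):
    """Brute-force h(G) over all subsets S with |S| <= n/2."""
    vertices = list(adj)
    n = len(vertices)
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in combinations(vertices, k):
            S_set = set(S)
            boundary = sum(1 for v in S_set for w in adj[v] if w not in S_set)
            best = min(best, boundary / k)
    return best

# The heuristic predicts h ≈ d/2 = 1.5 for large n; a single small sample
# will fluctuate around that value.
adj = random_regular_graph(10, 3)
h = expansion_parameter(adj)
print(f"3-regular graph on 10 vertices: h(G) = {h:.3f}")
```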

Exercise: Show that a disconnected graph always has expansion parameter 0.

In each of our examples, we haven’t constructed just a single graph, but rather an entire family of graphs, indexed by some parameter n, with the property that as n gets larger, so too does the number of vertices in the graph. Having access to an entire family in this way turns out to be much more useful than having just a single graph, a fact which motivates the definition of expander graphs, which we now give.

Suppose we have a family G_j = (V_j,E_j) of d-regular graphs, indexed by j, and such that |V_j| = n_j for some increasing function n_j. Then we say that the family \{ G_j \} is a family of expander graphs if the expansion parameter is bounded strictly away from 0, i.e., there is some small constant c such that h(G_j) \geq c > 0 for all G_j in the family. We’ll often abuse nomenclature slightly, and just refer to the expander \{ G_j \}, or even just G, omitting explicit mention of the entire family of graphs.

In this post we’ve defined expanders, said a little about what they’re useful for, and given an example – d-regular random graphs – of an expander. In the next post, I’ll describe some more explicit and much more useful examples of expander graphs.


4 Comments
  1. R.R. Tucci permalink

    Expander Graphs, Coding/Complexity Theory/Information Theory on Graphs, Bayesian Networks, and last, but not least, Quantum Networks, are all intimately related topics.
    Two bigwigs that have worked on both Bayesian Networks and Coding on Graphs are
    Brendan J. Frey
    and David MacKay.

  2. Another important use of expanders in computational complexity is a reduction from general instances of problems to instances with bounded occurrences of variables.
    For example, MAX 3SAT (the optimization version of 3SAT, in which it is required to find the maximal number of satisfied clauses) can be reduced to MAX 3SAT-29 (MAX 3SAT such that no variable occurs in more than 29 subformulae) using expanders. Then MAX 3SAT-29 can be reduced to MAX 3SAT-5.
    This technique is quite important for proving that certain problems cannot be approximated to within certain approximation factors.
    This reduction is given in a very easy-to-understand text in D. Hochbaum, Approximation Algorithms for NP-Hard Problems, in the chapter written by C. Lund (as far as I remember, Chapter 10 or 11).
    Or in the survey “Inapproximability of Combinatorial Optimization Problems” by Luca Trevisan, arXiv cs.CC/0409043, Part 4.

    I think it is a very good idea to put more technical content into your blog.

  3. Sorry, the first link is incorrect
    it must be
    Hochbaum’s book

  4. Guilherme Mota permalink

    This introduction was very useful to me; I would like to thank you for making expander graphs easier to understand.
