Expander graphs IV: Expansion and the eigenvalue gap
I return to expanders today, with a discussion of expansion and eigenvalue gaps. Thanks to all the commenters on earlier posts for all the proof ideas and simplifications!
Note: This post is one in a series introducing one of the deepest ideas in modern computer science, expander graphs. Expanders are one of those powerful ideas that crop up in many apparently unrelated contexts, and that have a phenomenally wide range of uses. The goal of the posts is to explain what an expander is, and to learn just enough about them that we can start to understand some of their amazing uses. The posts require a little background in graph theory, computer science, linear algebra and Markov chains (all at about the level of a first course) to be comprehensible. I am not an expert on expanders, and the posts are just an introduction. They are mostly based on some very nice 2003 lecture notes by Nati Linial and Avi Wigderson, available on the web at http://www.math.ias.edu/~boaz/ExpanderCourse/.
In today’s post we really reap the benefits of recasting graph theory in linear algebraic terms, showing that the expansion parameter can be related to the gap between the largest and second-largest eigenvalues of the graph. This ought to surprise you, perhaps even shock you. It’s a highly non-trivial result, with the big payoff of relating something we didn’t really understand all that well (the expansion parameter) to eigenvalue gaps, about which an enormous amount is known.
Expansion and the eigenvalue gap
Let’s return our attention to expander graphs, and see what the eigenvalues of a graph have to do with its expansion parameter. We define the gap for a $d$-regular graph $G$ to be the difference $\Delta(G) \equiv d - \lambda_2(G)$ between the largest eigenvalue $\lambda_1(G) = d$ and the second-largest eigenvalue $\lambda_2(G)$ of its adjacency matrix. The expansion parameter $h(G)$ and the gap are connected by the following theorem:
Theorem: The expansion parameter $h(G)$ for a $d$-regular graph $G$ is related to the gap $\Delta(G) = d - \lambda_2(G)$ by:

$\frac{\Delta(G)}{2} \leq h(G) \leq \sqrt{2 d \Delta(G)}.$
Thus, properties of the eigenvalue gap can be used to deduce properties of the expansion parameter. For example, if the eigenvalue gap for a family of $d$-regular graphs is bounded below by a positive constant, then the expansion parameter must also be bounded below by a positive constant, and so the family is an expander.
One reason for finding the connection between the gap and the expansion parameter interesting is that it is far easier to estimate the gap of an $n \times n$ matrix than it is to enumerate the exponentially many subsets $S$ of the vertex set $V$, and compute $|\partial S|/|S|$ for each one.
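To make this contrast concrete, here is a short numerical sketch (using numpy; the complete graph $K_4$, which is $3$-regular, is my choice of toy example) that computes the expansion parameter by brute force over all small subsets, computes the eigenvalue gap, and checks that the theorem’s two inequalities hold:

```python
import itertools
import numpy as np

# Adjacency matrix of K4, a 3-regular graph on n = 4 vertices.
n, d = 4, 3
A = np.ones((n, n)) - np.eye(n)

# Brute-force expansion parameter: minimize (boundary edges of S) / |S|
# over all subsets S with |S| <= n/2 -- exponentially many in general.
h = min(
    sum(A[i, j] for i in S for j in range(n) if j not in S) / len(S)
    for size in range(1, n // 2 + 1)
    for S in itertools.combinations(range(n), size)
)

# Spectral gap: the eigenvalues of K4 are 3, -1, -1, -1, so the gap is 4.
eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
gap = eigs[0] - eigs[1]

print(h)                                     # 2.0
print(gap)                                   # 4.0
print(gap / 2 <= h <= np.sqrt(2 * d * gap))  # True
```

For $K_4$ the lower bound is tight: $\Delta/2 = 2 = h$. Of course, the point of the theorem is that for large graphs we get to skip the brute-force loop entirely and just compute the gap.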
Proof discussion: We already understand that $\lambda_1 = d$ for this graph, with eigenvector $(1, 1, \ldots, 1)/\sqrt{n}$. So we’ll concentrate on trying to understand the behaviour of the second largest eigenvalue, $\lambda_2$. The theorem tells us that the difference between $d$ and $\lambda_2$ is controlled both above and below by the expansion parameter $h(G)$.
How can we get control over the second largest eigenvalue of $A$? One way is to observe that $\lambda_2$ is just the maximum of the expression $v^T A v / v^T v$, where $A$ is the adjacency matrix of $G$, and we maximize over all vectors $v$ orthogonal to the eigenvector $(1, 1, \ldots, 1)$. An encouraging fact is that this expression is quite easy to deal with, because the condition that $v$ be orthogonal to $(1, 1, \ldots, 1)$ is actually equivalent to the sum of $v$’s entries being equal to $0$, so we have

$\lambda_2 = \max_{v : \sum_j v_j = 0} \frac{v^T A v}{v^T v},$

where $\sum_j v_j$ is just the sum of the entries of the vector $v$.
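This variational characterization is easy to check numerically. The sketch below (numpy assumed; the $6$-cycle $C_6$ is my choice of example) projects out the all-ones direction and confirms that the maximum Rayleigh quotient over the subspace $\{v : \sum_j v_j = 0\}$ reproduces $\lambda_2$:

```python
import numpy as np

# Adjacency matrix of the 6-cycle C6, a 2-regular graph.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

# P projects orthogonally onto the subspace { v : sum_j v_j = 0 },
# i.e., it removes the component along the all-ones eigenvector.
P = np.eye(n) - np.ones((n, n)) / n

# On that subspace the maximum of v^T A v / v^T v is the largest
# eigenvalue of P A P (valid here since lambda_2 >= 0 for C6).
lam2_via_rayleigh = np.linalg.eigvalsh(P @ A @ P)[-1]
lam2_direct = np.sort(np.linalg.eigvalsh(A))[-2]

print(np.isclose(lam2_via_rayleigh, lam2_direct))  # True
```

For $C_6$ both computations give $\lambda_2 = 1$, as expected from the cycle’s eigenvalues $2\cos(2\pi k/6)$.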
We’re going to provide a lower bound on $\lambda_2$ by simply guessing a good choice of $v$ satisfying $\sum_j v_j = 0$, and using the fact that

$\lambda_2 \geq \frac{v^T A v}{v^T v}.$
To make a good guess, it helps to have a way of thinking about expressions like $v^T A v$, where $\sum_j v_j = 0$. A convenient way of thinking is to rewrite $v$ as the difference of two disjoint probability distributions, $p$ and $q$, i.e., $v = p - q$, where $p$ and $q$ are non-negative vectors each summing to $1$, and with disjoint support. This results in terms like $p^T A q$, which we can think of in terms of transition probabilities between the support of $p$ and the support of $q$. This will allow us to apply the expansion properties of the graph.
Let’s make these ideas a little more concrete. The key is to define $1_S$ to be the vector whose entries are $1$ on $S$, and $0$ elsewhere, and to observe that

$1_S^T A 1_T = E(S, T),$

where $E(S, T)$ is the number of edges between the vertex sets $S$ and $T$ (when $S$ and $T$ overlap, each edge inside the overlap is counted twice). This suggests that we should choose $p$ and $q$ in terms of vectors like $1_S$, since it will enable us to relate expressions like $v^T A v$ to the sizes of various edge sets, which, in turn, can be related to the expansion parameter.
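As a quick sanity check, here is a small numerical sketch (numpy assumed; the $5$-cycle and the split $S = \{0,1\}$, $T = \{2,3,4\}$ are hypothetical choices made just for illustration) verifying that $1_S^T A 1_T$ counts the edges between disjoint $S$ and $T$:

```python
import numpy as np

# Adjacency matrix of the 5-cycle C5 (a 2-regular graph).
n = 5
A = np.zeros((n, n), dtype=int)
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

def indicator(S, n):
    """The vector 1_S: 1 on the vertices in S, 0 elsewhere."""
    v = np.zeros(n, dtype=int)
    v[list(S)] = 1
    return v

S, T = {0, 1}, {2, 3, 4}  # a set and its complement
one_S, one_T = indicator(S, n), indicator(T, n)

# 1_S^T A 1_T counts the edges between the disjoint sets S and T:
# here the two boundary edges (1,2) and (4,0).
print(one_S @ A @ one_T)  # 2
```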
Suppose in particular that we choose

$v = \frac{1_S}{|S|} - \frac{1_{\bar S}}{|\bar S|},$

where $\bar S$ is the complement of $S$ in the vertex set.
This satisfies the condition $\sum_j v_j = 0$, and gives

$v^T A v = \frac{E(S, S)}{|S|^2} - \frac{2 E(S, \bar S)}{|S| |\bar S|} + \frac{E(\bar S, \bar S)}{|\bar S|^2}, \qquad v^T v = \frac{1}{|S|} + \frac{1}{|\bar S|}.$
The definition of an expander graph gives us control over $E(S, \bar S)$, so it is convenient to rewrite $E(S, S)$ and $E(\bar S, \bar S)$ in terms of $E(S, \bar S)$, using the $d$-regularity of the graph:

$E(S, S) = d |S| - E(S, \bar S), \qquad E(\bar S, \bar S) = d |\bar S| - E(S, \bar S).$
A straightforward substitution and a little algebra gives:

$v^T A v = d \left( \frac{1}{|S|} + \frac{1}{|\bar S|} \right) - E(S, \bar S) \left( \frac{1}{|S|} + \frac{1}{|\bar S|} \right)^2.$
Dividing by the denominator $v^T v = 1/|S| + 1/|\bar S|$, we obtain

$\lambda_2 \geq \frac{v^T A v}{v^T v} = d - E(S, \bar S) \left( \frac{1}{|S|} + \frac{1}{|\bar S|} \right).$
Now choose $S$ so that $|S| \leq n/2$ and $E(S, \bar S) = h(G) |S|$, giving after a little algebra

$\lambda_2 \geq d - h(G) \left( 1 + \frac{|S|}{|\bar S|} \right) \geq d - 2 h(G),$
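The whole chain of reasoning can be checked on a small example. The sketch below (numpy assumed; the $6$-cycle $C_6$, with $d = 2$ and $h(C_6) = 2/3$ achieved by three consecutive vertices, is my choice) builds the test vector $v$ and confirms that its Rayleigh quotient equals $d - 2h(G)$ and lower-bounds $\lambda_2$:

```python
import numpy as np

# Adjacency matrix of the 6-cycle C6, a 2-regular graph (d = 2).
n, d = 6, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

# S = three consecutive vertices; its boundary has E(S, Sbar) = 2 edges,
# so this S achieves the expansion h(C6) = 2/3.
S, Sbar = [0, 1, 2], [3, 4, 5]
h = 2 / len(S)

# The test vector v = 1_S/|S| - 1_Sbar/|Sbar|; its entries sum to 0.
v = np.zeros(n)
v[S] = 1 / len(S)
v[Sbar] = -1 / len(Sbar)

rayleigh = (v @ A @ v) / (v @ v)  # lower bound on lambda_2
lam2 = np.sort(np.linalg.eigvalsh(A))[-2]

# Since |S| = |Sbar| here, the quotient is exactly d - 2h.
print(np.isclose(rayleigh, d - 2 * h))  # True
print(rayleigh <= lam2 + 1e-9)          # True: lambda_2 >= d - 2h
```

Here $d - 2h = 2/3$ while the true $\lambda_2 = 1$, so the bound holds with room to spare; the inequality in the theorem is generally not tight.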
which rearranges to $h(G) \geq \Delta(G)/2$, the first of the two desired inequalities in the theorem.
The proof of the second inequality is a little more complicated. Unfortunately, I haven’t managed to boil the proof down to a form that I’m really happy with, and for this reason I won’t describe the details. If you’re interested, you should try to prove it yourself, or refer to the notes of Linial and Wigderson.
Problem: Can we generalize this result so that it applies to a general undirected graph $G$, not just to $d$-regular graphs? Can we prove an analogous statement for directed graphs, perhaps in terms of singular values? Can we define a generalized notion of “expansion” which can be applied to any symmetric matrix with non-negative entries, and connect that notion of expansion to the eigenvalue gap? Can we generalize even further? What happens if we change the field over which the matrix is considered?