Polylog parameterizability
In this post I will try to describe what I believe Deolalikar means by "polylog parameterizability", a key concept in his work.
Let [math]\displaystyle{ \mu }[/math] be a probability distribution on the cube [math]\displaystyle{ \{0,1\}^n }[/math]. Then to describe this distribution fully, one needs [math]\displaystyle{ 2^n-1 }[/math] real parameters; one real number [math]\displaystyle{ \mu(\{x\}) }[/math] for each of the [math]\displaystyle{ 2^n }[/math] elements of the cube, minus one for the constraint [math]\displaystyle{ \sum_{x \in \{0,1\}^n} \mu(\{x\}) = 1 }[/math].
Roughly speaking, a probability distribution is "polylog parameterizable" if it requires far fewer than [math]\displaystyle{ 2^n }[/math] real numbers to describe; more precisely, only [math]\displaystyle{ \exp( O( polylog(n) ) ) }[/math] of them. More precisely still, we ask that the distribution function [math]\displaystyle{ \mu(x_1,\ldots,x_n): \{0,1\}^n \to [0,1] }[/math] admit a factorisation into potentials
- [math]\displaystyle{ \mu(x_1,\ldots,x_n) = \prod_{j=1}^J p_j( x_{C_j} ) }[/math]
where [math]\displaystyle{ J = \exp(O(polylog(n))) }[/math], the [math]\displaystyle{ C_j }[/math] are subsets of [math]\displaystyle{ \{1,\ldots,n\} }[/math] of size O(polylog(n)), the [math]\displaystyle{ x_{C_j} := (x_i)_{i \in C_j} \in \{0,1\}^{C_j} }[/math] are the literals in [math]\displaystyle{ C_j }[/math], and [math]\displaystyle{ p_j: \{0,1\}^{C_j} \to {\mathbf R} }[/math] is a function. The point is that each factor [math]\displaystyle{ p_j }[/math] only takes [math]\displaystyle{ \exp(O(polylog(n))) }[/math] real numbers to describe, and so the entire distribution also only requires [math]\displaystyle{ \exp(O(polylog(n))) }[/math] real numbers to describe. We also ask that the [math]\displaystyle{ C_j }[/math] are organised along a directed acyclic graph in some technical sense which will not be detailed here.
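To make the parameter count concrete, here is a minimal Python sketch (mine, purely for illustration; the index sets and factor tables are arbitrary, the coordinates are 0-indexed, and the directed acyclic graph structure is not modelled). It stores a distribution on [math]\displaystyle{ \{0,1\}^6 }[/math] with 14 factor entries rather than the generic 63 parameters.

```python
from itertools import product

n = 6   # toy dimension; "polylog(n)" smallness is only suggested at this scale

# Illustrative factors: each p_j is a probability table on the coordinates C_j.
# Here the C_j are disjoint, so the product of the p_j is automatically a
# probability distribution; overlapping C_j (as in the circuit example later in
# this post) are also allowed, as long as the product still sums to one.
factors = [
    ((0, 1),    {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}),
    ((2, 3, 4), {bits: 1.0 / 8 for bits in product((0, 1), repeat=3)}),
    ((5,),      {(0,): 0.7, (1,): 0.3}),
]

def mu(x):
    """mu(x_1,...,x_n) = prod_j p_j(x_{C_j}), with x a tuple in {0,1}^n."""
    w = 1.0
    for C, p in factors:
        w *= p[tuple(x[i] for i in C)]
    return w

# The factors take sum_j 2^{|C_j|} = 4 + 8 + 2 = 14 numbers to write down,
# versus 2^n - 1 = 63 for a generic distribution on {0,1}^6.
assert abs(sum(mu(x) for x in product((0, 1), repeat=n)) - 1.0) < 1e-12
```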
The purpose of the FO(LFP) machinery is (I think) ultimately to obtain a result of the following form:
- Claim. If a solution space in [math]\displaystyle{ \{0,1\}^n }[/math] can be "explored by a P algorithm" in a suitable sense, then the uniform distribution [math]\displaystyle{ \mu }[/math] of that space is the _projection_ of a polylog-parameterisable measure [math]\displaystyle{ \mu' }[/math] on a subset of a polynomially larger space [math]\displaystyle{ \{0,1\}^{n^{O(1)}} }[/math].
However, a key point that does not seem to be sufficiently addressed in the paper is whether the projection of a polylog-parameterisable measure remains polylog-parameterisable; I believe this could in fact be a serious problem with the proof.
Let me illustrate the above Claim with a simple example, that of a straight-line program. Let [math]\displaystyle{ N := n^{100} }[/math], and consider a boolean function [math]\displaystyle{ f: \{0,1\}^n \to \{0,1\} }[/math] obtained by the formula
- [math]\displaystyle{ f(x_1,\ldots,x_n) := x_N }[/math]
where [math]\displaystyle{ x_{n+1},\ldots,x_N }[/math] are obtained from previous literals by formulae of the form
- [math]\displaystyle{ x_i = f_i( x_{a_i}, x_{b_i} ) }[/math] (1)
for [math]\displaystyle{ 1 \leq a_i,b_i < i }[/math] and some logic gates [math]\displaystyle{ f_i : \{0,1\}^2 \to \{0,1\} }[/math]. Now let
- [math]\displaystyle{ S := \{ (x_1,\ldots,x_n, f(x_1,\ldots,x_n)): (x_1,\ldots,x_n) \in \{0,1\}^n \} \subset \{0,1\}^{n+1} }[/math]
be the graph of f, and let [math]\displaystyle{ \mu }[/math] be the uniform distribution on S, thus [math]\displaystyle{ \mu }[/math] assigns a probability [math]\displaystyle{ 2^{-n} }[/math] to each element of S.
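For concreteness, here is a small Python sketch of such a straight-line program, its graph S, and the uniform measure [math]\displaystyle{ \mu }[/math] on S. The particular gates and the tiny values of n and N are illustrative choices of mine, not anything taken from the paper.

```python
from itertools import product

# Three binary gate truth tables used below; any logic gates would do.
XOR = {(a, b): a ^ b for a, b in product((0, 1), repeat=2)}
AND = {(a, b): a & b for a, b in product((0, 1), repeat=2)}
OR  = {(a, b): a | b for a, b in product((0, 1), repeat=2)}

n, N = 4, 8   # in the text N = n^100; kept tiny here so everything is enumerable

# Each entry (i, a_i, b_i, f_i) encodes the assignment x_i = f_i(x_{a_i}, x_{b_i}) of (1).
program = [
    (5, 1, 2, XOR),
    (6, 3, 4, AND),
    (7, 5, 6, OR),
    (8, 7, 1, XOR),
]

def run(x):
    """Extend an input (x_1,...,x_n) to the full trace (x_1,...,x_N) using (1)."""
    trace = dict(enumerate(x, start=1))   # literals are 1-indexed
    for i, a, b, gate in program:
        trace[i] = gate[(trace[a], trace[b])]
    return tuple(trace[i] for i in range(1, N + 1))

def f(x):
    return run(x)[-1]                     # f(x_1,...,x_n) := x_N

# The graph S of f, and the uniform measure mu giving mass 2^{-n} to each point of S.
S = {(*x, f(x)) for x in product((0, 1), repeat=n)}
mu = {s: 2.0 ** (-n) for s in S}
```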
As f is fairly complicated (indeed, it is not obviously distinguishable from a random function), it does not seem that [math]\displaystyle{ \mu }[/math] itself is polylog parameterisable. However, [math]\displaystyle{ \mu }[/math] can be lifted to a probability measure [math]\displaystyle{ \mu' }[/math] on the larger space [math]\displaystyle{ \{0,1\}^N }[/math] which _is_ polylog parameterisable. Namely, let [math]\displaystyle{ S' \subset \{0,1\}^N }[/math] be the set of all tuples [math]\displaystyle{ (x_1,\ldots,x_N) \in \{0,1\}^N }[/math] for which (1) holds for all [math]\displaystyle{ i=n+1,\ldots,N }[/math]. Clearly, the map [math]\displaystyle{ (x_1,\ldots,x_N) \mapsto (x_1,\ldots,x_n,x_N) }[/math] from [math]\displaystyle{ \{0,1\}^N }[/math] to [math]\displaystyle{ \{0,1\}^{n+1} }[/math] maps S' to S in a bijective manner, and so the uniform measure [math]\displaystyle{ \mu' }[/math] on S' projects to the uniform measure [math]\displaystyle{ \mu }[/math] on S.
On the other hand, [math]\displaystyle{ \mu' }[/math] is polylog-parameterisable. Indeed, we have the explicit factorisation
- [math]\displaystyle{ \mu'(x_1,\ldots,x_N) := 2^{-n} \prod_{i=n+1}^N 1_{x_i = f_i(x_{a_i},x_{b_i})} }[/math].
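Continuing the toy program from the previous sketch (so `product`, `program`, `n`, `N`, and the dictionary `mu` are as defined there), one can write [math]\displaystyle{ \mu' }[/math] down directly from this factorisation and check that pushing it forward under [math]\displaystyle{ (x_1,\ldots,x_N) \mapsto (x_1,\ldots,x_n,x_N) }[/math] recovers [math]\displaystyle{ \mu }[/math]:

```python
def mu_prime(y):
    """mu'(x_1,...,x_N) = 2^{-n} * prod_{i>n} 1_{x_i = f_i(x_{a_i}, x_{b_i})};
    each factor involves only the three literals x_i, x_{a_i}, x_{b_i}."""
    trace = dict(enumerate(y, start=1))
    w = 2.0 ** (-n)
    for i, a, b, gate in program:
        w *= 1.0 if trace[i] == gate[(trace[a], trace[b])] else 0.0
    return w

# Push mu' forward under (x_1,...,x_N) -> (x_1,...,x_n,x_N) and compare with mu.
projected = {}
for y in product((0, 1), repeat=N):
    w = mu_prime(y)
    if w > 0.0:
        key = y[:n] + (y[-1],)
        projected[key] = projected.get(key, 0.0) + w

assert projected == mu   # the projection of mu' is exactly the uniform measure on S
```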
In a similar spirit, consider now the solution space
- [math]\displaystyle{ T := \{ (x_1,\ldots,x_n): f(x_1,\ldots,x_n) = 1 \} \subset \{0,1\}^n }[/math]
of f, and assume that this space is non-empty. Let [math]\displaystyle{ \nu }[/math] be the uniform measure on this solution space, thus [math]\displaystyle{ \nu(\{x\}) = 1/|T| }[/math] for all [math]\displaystyle{ x \in T }[/math]. Then [math]\displaystyle{ \nu }[/math] is the projection under the map [math]\displaystyle{ (x_1,\ldots,x_N) \mapsto (x_1,\ldots,x_n) }[/math] of the polylog parameterisable measure
- [math]\displaystyle{ \nu'(x_1,\ldots,x_N) := |T|^{-1} 1_{x_N=1} \prod_{i=n+1}^N 1_{x_i = f_i(x_{a_i},x_{b_i})} }[/math].
So once again, the complicated measure [math]\displaystyle{ \nu }[/math] can be viewed as the projection of a much simpler measure [math]\displaystyle{ \nu' }[/math].
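In the same toy setting (continuing the sketches above), [math]\displaystyle{ \nu' }[/math] differs from [math]\displaystyle{ \mu' }[/math] only in the normalisation and the single extra factor [math]\displaystyle{ 1_{x_N=1} }[/math], and one can check that it projects to [math]\displaystyle{ \nu }[/math]; again this is purely illustrative.

```python
# The solution space T = {x : f(x) = 1} and the uniform measure nu on it.
T = [x for x in product((0, 1), repeat=n) if f(x) == 1]
assert T                                  # as in the text, assume T is non-empty

def nu_prime(y):
    """nu'(x_1,...,x_N) = |T|^{-1} * 1_{x_N = 1} * prod_{i>n} 1_{x_i = f_i(x_{a_i}, x_{b_i})}."""
    trace = dict(enumerate(y, start=1))
    w = (1.0 / len(T)) * (1.0 if trace[N] == 1 else 0.0)
    for i, a, b, gate in program:
        w *= 1.0 if trace[i] == gate[(trace[a], trace[b])] else 0.0
    return w

# Push nu' forward under (x_1,...,x_N) -> (x_1,...,x_n) and compare with nu.
projected_nu = {}
for y in product((0, 1), repeat=N):
    w = nu_prime(y)
    if w > 0.0:
        projected_nu[y[:n]] = projected_nu.get(y[:n], 0.0) + w

assert projected_nu == {x: 1.0 / len(T) for x in T}   # the projection of nu' is nu
```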