Deolalikar P vs NP paper: Difference between revisions

From Polymath Wiki
{{RightTOC}}
This is a clearinghouse wiki page for aggregating the following types of items:


* [http://rjlipton.wordpress.com/2010/08/10/update-on-deolalikars-proof-that-p%E2%89%A0np/ Update on Deolalikar's Proof that P≠NP], August 10 2010. (Inactive)
* [http://rjlipton.wordpress.com/2010/08/11/deolalikar-responds-to-issues-about-his-p%e2%89%a0np-proof/ Deolalikar Responds To Issues About His P≠NP Proof] August 11, 2010 (Inactive)
* [http://rjlipton.wordpress.com/2010/08/12/fatal-flaws-in-deolalikars-proof Fatal Flaws in Deolalikar’s Proof?] August 12, 2010 (Inactive)
* [http://rjlipton.wordpress.com/2010/08/15/the-p%e2%89%a0np-proof-is-one-week-old/ The P≠NP "Proof" Is One Week Old] August 15, 2010 (Inactive)
* [http://rjlipton.wordpress.com/2010/08/18/proofs-proofs-who-needs-proofs Proofs, Proofs, Who Needs Proofs?] August 18, 2010 (Inactive)
* [http://rjlipton.wordpress.com/2010/08/19/projections-can-be-tricky Projections can be tricky] August 19, 2010 ('''Active''')


== The paper ==
These links are taken from [http://www.hpl.hp.com/personal/Vinay_Deolalikar/ Vinay Deolalikar's web page].


* [http://www.hpl.hp.com/personal/Vinay_Deolalikar/Papers/pnp12pt.pdf First draft], Aug 6, 2010. '''File overwritten several times and then finally removed,''' Aug 17 2010.
* [http://www.hpl.hp.com/personal/Vinay_Deolalikar/Papers/pnp_updated.pdf Second draft] Aug 9, 2010.  '''File removed,''' Aug 10 2010.  
* [http://www.hpl.hp.com/personal/Vinay_Deolalikar/Papers/pnp_updated-mjr.pdf draft 2 + ε], Aug 9 2010. '''File removed,''' Aug 11 2010.
* [http://www.hpl.hp.com/personal/Vinay_Deolalikar/Papers/pnp_8_11.pdf Third draft], Aug 11 2010.  '''File removed,''' Aug 17 2010.  
* [http://www.hpl.hp.com/personal/Vinay_Deolalikar/Papers/pnp_synopsis.pdf Synopsis of proof], Aug 13 2010. '''File removed'''
* [http://www.hpl.hp.com/techreports/2010/HPL-2010-95.html HP Technical report], Aug 20, 2010.  '''File removed,''' Aug 23 2010?


Here is the list of [[Update|updates]] between the different versions.
* (draft 2 + e, p.10): "We reproduce the rigorously proved picture of the 1RSB ansatz that we will need in Chapter 5."  The phrasing makes it sound like we will need the 1RSB ansatz in Chapter 5 instead of saying that it is reproduced in Chapter 5 (which I think is what the author intended).  One fix is to move "in Chapter 5" to the beginning of the sentence.
** Still present in third draft, but now on page 16 (and referring to Chapter 6 instead).
* (Third draft, p. 102): "inspite" → "in spite".
* (Third draft, p. 27, first paragraph): "is to obtain"  → "to obtain".
* (Third draft, p. 19) Should the neighborhood <math>{\mathcal G}_i</math> be <math>{\mathcal N}_i</math> instead (or vice versa)?  Also, this neighborhood contains self-loops and so is technically not a neighborhood system in the sense of Definition 2.3.
* (Third draft, p. 20) Some braces appear to be missing in the first display (just before Definition 2.6).
* (Third draft, p. 24) Some text appears to be obscured by Figure 2.2.
* (Third draft, p. 25) In (2.1) and (2.2) <math>\phi</math> should probably be <math>\phi_3</math>.
* (Third draft, p. 37) Some text appears to be obscured by Figure 3.2.
* (Third draft, p. 43) "The Tarski-Knaster"  → "The Tarski-Knaster theorem".
* (Third draft, p. 44) Right parenthesis missing in (4.3) and immediately afterwards.
* (Third draft, p. 99) Some italicized text appears to be obscured by Figure 8.2.


== Proof strategy ==
== Specific issues ==


* '''P=NP only gives polylog parameterizability property of a lift of the k-SAT solution space, rather than that solution space itself''' 
If P=NP, then the procedure in Section 8.3 of the third draft does not create a [[polylog parameterizability|polylog parameterizable]] distribution on the solution space of a hard phase k-SAT problem itself, but rather on a <I>lift</I> of that solution space in which many additional variables are added, so that one is now working in <math>\{0,1\}^N</math> for some <math>N=n^{O(1)}</math> rather than in <math>\{0,1\}^n</math>.  Of course, one can forget all but the original n variables and return to a distribution on the solution space by projecting, but it is not clear that the polylog parameterizable property is preserved by this (there is no reason why the directed acyclic graph on the N literals has any meaningful projection onto the original n literals).  In particular, the projected distribution is not in a recursively factorizable form, and the cluster geometry of the lifted solution space could be quite different from that of the original k-SAT problem (e.g. the set of frozen variables could change, as could the definition of a cluster, and number of flips required to get from one cluster to the next).  This issue does not seem to be discussed at all in the paper.
To phrase it another way, consider the following excerpt from the paper describing part of the strategy (Page 92, third draft):
:: We have embedded our original set of variates into a <I>polynomially larger</I> product space, and obtained a directed graphical model on this larger space. This product space has a nice factorization due to the directed graph structure. This is what we will exploit.
It is clear here that the problem is being lifted from the original space <math>\{0,1\}^n</math> to a larger space <math>\{0,1\}^N</math>, but then the solution space of (say) k-SAT on <math>\{0,1\}^n</math> also gets lifted to a solution space which could have a completely different geometry.  Given that the proof sketch of P != NP (pages 99-100, third draft) is based on the cluster geometry of the solution space of k-SAT itself, rather than a lift thereof, this is a significant gap in the argument.  (Specifically, the variables <math>\alpha,\beta,\gamma</math> etc. appearing on page 100 are implicitly assumed to belong to the original set of n literals <math>x_1,\ldots,x_n</math>, but due to the embedding into the polynomially larger space, they are in fact likely to instead be scattered among a much larger set <math>x_1,\ldots,x_N</math> of literals.)
* '''The problem in the LFP section of the proof:'''  
::[http://www.cl.cam.ac.uk/~ad260/ Anuj Dawar] [http://rjlipton.wordpress.com/2010/08/10/update-on-deolalikars-proof-that-p%E2%89%A0np/#comment-4914 raises a point] about order-invariance in intermediate stages of the LFP computation:  
::As phrased by [http://rjlipton.wordpress.com/2010/08/11/deolalikar-responds-to-issues-about-his-p%e2%89%a0np-proof#comment-5138 Albert Atserias] and [[Lindell's Critique|Steven Lindell]]:
<blockquote>
Let “[[Polylog parameterizability|polylog-parametrizable]]” be a property of solution spaces yet to be defined.


* Claim 1: The solution spaces of problems in P are polylog-parametrizable.
:: [http://www.cs.umass.edu/~immerman/ Neil Immerman] has written a very interesting letter to Deolalikar critiquing his proof. With Immerman's permission, this letter is reproduced on the following page: [[Immerman's letter]]. Immerman also raises the problem of order-invariance mentioned by Anuj Dawar above.  


* '''The XORSAT objection'''  The claimed proof of P!=NP on pages 99-100 of the third draft assumes for contradiction that the extension problem for k-SAT is in P, concludes that the solution space of a random k-SAT problem is polylog parameterizable (in the ppp sense), and then shows that this is in contradiction with the known cluster geometry of that solution space at the hard phase, so that the k-SAT extension problem is in fact not in P.  However, if this argument were valid, one could substitute k-XORSAT for k-SAT throughout, and obtain the false conclusion that the extension problem for k-XORSAT is similarly not in P.  Thus, for the argument to be valid, there must be a specific step in the argument which works for k-SAT but not for k-XORSAT; but no such specific step has been identified.  (It is indeed true that k-XORSAT is in ppp, while k-SAT is not believed to be in ppp, but this is not a specific step in the argument.)  Indeed, it appears that the argument fails for both k-XORSAT and k-SAT at the same place, namely the projection issue mentioned earlier in which the variables <math>\alpha,\beta,\gamma</math> of the ENSP model are erroneously assumed to belong to the initial set of n variables, rather than the polynomially expanded set.
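
For background on why the k-XORSAT extension problem is in P: each XOR clause is a linear equation over GF(2), and fixing a partial assignment just substitutes constants into that linear system, so satisfiability can be decided by Gaussian elimination. The following is a minimal illustrative sketch (ours, not taken from the paper or the discussion above; the function name and encoding are our own).

<pre>
# Minimal illustrative sketch: deciding an XOR-SAT system by Gaussian elimination
# over GF(2).  An equation is a pair (vars, parity) meaning x_{i1} XOR ... XOR x_{im}
# = parity, with variable indices starting at 1.

def xorsat_satisfiable(equations):
    """Return True iff the GF(2) linear system is solvable."""
    pivots = {}                              # leading bit -> (row mask, parity)
    for vars_, parity in equations:
        mask = 0
        for v in vars_:
            mask ^= 1 << (v - 1)             # repeated variables cancel over GF(2)
        while mask:
            bit = mask.bit_length() - 1
            if bit in pivots:
                pmask, pparity = pivots[bit]
                mask ^= pmask                # eliminate the current leading variable
                parity ^= pparity
            else:
                pivots[bit] = (mask, parity) # new pivot row
                break
        else:
            if parity == 1:                  # row reduced to the contradiction 0 = 1
                return False
    return True

# x1+x2 = 1, x2+x3 = 1, x1+x3 = 1 (mod 2) is unsatisfiable; dropping a clause is not.
assert not xorsat_satisfiable([([1, 2], 1), ([2, 3], 1), ([1, 3], 1)])
assert xorsat_satisfiable([([1, 2], 1), ([2, 3], 1)])
</pre>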


== General issues ==


===Issues with random k-SAT===
'''Complex solution spaces are uncorrelated with time complexity'''. (The below is a greatly expanded version of a series of twitter comments by Ryan Williams, on [http://twitter.com/rrwilliams/status/20741046788 twitter]) The author tries to use the fact that for certain parameterizations, the space of satisfying assignments to a random k-SAT instance (the "solution space") has some intriguing structure. If SAT is in P, then SAT can be captured in a certain logic (equivalent to P in some sense). The author claims that anything captured in this logic can't be used to generate a solution space with this intriguing structure. There are two "meta" objections to this. One is that "intriguing structure in the solution space is not sufficient for NP hardness". The second is that "intriguing structure is not necessary for NP hardness". They don't actually point to a place where the proof is wrong. But they do appear to give an obstacle to the general proof method.
 
:'''How to map satisfiable formulas to trivial problems with exactly the same solution space structure.''' Very easy problems can also have complicated solution distributions. The following arose from discussions with Jeremiah Blocki. Here is one easy way to (non-uniformly!) map any infinite collection of satisfiable formulas into <math>SAT0</math>, the set of formulas satisfied by the all-zeroes assignment. The map completely preserves the number of variables, the number of satisfying assignments, the distances between assignments, the clusters--as far as I (Ryan Williams) can tell it preserves '''every property''' that has been studied about random k-SAT. Observe <math>SAT0</math> is trivially decidable in polynomial time. This should be a barrier to any proof of P vs NP that attempts to argue that the solution space of an NP-hard problem is "too complex" for P. This is the objection which is most germane to the current proposed proof.
 
::The map is simple: for a satisfiable formula <math>F</math> on <math>n</math> variables, let <math>(A_1,\ldots,A_n)</math> be a satisfying assignment for it, and define F' to be the formula obtained by the following procedure: ''flip the signs of all literals involving <math>x_i</math> in <math>F</math> iff <math>A_i = 1</math>''. Now the space of satisfying assignments to <math>F</math> is transformed by the map <math>\phi_A(x_1,\ldots,x_n) = (x_1 \oplus A_1,\ldots,x_n \oplus A_n)</math>. Observe this map completely preserves all distances between assignments, clusters, etc., and that the all-zeroes assignment satisfies <math>F'</math>. So for any infinite collection <math>{C}</math> of satisfiable formulas with "hard solution structure" there is also an infinite collection of formulas <math>{C'}</math> with analogous solution structure, where <math>C'</math> is a subset of <math>SAT0</math>. That is, while the solution spaces of some random formulas may look complicated, it could still be that these formulas constitute a polynomial-time solvable problem for trivial reasons, independently of what the solution space looks like.
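
::For concreteness, here is a small illustrative sketch (in Python; not part of the original discussion, and the function names and clause encoding are our own) of the literal-flipping map just described.

<pre>
# Minimal illustrative sketch of the literal-flipping map.  A formula is a list of
# clauses; a clause is a list of nonzero integers, with +i / -i standing for the
# literals x_i / NOT x_i (DIMACS-style).

def flip_formula(F, A):
    """Flip the sign of every literal on a variable i with A[i] = 1, where A is a
    satisfying assignment of F given as a dict i -> 0/1.  The all-zeroes assignment
    then satisfies the resulting formula F'."""
    return [[-lit if A[abs(lit)] == 1 else lit for lit in clause] for clause in F]

def satisfies(F, x):
    """Check whether the assignment x (dict i -> 0/1) satisfies the formula F."""
    return all(any((x[abs(lit)] == 1) == (lit > 0) for lit in clause) for clause in F)

# Example: F = (x1 OR NOT x2) AND (x2 OR x3), satisfied by A = (1, 1, 0).
F = [[1, -2], [2, 3]]
A = {1: 1, 2: 1, 3: 0}
Fp = flip_formula(F, A)
assert satisfies(Fp, {1: 0, 2: 0, 3: 0})   # the all-zeroes assignment satisfies F'

# phi_A(x) = x XOR A is a distance-preserving bijection between the two solution spaces.
for bits in range(8):
    x = {i + 1: (bits >> i) & 1 for i in range(3)}
    phi = {i: x[i] ^ A[i] for i in x}
    assert satisfies(F, x) == satisfies(Fp, phi)
</pre>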
 
::The map easily extends to the generalization of k-SAT which consists of pairs <math>(F,A)</math>, where <math>F</math> is a k-SAT formula, <math>A</math> is a partial assignment to some variables, and we wish to know if it is possible to complete the partial assignment into a full satisfying assignment. By flipping literals appropriately in <math>F</math>, there is always an instance <math>(G,A)</math> where <math>G</math> can be satisfied by setting the rest of the variables to zero, and yet <math>G|_{A}</math> has an isomorphic solution space to <math>F|_{A}</math>. So for every instance of "k-SAT-Partial-Assignment" we can find an instance of "SAT0-Partial-Assignment" (which is again a trivial problem) with the same solution space structure.
 
::One can in fact get away with a map that just permutes the names of variables. Take a satisfiable <math>F</math> and assignment <math>A</math>, and let <math>\pi: \{1,\ldots,n\} \rightarrow \{1,\ldots,n\}</math> be such that <math>A' = (A_{\pi(1)},\ldots,A_{\pi(n)}) \isin L(0^{\star} 1^{\star})</math>, i.e., <math>A'</math> has the form "a bunch of zeroes followed by a bunch of ones". Now to every <math>x_i</math> occurring in <math>F</math>, replace it with <math>x_{\pi(i)}</math>. All formulas in the image of this map are recognized as satisfiable by a (polynomial time) algorithm that tries all <math>n+1</math> assignments of the form <math>0^{i} 1^{n-i}</math>, and again the distances between assignments are preserved.
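
::The polynomial-time procedure implicit in the last remark can be sketched as follows (illustrative only; <code>satisfies</code> is the helper from the sketch above, and the function name is our own).

<pre>
def satisfies(F, x):   # as in the sketch above
    return all(any((x[abs(lit)] == 1) == (lit > 0) for lit in clause) for clause in F)

def satisfiable_in_image(F, n):
    """Return True iff some assignment of the form 0^i 1^(n-i) satisfies F.
    Every formula in the image of the renaming map is satisfied by one of these
    n+1 candidate assignments, so trying them all suffices."""
    for i in range(n + 1):
        x = {v: (0 if v <= i else 1) for v in range(1, n + 1)}
        if satisfies(F, x):
            return True
    return False
</pre>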
 
:::The above arguments look like cheats. Of course they are! We are exploiting the fact that, for arbitrary problems in NP (and therefore P as well), ''the solution space of that problem is not uniquely well-defined in general''. (Hubie Chen raised a similar point in email correspondence.) A solution space for an instance can only be defined with respect to some polynomial time verifier for the problem: this space is the set of witnesses that make the verifier accept on the instance. ''If you change the verifier, you change the solution space.''  The usual solution space for SAT arises from the verifier (call it <math>V</math>) that checks a candidate assignment against a given formula. If <math>V</math> is used on <math>SAT0</math>, we get a complex solution space. If we use a sane verifier for <math>SAT0</math> instead (the verifier that checks all-zeroes), the solution space becomes trivial. However, if <math>P=NP</math>, then there's also a verifier such that ''SAT has a trivial solution space'', namely the verifier which ignores its witness and just runs a polynomial time algorithm for SAT. The above argument only arises because the notion of solution space was forced to be verifier-independent over P and NP problems (which looks critical to the P vs NP paper).
 
::Note there are also ways to construct infinitely many 2-CNF formulas and XOR-SAT formulas with "complex" distributions, in that the solution space (with respect to the verifier <math>V</math> above) has many clusters, large distances between clusters, frozen variables: the kind of properties one finds with random k-SAT. For example, for any <math>k</math> and <math>n</math> one can easily construct 2-CNF formulas on <math>n</math> variables with <math>2^k</math> satisfying assignments, each of which has <math>n</math> frozen variables and every pair has Hamming distance at least <math>n/k</math>; one such construction is sketched below. (See also "The XOR-SAT Objection" above.)
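
::One standard way to produce such instances is the following sketch (our illustration, assuming for simplicity that <math>k</math> divides <math>n</math>; the function name is our own).

<pre>
# Illustrative construction: split the n variables into k equal blocks and force each
# block to be constant with the 2-clauses (NOT x_i OR x_j) and (x_i OR NOT x_j).
# The satisfying assignments are exactly the 2^k block-constant assignments; any two
# of them differ on a whole block, hence have Hamming distance at least n/k, and each
# forms an isolated cluster in which all n variables are frozen.

def clustered_2cnf(n, k):
    """Return a 2-CNF (list of 2-literal clauses, DIMACS-style) over variables 1..n."""
    clauses = []
    size = n // k
    for b in range(k):
        vs = list(range(b * size + 1, (b + 1) * size + 1))
        for i, j in zip(vs, vs[1:]):
            clauses += [[-i, j], [i, -j]]   # x_i implies x_j, and x_j implies x_i
    return clauses
</pre>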
 
:'''Satisfiable formulas with 'complex' solution spaces can be efficiently mapped to satisfiable formulas with 'simple' solution spaces.''' A hard distribution of solutions is not ''necessary'' for NP-hardness, either. A weird distribution is ''not'' what makes a problem hard; it's the representation of that solution space (e.g., a 3-CNF formula, a 2-CNF formula, etc.) that does. In fact the "hard" case of 3-SAT is the case where there is ''at most one'' satisfying assignment, since there is a randomized reduction from 3-SAT to 3-SAT with at most one satisfying assignment ([http://en.wikipedia.org/wiki/Valiant%E2%80%93Vazirani_theorem Valiant-Vazirani]). This reduction increases the number of clauses and the number of variables, but that doesn't really matter: you can always reduce 3-SAT with a "complex" solution space to one with an "easy" solution space, so how can a proof separating P from NP rely on the former? (Note that, if plausible circuit lower bounds hold up, then Valiant-Vazirani can be derandomized to run in deterministic polynomial time.)
 
::Of course, this point on its own does not invalidate Deolalikar's approach. To prove just one NP-complete problem has a complex solution space would be enough ''if'' it was also proved that all P problems have simple solution spaces. But it is hard to make sense of what this proposition could really mean, in light of the above.
 
:To summarize, there is little correlation between the "hard structure" of the solution space for instances of some problem, and the NP-hardness of that problem. The "boundary" between P and NP does not lie between "hard" and "easy" satisfiable formulas, but rather it lies between the satisfiable and unsatisfiable formulas. More precisely, it is the difficulty of distinguishing between the satisfiable and unsatisfiable that makes SAT hard, rather than the layout of satisfying assignments in some satisfiable formulas. (It may be that lower bounds on specific kinds of SAT algorithms can be proved using solution space structure, but given the above, it is very hard to believe that a lower bound for all algorithms could possibly work this way.)


=== Uniformity issues ===
In [http://gowers.wordpress.com/2010/08/13/could-anything-like-deolalikars-strategy-work this blog post], Tim Gowers points out an "illegitimate sibling" of the natural proofs obstruction that seems to apply to any attempt to separate complexity classes by trying to exploit the fact that the solution spaces of problems Q in easy complexity classes enjoy some simple structural property (which for sake of argument we will call "property A").  To begin with, let us use a simple notion of solution space, namely the space of all x for which the problem Q(x) has an affirmative answer (but note that this is not the notion of solution space used in Deolalikar's paper).  Then the argument goes as follows.  Consider a "pseudorandom polynomial circuit" <math>f: \{0,1\}^n \to \{0,1\}</math> formed by applying a polynomially large (e.g. <math>n^{100}</math>) number of reversible logical gate operations on the n-bit input string, and then outputting (say) the first bit of the final state of that n-bit string as output.  It is plausible that this function is pseudorandom enough that it is indistinguishable from a random function <math>g: \{0,1\}^n \to \{0,1\}</math>.  In particular, if f obeys property A, then a random function should also obey property A.  However, most reasonable candidates for property A constrain the structure of the solution space to the extent that the random function should not obey property A.  This strongly suggests that property A cannot contain all problems in P.  To put it another way, any argument that proceeds by distinguishing easy and hard solution spaces should be able to describe a criterion that can distinguish the functions f and g from each other.
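
To make the construction above concrete, here is a small illustrative sketch (ours; the gate count and parameter values are arbitrary choices, and the function names are our own) of such a pseudorandom polynomial-size circuit, built from reversible Toffoli gates and outputting the first bit of the final state.

<pre>
import random

def random_reversible_circuit(n, num_gates):
    """Sample a circuit as a list of Toffoli gates (a, b, c): if x[a] and x[b], flip x[c]."""
    return [tuple(random.sample(range(n), 3)) for _ in range(num_gates)]

def evaluate(gates, x):
    """Apply the reversible gates to a copy of the n-bit input; output the first bit."""
    x = list(x)
    for a, b, c in gates:
        if x[a] and x[b]:
            x[c] ^= 1
    return x[0]

n = 16
gates = random_reversible_circuit(n, n ** 3)   # polynomially many reversible gates
f = lambda x: evaluate(gates, x)               # f : {0,1}^n -> {0,1}
print(f([0] * n), f([1] * n))
</pre>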


As pointed out by [http://rjlipton.wordpress.com/2010/08/12/fatal-flaws-in-deolalikars-proof/#comment-5326 Jun Tarui], Deolalikar's argument is not directly analysing the solution space <math>\{x: Q(x)=1\}</math> of problems in P, but rather the solution spaces  <math>\{ y: R(x,y)=1\}</math> to ''instances'' of satisfiability problems "Given x, does there exist y for which R(x,y)=1" which are in P (and moreover, that the same should hold even when some partial assignment s of y has already been fixed).  However, one can extend Gowers' objection to this case also.  For instance, let <math>R: \{0,1\}^n \times \{0,1\}^{n} \to \{0,1\}</math> be a pseudorandom circuit (so x has n bits and y has n bits).  Then for n large enough, the satisfiability problem should always have an affirmative answer (and in particular is trivially in P) provided that at least <math>100 \log n</math> of the bits of y are left unassigned (this is basically because <math>\sum_n 2^{-2^{100 \log n}} 2^{O(n)} < \infty</math>), and the remaining cases when there are at most <math>100 \log n</math> bits to assign can be verified by brute force.  On the other hand the solution spaces <math>\{ y: R(x,y)=1\}</math> are still indistinguishable from random subsets of <math>\{0,1\}^n</math>.
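
To spell out the counting behind that parenthetical (a routine verification, not part of the original comment; logarithms are to base 2): for a fixed <math>x</math> and a fixed partial assignment <math>s</math> leaving <math>m \geq 100 \log n</math> bits of <math>y</math> free, each of the <math>2^m</math> completions is accepted independently with probability <math>1/2</math>, so the probability that no completion works is
:<math>2^{-2^m} \leq 2^{-2^{100 \log n}} = 2^{-n^{100}}.</math>
There are at most <math>2^n \cdot 3^n = 2^{O(n)}</math> pairs <math>(x,s)</math>, so the probability that some pair fails at a given <math>n</math> is at most <math>2^{O(n)} 2^{-n^{100}}</math>; these bounds sum to a finite quantity over <math>n</math>, so by Borel-Cantelli only finitely many <math>n</math> admit any bad pair, and those can be absorbed into the brute-force case.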


==== Algebrization ====


See Aaronson and Wigderson, "[http://www.scottaaronson.com/papers/alg.pdf Algebrization: A New Barrier in Complexity Theory]" ''ACM Transactions on Computation Theory'' (2009).


:The paper is all about the local properties of a specific NP-complete problem (k-SAT), and for that reason, I don't think relativization is relevant. Personally, I'm more interested in why the argument makes essential use of uniformity (which is apparently why it's supposed to avoid Razborov-Rudich). ([http://scottaaronson.com/blog/?p=456#comment-44649 Scott Aaronson])
Followup by [http://rjlipton.wordpress.com/2010/08/10/update-on-deolalikars-proof-that-p%e2%89%a0np#comment-4988 Ryan Williams]:
<blockquote>
It is a great idea to try to formally define this barrier and develop its properties. I think the “not necessary” part is pretty well-understood, thanks to Valiant-Vazirani. But the “not sufficient” part, the part relevant to the current paper under discussion, still needs some more rigor behind it. As I related to Lenka Zdeborova, it is easy to construct, for every n, a 2-CNF formula on n variables which has many “clusters” of solutions, where each cluster has large hamming distance from each other, and within the cluster there are a lot of satisfying assignments. But one would like to say something stronger, e.g. “for any 3-CNF formula with solution space S, that space S can be very closely simulated by the solution space S’ for some CSP instance that is polytime solvable”.
</blockquote>


* [[Immerman-Vardi theorem]]
* [http://en.wikipedia.org/wiki/Least_fixed_point Least fixed point] (LFP) in general, and in a [http://en.wikipedia.org/wiki/FO_(complexity)#Least_Fixed_Point_is_PTIME descriptive complexity setting]
* [[Polylog parameterizability]]
* [[Random k-SAT]]
* [[The complexity class NP]]
* [[The complexity class P]]


== Online reactions ==
{{:Online reactions to Deolalikar P vs NP paper}}
 
=== Theory blogs ===
 
* [http://gregbaker.ca/blog/author/greg/ P ≠ NP], Greg Baker, Greg and Kat’s blog, August 7 2010.
* [http://rjlipton.wordpress.com/2010/08/08/a-proof-that-p-is-not-equal-to-np/ A proof that P is not equal to NP?], Richard Lipton, Gödel’s lost letter and P=NP, August 8 2010.
* [http://dabacon.org/pontiff/?p=4286 P<>NP?], Dave Bacon, The Quantum Pontiff, August 8 2010.
* [http://geomblog.blogspot.com/2010/08/on-deolalikar-proof-crowdsourcing.html On the Deolalikar proof: Crowdsourcing the discussion ?], Suresh Venkatasubramanian, The Geomblog, August 9 2010.
* [http://scottaaronson.com/blog/?p=456 Putting my money where my mouth isn’t], Scott Aaronson, Shtetl-Optimized, August 9 2010.
* [http://blog.computationalcomplexity.org/2010/08/that-p-ne-np-proof-whats-up-with-that.html That P ne NP proof- whats up with that?], Bill Gasarch, Computational Complexity, August 9 2010.
* [http://rjlipton.wordpress.com/2010/08/09/issues-in-the-proof-that-p%e2%89%a0np/ Issues In The Proof That P≠NP], Richard Lipton and Ken Regan, Gödel’s lost letter and P=NP, August 9 2010.
* [http://constraints.wordpress.com/2010/08/09/deolalikars-manuscript/ Deolalikar's manuscript], András Salamon, Constraints, August 9 2010.
* [http://aeporreca.org/2010/08/09/proof-that-p-isnt-np/ A relatively serious proof that P != NP ?], Antonio E. Porreca, August 9 2010 (aggregated many of the comments).
* [http://geomblog.blogspot.com/2010/08/polymath-home-for-analysis-of.html A 'polymath' home for analysis of the Deolalikar proof], Suresh Venkatasubramanian, The Geomblog, August 10 2010.
* [http://rjlipton.wordpress.com/2010/08/10/update-on-deolalikars-proof-that-p%E2%89%A0np/ Update on Deolalikar's Proof that P≠NP], Richard Lipton and Ken Regan, Gödel’s lost letter and P=NP, August 10 2010.
* [http://dabacon.org/pontiff/?p=4292 P<>NP Hype], Dave Bacon, The Quantum Pontiff, August 10 2010.
* [http://rjlipton.wordpress.com/2010/08/11/deolalikar-responds-to-issues-about-his-p%e2%89%a0np-proof/ Deolalikar Responds To Issues About His P≠NP Proof] Richard Lipton, Gödel’s lost letter and P=NP, August 11, 2010
* [http://scottaaronson.com/blog/?p=457 The ethics of scientific betting], Scott Aaronson, Shtetl-Optimized, August 11 2010.
* [http://geomblog.blogspot.com/2010/08/p-vs-np-what-ive-learnt-so-far.html P vs NP: What I've learnt so far...], Suresh Venkatasubramanian, The Geomblog, August 11 2010.
* [http://gowers.wordpress.com/2010/08/11/my-pennyworth-about-deolalikar/ My pennyworth about Deolalikar], Timothy Gowers, Aug 11 2010.
* [http://rjlipton.wordpress.com/2010/08/12/fatal-flaws-in-deolalikars-proof Fatal Flaws in Deolalikar’s Proof?], Richard Lipton, Gödel’s lost letter and P=NP, August 12 2010.
* [http://gowers.wordpress.com/2010/08/13/could-anything-like-deolalikars-strategy-work/ Could anything like Deolalikar’s strategy work?], Timothy Gowers, August 13, 2010.
 
=== Media and aggregators ===
 
====8th August====
* [http://news.ycombinator.com/item?id=1585850 P ≠ NP], Hacker News, August 8 2010.
* [http://science.slashdot.org/story/10/08/08/226227/Claimed-Proof-That-P--NP Claimed Proof That P != NP], Slashdot, August 8 2010.
* [http://www.heise.de/newsticker/meldung/P-NP-moeglicherweise-bewiesen-1052857.html P != NP möglicherweise bewiesen (P != NP possibly proven)], heise online (in German), August 8 2010.
* [http://www.reddit.com/r/programming/comments/cytlp/serious_attempt_that_p_np_originally_posted_in/ Serious attempt that P != NP], Reddit, August 8 2010.
 
====9th August====
 
* [http://www.aolnews.com/surge-desk/article/p-np-wtf-a-short-guide-to-understanding-vinay-deolalikars-mat/19586401 P=NP=WTF?: A Short Guide to Understanding Vinay Deolalikar's Mathematical Breakthrough], Dana Chivvis, AolNews, August 9 2010.
* [http://www.pcworld.com/article/202950/hp_researcher_claims_to_crack_compsci_complexity_conundrum.html HP Researcher Claims to Crack Compsci Complexity Conundrum], Joab Jackson, IDG News, August 9 2010.
* [http://digg.com/general_sciences/PnNP_has_been_proven P≠NP has been proven?], Digg, August 9 2010.
 
====10th August====
 
* [http://www.nature.com/news/2010/100810/full/news.2010.398.html Million-dollar problem cracked?] Geoff Brumfiel, nature news, August 10 2010.
* [http://www.newscientist.com/article/dn19287-p--np-its-bad-news-for-the-power-of-computing.html P ≠ NP? It's bad news for the power of computing] Richard Elwes, New Scientist, August 10 2010.
* [http://blogs.forbes.com/leegomes/2010/08/10/the-non-flaming-of-an-hp-mathematician/?boxes=Homepagechannels The Non-Flaming of an HP Mathematician], Lee Gomes, Digital tools, Forbes blogs, August 10 2010.
* [http://blogs.discovermagazine.com/80beats/2010/08/10/has-the-devilish-math-problem-p-vs-np-finally-been-solved/ Has the Devilish Math Problem “P vs NP” Finally Been Solved?], Andrew Moseman, 80 beats, Discover blogs, August 10 2010.
 
====11th August====
 
* [http://www.telegraph.co.uk/science/science-news/7938238/Computer-scientist-Vinay-Deolalikar-claims-to-have-solved-maths-riddle-of-P-vs-NP.html Computer scientist Vinay Deolalikar claims to have solved maths riddle of P vs NP], Alastair Jamieson, The Daily Telegraph, August 11 2010.
* [http://science.slashdot.org/story/10/08/11/0239209/Possible-Issues-With-the-P--NP-Proof Possible issues with the P!=NP proof], Slashdot, August 11 2010.
* [http://www.bbc.co.uk/news/science-environment-10938302 Million dollar maths puzzle sparks row], Victoria Gill, BBC News, August 11 2010.
* [http://www.infoworld.com/t/code-analysis/computer-science-breakthrough-the-end-p-np-583 Computer science breakthrough: The end of P = NP?], Woody Leonhard, InfoWorld, August 11 2010.
* [http://www.wired.co.uk/news/archive/2010-08/11/p-vs-np-solved Proof offered for P vs NP mathematical puzzle], Duncan Geere, Wired.co.uk, August 11 2010.
* [http://blogs.forbes.com/leegomes/2010/08/11/a-beautiful-sausage/ A beautiful sausage], Lee Gomes, Digital tools, Forbes blogs, August 11 2010.
* [http://www.psychologytoday.com/blog/nature-brain-and-culture/201008/p-np-and-is-academia-inhospitable-big-discoveries P, NP, And Is Academia Inhospitable to Big Discoveries?], Mark Changizi, Psychology Today, August 11 2010.
* [http://www.livemint.com/2010/08/10230113/Indian-scientist-offers-proof.html?h=E Indian scientist offers proof for P=NP riddle], Samanth Subramanian, Livemint.com, August 11 2010.
* [http://www.theregister.co.uk/2010/08/11/the_p_versus_np_problem/ HP boffin claims million-dollar maths prize], Rik Myslewski, The Register, August 11 2010.
* [http://en.wikinews.org/wiki/P%3DNP_math_problem_reported_solved?dpl_id=200299 P=NP math problem reported solved], Wikinews, August 11 2010.
* [http://www.zeit.de/wissen/2010-08/milleniumsproblem-beweis-mathematik Ist ein Jahrtausendproblem der Mathematik gelöst? (Has a millennium problem of mathematics been solved?) (in German)], Christoph Drösser, Zeit online, August 11 2010.
 
====12th August====
* [http://www.hindu.com/2010/08/12/stories/2010081257181300.htm Indian scientist proposes solution to math problem], Shyam Ranganathan, The Hindu, Aug 12, 2010.
* [http://www.i-programmer.info/news/112-theory/1161-update-on-the-proof-of-pnp.html Update on the Proof of P != NP?] Mike James, I Programmer, August 12, 2010.
* [http://science.orf.at/stories/1657054 Milleniumsrätsel der Mathematik gelöst? (Millennium riddle of mathematics solved?) (in German)], Lukas Wieselberg, science.orf.at, August 12, 2010.
* [http://blogs.forbes.com/leegomes/2010/08/12/how-to-think-like-a-mathematician How to think like a mathematician] Lee Gomes, Digital tools, Forbes blogs, August 12, 2010
* [http://arstechnica.com/science/news/2010/08/new-proof-unlocks-answer-to-the-p-versus-np-problemmaybe.ars New proof unlocks answer to the P versus NP problem—maybe], Matt Ford, Ars Technica, August 12, 2010.
* [http://techcrunch.com/2010/08/12/fuzzy-math/ Attempt At P ≠ NP Proof Gets Torn Apart Online], Alexia Tsotsis, Techcrunch, August 12, 2010.
 
====13th August ====
* [http://www.newscientist.com/article/dn19313-tide-turns-against-milliondollar-maths-proof.html Tide turns against million-dollar maths proof], Jacob Aron, New Scientist, August 13, 2010.
 
=== Real-time searches ===
 
* [http://search.twitter.com/search?q=P%20NP Current Twitter search for "P NP"]
* [http://twitter.com/search?q=Deolalikar Current Twitter search for "Deolalikar"]
* [http://news.google.com/news/search?aq=f&pz=1&cf=all&ned=us&hl=en&q=P+NP Current Google News search for "P NP"]
 
=== Other ===
 
* [http://twitter.com/fortnow/statuses/20673954168 Twitter], Lance Fortnow, August 8 2010.
* [http://www.daniel-lemire.com/blog/archives/2010/08/09/how-to-get-everyone-talking-about-your-research/ How to get everyone talking about your research], Daniel Lemire, August 9 2010.
* [http://twitter.com/rrwilliams/status/20741046788 Twitter], Ryan Williams, August 9 2010.
* [http://www.google.com/buzz/114134834346472219368/1vfSCPtRQZf/Vinay-Deolaikar-recently-released-a-102-page Google Buzz], Terence Tao, August 9 2010.
* [http://www.schneier.com/blog/archives/2010/08/p_np_1.html P ≠ NP?], Bruce Schneier, Schneier on Security, August 9 2010.
* [http://blog.vixra.org/2010/08/09/vinay-deolalikar-says-p-%E2%89%A0-np/ Vinay Deolalikar says P ≠ NP], Philip Gibbs, vixra log, August 9 2010.
* [http://cameronneylon.net/blog/p-%E2%89%A0-np-and-the-future-of-peer-review/ P ≠ NP and the future of peer review] Cameron Neylon, Science in the Open, August 10 2010.
* [http://computacioncuantica.blogspot.com/2010/08/una-prueba-de-que-p-np.html Una prueba de que P≠NP? (A proof that P≠NP?) (in Spanish)], Alejandro Díaz-Caro, Computación Cuántica, Aug 10 2010.
* [http://www.jtolds.com/newsletter/2010/8/11/watching-great-mathematics-unfold Watching great mathematics unfold], JT Olds, Aug 10, 2010.
 
* [http://blog.oddhead.com/2010/08/11/betting-on-p-equals-np/ Where is the betting market for P=NP when you need it?], David Pennock, Oddhead Blog, August 11 2010
 
 
Additions to the above list of links are of course very welcome.


== Timeline ==
* August 12: Neil Immerman [[Immerman's letter|describes what appear to be fatal flaws in the argument]].
* August 13: A three-page [http://www.hpl.hp.com/personal/Vinay_Deolalikar/Papers/pnp_synopsis.pdf synopsis of the proof] is posted on Deolalikar's [http://www.hpl.hp.com/personal/Vinay_Deolalikar/ home page].
* August 15: Further difficulties in the proof are summarized in another [http://rjlipton.wordpress.com/2010/08/15/the-p%e2%89%a0np-proof-is-one-week-old/ post of Dick Lipton and Ken Regan].
* August 17: All drafts of the paper are now removed from Deolalikar's [http://www.hpl.hp.com/personal/Vinay_Deolalikar/ home page], leaving only the synopsis and the comment "The preliminary version was meant to solicit feedback from a few researchers as is customarily done. It illustrated the interplay of principles from various areas, which was the major effort in constructing the proof. I have fixed all the issues that were raised about the preliminary version in a revised manuscript (126 pages); clarified some concepts; and obtained simpler proofs of several claims. This revised manuscript has been sent to a small number of researchers.  I will send the manuscript to journal review this week. Once I hear back from the journal as part of due process, I will put up the final version on this website."
* August 20: The paper temporarily resurfaces as an [http://www.hpl.hp.com/techreports/2010/HPL-2010-95.html HP Technical report].
* August 23: The HP Technical report is no longer available.


== Bibliography ==


===Papers===
* [AVV1997] S. Abiteboul, M. Y. Vardi, V. Vianu, "[http://portal.acm.org/citation.cfm?id=256295 Fixpoint logics, relational machines, and computational complexity]", Journal of the ACM (JACM) Volume 44,  Issue 1  (January 1997), 30-56.
* [AM2003] D. Achlioptas, C. Moore, "[http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WJ0-49M05RK-3&_user=10&_coverDate=09%2F30%2F2003&_rdoc=1&_fmt=high&_orig=search&_sort=d&_docanchor=&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=13eae49445b87797b1f90aa42e54b5a5 Almost all graphs with average degree 4 are 3-colorable]", Journal of Computer and System Sciences 67, Issue 2, September 2003, 441-471.
* [VV1986] L. G. Valiant, V. V. Vazirani, "[http://www.cs.princeton.edu/courses/archive/fall05/cos528/handouts/NP_is_as.pdf NP is as easy as detecting unique solutions]", Theoretical Computer Science (North-Holland) 47: 85–93 (1986). doi:10.1016/0304-3975(86)90135-0.
* [V1982] M. Vardi, "[http://www.dcc.uchile.cl/~cgutierr/cursos/CC/vardi-complex-bd.pdf Complexity of Relational Query Languages]", 14th ACM Symposium on Theory of Computing (1982), 137-146.
===Books===
* [http://books.google.com/books?id=jhCM7i0a6UUC Information, Physics, and Computation], Marc Mézard, Andrea Montanari, Oxford University Press, 2009, ISBN-13: 978-0198570837. Covers the interface between statistical physics, theoretical computer science/discrete mathematics, and coding/information theory.


== Other links ==
* [http://kintali.wordpress.com/2009/09/01/relativization-barrier/ Relativization Barrier]
* [http://terrytao.wordpress.com/2009/08/01/pnp-relativisation-and-multiple-choice-exams P=NP, relativisation, and multiple choice exams], Terence Tao
* [http://scottaaronson.com/blog/?p=459 P=NP for dummies], Scott Aaronson, August 15 2010


* There are many other complexity classes besides P and NP. See the [http://qwiki.stanford.edu/wiki/Complexity_Zoo Complexity Zoo]
* Here is a [http://wiki.henryfarrell.net/wiki/index.php/Computer_Science list of blogs in computer science] (including several blogs who have been actively posting on this topic).


* [http://area51.stackexchange.com/proposals/8766/theoretical-computer-science Theoretical Computer Science] Q&A site from Stack Exchange, currently in beta.

Latest revision as of 15:42, 30 September 2011

This is a clearinghouse wiki page for aggregating the following types of items:

# Analysis of Vinay Deolalikar's recent preprint claiming to prove that P != NP;
# News and information about this preprint;
# Background material for the various concepts used in the preprint; and
# Evaluation of the feasibility and limitations of the general strategies used to attack P != NP, including those in the preprint.

It is hosted by the polymath project wiki, but is not a formal polymath project.

Corrections and new contributions to this page are definitely welcome. Of course, any new material should be sourced whenever possible, and remain constructive and objectively neutral; in particular, personal subjective opinions or speculations are to be avoided. This page is derived from an earlier collaborative document created by Suresh Venkatasubramanian.

== Discussion threads ==

The main discussion threads are being hosted on Dick Lipton's blog. Several of the posts were written jointly with Ken Regan.

== The paper ==

These links are taken from Vinay Deolalikar's web page.

Here is the list of updates between the different versions.

== Typos and minor errors ==

Any typos appearing in an earlier draft that no longer appear in the latest draft should be struck out.

* (Second draft, page 31, Definition 2.16): "Perfect man" should be "Perfect map". (via Blake Stacey)
* (Second draft) Some (but not all) of the instances of the <math>O()</math> notation should probably be <math>\Theta()</math> or <math>\Omega()</math> instead, e.g. on pages 4, 9, 16, 28, 33, 57, 68, etc. (via András Salamon)
** Still present in the third draft, e.g. "O(n) Hamming separation between clusters" occurs on page 68 and similarly in several other places.
* (Second draft, page 27) <math>n 2^n</math> independent parameters → <math>n 2^k</math> independent parameters
** Still present in third draft, but now on page 33.
* (draft 2 + e, p.34, Def. 3.8): <math>n</math> → <math>k</math>
** Still present in third draft, but now on page 49 and Def 4.8.
* (Second draft, page 52) <math>\sum C_{li}S_i-k > 0</math> → <math>\sum C_{li}S_i+k > 0</math>
** Still present in third draft, but now on page 70.
* (draft 2 + e, p.10): "We reproduce the rigorously proved picture of the 1RSB ansatz that we will need in Chapter 5." The phrasing makes it sound like we will need the 1RSB ansatz in Chapter 5 instead of saying that it is reproduced in Chapter 5 (which I think is what the author intended). One fix is to move "in Chapter 5" to the beginning of the sentence.
** Still present in third draft, but now on page 16 (and referring to Chapter 6 instead).
* (Third draft, p. 102): "inspite" → "in spite".
* (Third draft, p. 27, first paragraph): "is to obtain" → "to obtain".
* (Third draft, p. 19) Should the neighborhood <math>{\mathcal G}_i</math> be <math>{\mathcal N}_i</math> instead (or vice versa)? Also, this neighborhood contains self-loops and so is technically not a neighborhood system in the sense of Definition 2.3.
* (Third draft, p. 20) Some braces appear to be missing in the first display (just before Definition 2.6).
* (Third draft, p. 24) Some text appears to be obscured by Figure 2.2.
* (Third draft, p. 25) In (2.1) and (2.2) <math>\phi</math> should probably be <math>\phi_3</math>.
* (Third draft, p. 37) Some text appears to be obscured by Figure 3.2.
* (Third draft, p. 43) "The Tarski-Knaster" → "The Tarski-Knaster theorem".
* (Third draft, p. 44) Right parenthesis missing in (4.3) and immediately afterwards.
* (Third draft, p. 99) Some italicized text appears to be obscured by Figure 8.2.

== Proof strategy ==

(Excerpted from this comment of Ken Regan)

Deolalikar has constructed a vocabulary V which apparently obeys the following properties:

# Satisfiability of a k-CNF formula can be expressed by NP-queries over V—in particular, by an NP-query Q over V that ties in to algorithmic properties.
# All P-queries over V can be expressed by FO(LFP) formulas over V.
# NP = P implies Q is expressible by an FO(LFP) formula over V.
# If Q is expressible by an LFP formula over V, then by the algorithmic tie-in, we get a certain kind of polynomial-time LFP-based algorithm.
# Such an algorithm, however, contradicts known statistical properties of randomized k-SAT when k >= 9.

=== An alternate perspective ===

Leonid Gurvits:

<blockquote>
...the discrete probabilistic distributions in the paper can be viewed as tensors, or very special multilinear polynomials. The assumptions “P=NP” somehow gives a (polynomial?) upper bound on the tensor rank. And finally, using known probabilistic results, he gets nonmatching (exponential?) lower bound on the same rank.

If I am right, then this approach is a very clever, in a good sense elementary, way to push the previous algebraic-geometric approaches.
</blockquote>

== Specific issues ==

* '''P=NP only gives polylog parameterizability property of a lift of the k-SAT solution space, rather than that solution space itself'''

If P=NP, then the procedure in Section 8.3 of the third draft does not create a [[polylog parameterizability|polylog parameterizable]] distribution on the solution space of a hard phase k-SAT problem itself, but rather on a <I>lift</I> of that solution space in which many additional variables are added, so that one is now working in <math>\{0,1\}^N</math> for some <math>N=n^{O(1)}</math> rather than in <math>\{0,1\}^n</math>.  Of course, one can forget all but the original n variables and return to a distribution on the solution space by projecting, but it is not clear that the polylog parameterizable property is preserved by this (there is no reason why the directed acyclic graph on the N literals has any meaningful projection onto the original n literals).  In particular, the projected distribution is not in a recursively factorizable form, and the cluster geometry of the lifted solution space could be quite different from that of the original k-SAT problem (e.g. the set of frozen variables could change, as could the definition of a cluster, and number of flips required to get from one cluster to the next).  This issue does not seem to be discussed at all in the paper.

To phrase it another way, consider the following excerpt from the paper describing part of the strategy (Page 92, third draft):

We have embedded our original set of variates into a polynomially larger product space, and obtained a directed graphical model on this larger space. This product space has a nice factorization due to the directed graph structure. This is what we will exploit.

It is clear here that the problem is being lifted from the original space [math]\displaystyle{ \{0,1\}^n }[/math] to a larger space [math]\displaystyle{ \{0,1\}^N }[/math], but then the solution space of (say) k-SAT on [math]\displaystyle{ \{0,1\}^n }[/math] also gets lifted to a solution space which could have a completely different geometry. Given that the proof sketch of P != NP (pages 99-100, third draft) is based on the cluster geometry of the solution space of k-SAT itself, rather than a lift thereof, this is a significant gap in the argument. (Specifically, the variables [math]\displaystyle{ \alpha,\beta,\gamma }[/math] etc. appearing on page 100 are implicitly assumed to belong to the original set of n literals [math]\displaystyle{ x_1,\ldots,x_n }[/math], but due to the embedding into the polynomially larger space, they are in fact likely to instead be scattered among a much larger set [math]\displaystyle{ x_1,\ldots,x_N }[/math] of literals.)
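The following toy example (not from the paper, and using ordinary bounded-parent Bayesian-network factorization merely as a stand-in for the paper's notion of polylog parameterizability) illustrates how factorization over a lifted variable set can fail to survive projection. Let [math]\displaystyle{ \mu }[/math] be the uniform distribution on the solution space [math]\displaystyle{ S = \{x \in \{0,1\}^n : x_1 \oplus \cdots \oplus x_n = 0\} }[/math] of a single parity constraint. Lift by adding the partial sums [math]\displaystyle{ s_i = x_1 \oplus \cdots \oplus x_i }[/math]; the lifted distribution then factorizes along a directed chain in which every variable has at most two parents, since one can sample it by drawing [math]\displaystyle{ x_1,\ldots,x_{n-1} }[/math] uniformly, setting [math]\displaystyle{ s_i = s_{i-1} \oplus x_i }[/math], and finally setting [math]\displaystyle{ x_n = s_{n-1} }[/math]. However, in any directed acyclic factorization of the projected distribution on [math]\displaystyle{ x_1,\ldots,x_n }[/math] alone, the last variable in the topological order is uniform conditioned on any proper subset of the others, yet determined by all of them together, so it must have all [math]\displaystyle{ n-1 }[/math] other variables as parents. Thus the small-parent-set structure enjoyed by the lift is destroyed by the projection; note that this is exactly the XOR-SAT-type solution space that appears in the objections below.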


  • The problem in the LFP section of the proof:
Anuj Dawar raises a point about order-invariance in intermediate stages of the LFP computation:

Another problem I see is the order-invariance. Deolalikar argues (after justifying the inclusion of the successor relation R_E) that *each stage* of any LFP formula defining k-SAT would be order-invariant. While it is clearly true that the LFP formula itself would be order-invariant, this does not mean that each stage would be.

So, it seems to me that this proof shows (at best) that k-SAT is not definable by an LFP each stage of which is order-invariant. This has an easy direct proof. Moreover, it is also provable that XOR-SAT (or indeed parity) is not LFP-definable in this sense.

As phrased by Albert Atserias and Steven Lindell:

Let “polylog-parametrizable” be a property of solution spaces yet to be defined.

  • Claim 1: The solution spaces of problems in P are polylog-parametrizable.
  • Claim 2: The solution space of k-SAT is not polylog-parametrizable.

(Caution: the term "solution spaces" requires some clarification. More precisely, we are looking at satisfiability problems of the form "Given x, does there exist y such that R(x,y)=1?". The solution space here refers not to the set [math]\displaystyle{ \{ x: R(x,y)=1 \hbox{ for some } y \} }[/math] of feasible x, but rather to a "typical" fibre [math]\displaystyle{ \{ y: R(x,y) = 1 \} }[/math] for a "typical" feasible x.)

Now, Claim 1 could be split as follows:

  • Claim 1.1: The solution spaces to problems in Monadic LFP are polylog-parametrizable.
  • Claim 1.2: Every problem in LFP reduces to a problem Monadic LFP by a reduction that preserves polylog-parametrizability of the solution spaces.

From these, Claim 1 would follow from the Immerman-Vardi Theorem (we need successor or a linear order, of course).

Perhaps Claim 1.1 could be true (we know a lot about Monadic LFP, in particular unconditional lower bounds using Hanf-Gaifman locality).

But Claim 1.2 looks very unlikely. The reduction that Deolalikar proposes is standard but definitely does not preserve anything on which we can apply Hanf-Gaifman locality.


Specifically (and this is adapted from Lindell's Critique), in Remark 8.5 on page 85 (third draft) it is asserted that "the [successor] relation... is monadic, and so does not introduce new edges into the Gaifman graph". However, this is only true in the original LFP. When encoding the LFP into a monadic LFP, as is done immediately afterwards in the same remark, the relation becomes a section of a relation of higher arity (as mentioned on page 87 and in Appendix A), using an induction relation. However, this induction relation itself must be monadically encoded, and it may not have bounded degree. In fact, because of successor, it could define an ordering in two of its columns, which would render the Gaifman graph useless (indeed, the Gaifman graph could even collapse to have diameter one).
Hanf-Gaifman locality also does not apply if we work with k-ary LFP directly, without the reduction to the monadic case. The reason is that the Gaifman graph changes after each iteration of the LFP computation, because we need to take care of the new tuples that are added to the k-ary inductively defined relation. In fact, after O(log n) iterations, the inductively defined relation could very well include the transitive closure of the given successor relation R_E, in which case the Gaifman graph is again fully connected (a clique). Once the Gaifman graph is a clique, local = global.
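To make the last point concrete with a standard example (not taken from the paper): the transitive closure of the successor relation is definable by the least fixed point

[math]\displaystyle{ TC(x,y) \equiv \left[\mathbf{lfp}_{R,x,y}\, \big(R_E(x,y) \vee \exists z\, (R(x,z) \wedge R(z,y))\big)\right](x,y), }[/math]

and stage [math]\displaystyle{ i }[/math] of this induction already contains every pair at distance at most [math]\displaystyle{ 2^{i-1} }[/math] along the successor chain. Hence after [math]\displaystyle{ O(\log n) }[/math] stages the inductively defined relation contains the full linear order, and adding these tuples to the structure collapses the Gaifman graph to a clique.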

Neil Immerman has written a very interesting letter to Deolalikar critiquing his proof. With Immerman's permission, this letter is reproduced on the following page: Immerman's letter. Immerman also raises the problem of order-invariance mentioned by Anuj Dawar above.
  • The XORSAT objection: The claimed proof of P!=NP on pages 99-100 of the third draft assumes for contradiction that the extension problem for k-SAT is in P, concludes that the solution space of a random k-SAT problem is polylog parameterizable (in the ppp sense), and then shows that this is in contradiction with the known cluster geometry of that solution space at the hard phase, so that the k-SAT extension problem is in fact not in P. However, if this argument were valid, one could substitute k-XORSAT for k-SAT throughout and obtain the false conclusion that the extension problem for k-XORSAT is similarly not in P. Thus, for the argument to be valid, there must be a specific step which works for k-SAT but fails for k-XORSAT; but no such specific step has been identified. (It is indeed true that k-XORSAT is in ppp, while k-SAT is not believed to be in ppp, but this is not a specific step in the argument.) Indeed, it appears that the argument fails for both k-XORSAT and k-SAT at the same place, namely the projection issue mentioned earlier, in which the variables [math]\displaystyle{ \alpha,\beta,\gamma }[/math] of the ENSP model are erroneously assumed to belong to the initial set of n variables, rather than the polynomially expanded set.

General issues

Issues with LFP

Erich Grädel has an extensive review of finite model theory and descriptive complexity.

There appear to be three issues related to the use of the characterization of P in terms of first order logic, an ordering and a least fixed point operator. All of these are discussed in the Lipton/Regan post, with contributions from David Barrington, Paul Christiano, Lance Fortnow, James Gate, Arthur Milchior, Charanjit Jutla, Julian Bradfield and Steven Lindell.

  • Ordering, or lack thereof: Is the lack of ordering in the logical structures used to define the LFP structure a problem (since parity cannot be expressed without an ordering even with LFP, hence P is not captured without order)?
In Chapter 7 this issue seems to disappear, since he introduces a successor relation over the variables [math]\displaystyle{ x_1\lt \dots\lt x_n\lt \neg x_1\lt \dots\lt \neg x_n }[/math].
If it were possible to express k-SAT in FO(NLFP, without succ) (NLFP = non-deterministic LFP), or in relational-NP as introduced in [AVV1997], then by an extension of the Abiteboul-Vianu theorem it would be enough to prove that k-SAT is not in FO(LFP, without succ). This would avoid the problem of order.
  • The issue of tupling: The paper requires that a certain predicate in the FO(LFP) formula be unary, and forces this by expanding neighborhoods and constructing k-tuples of parameters to act as single parameters. It is not clear how this affects the arguments about the propagation of local neighborhoods.
Albert Atserias says, "...for someone knowing the finite model theory used in the paper, there is a jump in the reasoning that lacks justification. This is the jump from Monadic LFP to full LFP. The only justification for this crucial step seems to be Remark 7.4 on page 70 of the original manuscript (and the vague statement in point 3 of page 49), but this is far from satisfactory. The standard constructions of the so-called canonical structures that Vinay refers to (see the Ebbinghaus and Flum book, page 54) have a Gaifman graph of constant diameter, even without the linear order, due to the generalized equalities that allow the decoding of tuples into their components. Issues along these lines were raised before here and in comment 54 here."
Steven Lindell presents a detailed critique of this problem, with an indication that there might be insurmountable problems. It is reproduced here for completeness.
  • Boundedness and greatest fixed points: Charanjit Jutla has pointed out that the argument in section 4.3 (with which several other people have also had issues) depends on the absence of a greatest fixed point. "This is a usual mistake most people new to fixed-point logics fall prey to. For example, now he has to deal with formulas of the kind [math]\displaystyle{ \nu x (f(y, x) \and g(y, x)). }[/math] Section 4.2 deals with just one least fixed point operator…where his idea is correct. But, in the next section 4.3, where he deals with complex fixed point iterations, he is just hand waving, and possibly way off."
A few comments later, he appears to revise this objection, while bringing up a new issue about the boundedness of the universe relating to the LFP operator.

Issues with phase transitions

A brief synopsis of the terms discussed can be found here

  • The nomenclature of phase transitions: In the statistical physics picture there is not a single phase transition, but rather a set of distinct, well-defined transitions called clustering (equivalently d1RSB), condensation, and freezing (Florent Krzakala and Lenka Zdeborova). In the current version of the paper, properties of d1RSB (clustering) and freezing are mixed up. Following the established definitions, and contrary to some earlier conjectures, it is now agreed that some polynomial-time algorithms work beyond the d1RSB (clustering) or condensation thresholds; graph coloring provides some evidence of this when one compares the performance of algorithms with the statistical physics predictions. The property of the solution space of random k-SAT that the paper is actually using is called freezing. It was conjectured in the statistical physics community (Florent Krzakala, Lenka Zdeborova and Cris Moore) that really hard instances appear in the frozen phase, i.e. when all solutions have non-trivial cores. The existence of such a region was proven rigorously by Achlioptas and Ricci-Tersenghi, and their theorem appears as Theorem 5.1 in the paper.
  • The XOR-SAT objection: The conjecture that frozen variables make a problem hard is, however, restricted to NP-complete problems such as K-SAT and Q-COL. Indeed, a linear problem such as random k-XORSAT also has a clustering transition, frozen variables, etc., and is not easy to solve with most algorithms, but it is of course in P, since one can use Gaussian elimination to exploit the linear structure and solve it in polynomial time (Cris Moore, Alif Wahid, and Lenka Zdeborova). Similar issues might arise in other restricted CSPs which are in P but may exhibit a freezing phase, as pointed out by several other people.

More specifically: in the portion of the paper that is devoted to analyzing the k-SAT problem, what is the first step which works for k-SAT but breaks down completely for k-XORSAT?
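For reference, here is a minimal sketch (our own illustration, not anything from the paper) of why k-XORSAT is in P: every XOR-clause is a linear equation over GF(2), so satisfiability is just solvability of a linear system, decidable by Gaussian elimination. The clause encoding and helper name below are hypothetical choices.

<pre>
# Minimal sketch: deciding satisfiability of a k-XORSAT instance by
# Gaussian elimination over GF(2).  Clause format (a hypothetical choice):
# (vars, b) encodes x_{vars[0]} XOR ... XOR x_{vars[-1]} = b.

def xorsat_satisfiable(num_vars, clauses):
    """Return True iff the XORSAT instance has a satisfying assignment."""
    # Each row is an integer bitmask: bits 0..num_vars-1 hold the
    # coefficients, bit num_vars holds the right-hand side.
    pivots = {}                          # column -> row with its pivot there
    for vars_, b in clauses:
        row = 0
        for v in vars_:
            row ^= 1 << v                # repeated variables cancel over GF(2)
        row |= (b & 1) << num_vars
        for col in range(num_vars):      # reduce against existing pivots
            if not (row >> col) & 1:
                continue
            if col in pivots:
                row ^= pivots[col]
            else:
                pivots[col] = row
                row = 0
                break
        if row == 1 << num_vars:         # the contradictory equation 0 = 1
            return False
    return True

# x0^x1 = 1 and x1^x2 = 1 force x0^x2 = 0, so the first instance is unsatisfiable.
print(xorsat_satisfiable(3, [([0, 1], 1), ([1, 2], 1), ([0, 2], 1)]))  # False
print(xorsat_satisfiable(3, [([0, 1], 1), ([1, 2], 1), ([0, 2], 0)]))  # True
</pre>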

  • The error-correcting codes objection: Initiated in a comment by harrison: If I understand his argument correctly, Deolalikar claims that the polylog-conditional independence means that the solution space of a poly-time computation can’t have Hamming distance [math]\displaystyle{ O(n) }[/math] [presumably [math]\displaystyle{ \Theta(n) }[/math] is meant], as long as there are “sufficiently many solution clusters.” This would preclude the existence of efficiently decodable codes at anything near the Gilbert-Varshamov bound when the minimum Hamming distance is large enough.

Issues with random k-SAT

Complex solution spaces are uncorrelated with time complexity. (The below is a greatly expanded version of a series of Twitter comments by Ryan Williams.) The author tries to use the fact that, for certain parameterizations, the space of satisfying assignments of a random k-SAT instance (the "solution space") has some intriguing structure. If SAT is in P, then SAT can be captured in a certain logic (equivalent to P in some sense). The author claims that anything captured in this logic cannot be used to generate a solution space with this intriguing structure. There are two "meta" objections to this: first, that intriguing structure in the solution space is not sufficient for NP-hardness, and second, that intriguing structure is not necessary for NP-hardness. These objections do not point to a specific place where the proof is wrong, but they do appear to present an obstacle to the general proof method.

How to map satisfiable formulas to trivial problems with exactly the same solution space structure. Very easy problems can also have complicated solution distributions. The following arose from discussions with Jeremiah Blocki. Here is one easy way to (non-uniformly!) map any infinite collection of satisfiable formulas into [math]\displaystyle{ SAT0 }[/math], the set of formulas satisfied by the all-zeroes assignment. The map completely preserves the number of variables, the number of satisfying assignments, the distances between assignments, the clusters--as far as I (Ryan Williams) can tell it preserves every property that has been studied about random k-SAT. Observe [math]\displaystyle{ SAT0 }[/math] is trivially decidable in polynomial time. This should be a barrier to any proof of P vs NP that attempts to argue that the solution space of an NP-hard problem is "too complex" for P. This is the objection which is most germane to the current proposed proof.
The map is simple: for a satisfiable formula [math]\displaystyle{ F }[/math] on [math]\displaystyle{ n }[/math] variables, let [math]\displaystyle{ (A_1,\ldots,A_n) }[/math] be a satisfying assignment for it, and define F' to be the formula obtained by the following procedure: flip the signs of all literals involving [math]\displaystyle{ x_i }[/math] in [math]\displaystyle{ F }[/math] iff [math]\displaystyle{ A_i = 1 }[/math]. Now the space of satisfying assignments to [math]\displaystyle{ F }[/math] is transformed by the map [math]\displaystyle{ \phi_A(x_1,\ldots,x_n) = (x_1 \oplus A_1,\ldots,x_n \oplus A_n) }[/math]. Observe this map completely preserves all distances between assignments, clusters, etc., and that the all-zeroes assignment satisfies [math]\displaystyle{ F' }[/math]. So for any infinite collection [math]\displaystyle{ {C} }[/math] of satisfiable formulas with "hard solution structure" there is also an infinite collection of formulas [math]\displaystyle{ {C'} }[/math] with analogous solution structure, where [math]\displaystyle{ C' }[/math] is a subset of [math]\displaystyle{ SAT0 }[/math]. That is, while the solution spaces of some random formulas may look complicated, it could still be that these formulas constitute a polynomial-time solvable problem for trivial reasons, independently of what the solution space looks like.
The map easily extends to the generalization of k-SAT which consists of pairs [math]\displaystyle{ (F,A) }[/math], where [math]\displaystyle{ F }[/math] is a k-SAT formula, [math]\displaystyle{ A }[/math] is a partial assignment to some variables, and we wish to know if it is possible to complete the partial assignment into a full satisfying assignment. By flipping literals appropriately in [math]\displaystyle{ F }[/math], there is always an instance [math]\displaystyle{ (G,A) }[/math] where [math]\displaystyle{ G }[/math] can be satisfied by setting the rest of the variables to zero, and yet [math]\displaystyle{ G|_{A} }[/math] has an isomorphic solution space to [math]\displaystyle{ F|_{A} }[/math]. So for every instance of "k-SAT-Partial-Assignment" we can find an instance of "SAT0-Partial-Assignment" (which is again a trivial problem) with the same solution space structure.
One can in fact get away with a map that just permutes the names of variables. Take a satisfiable [math]\displaystyle{ F }[/math] and assignment [math]\displaystyle{ A }[/math], and let [math]\displaystyle{ \pi: \{1,\ldots,n\} \rightarrow \{1,\ldots,n\} }[/math] be such that [math]\displaystyle{ A' = (A_{\pi(1)},\ldots,A_{\pi(n)}) \isin L(0^{\star} 1^{\star}) }[/math], i.e., [math]\displaystyle{ A' }[/math] has the form "a bunch of zeroes followed by a bunch of ones". Now to every [math]\displaystyle{ x_i }[/math] occurring in [math]\displaystyle{ F }[/math], replace it with [math]\displaystyle{ x_{\pi(i)} }[/math]. All formulas in the image of this map are satisfied by a (polynomial time) algorithm that tries all [math]\displaystyle{ n }[/math] assignments of the form [math]\displaystyle{ 0^{i} 1^{n-i} }[/math], and again the distances between assignments are preserved.
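Below is a minimal sketch of the literal-flipping map described above. The clause representation (DIMACS-style signed integers) and helper names are our own illustrative choices, not anything from the discussion above.

<pre>
# Sketch of the literal-flipping map: given a satisfiable formula F and a
# satisfying assignment A, flip the sign of every literal on x_i whenever
# A_i = 1.  The resulting formula F' is satisfied by the all-zeroes
# assignment, and y -> y XOR A is a distance-preserving bijection between
# the solution spaces of F and F'.
# Clauses use DIMACS-style literals: +i means x_i, -i means NOT x_i.

def flip_to_sat0(clauses, assignment):
    """assignment maps each variable index i to 0/1 and satisfies the clauses."""
    return [[-lit if assignment[abs(lit)] == 1 else lit for lit in clause]
            for clause in clauses]

def satisfies(clauses, assignment):
    return all(any((lit > 0) == (assignment[abs(lit)] == 1) for lit in clause)
               for clause in clauses)

# Example: F = (x1 or x2) and (not x1 or x3), satisfied by A = {1:1, 2:0, 3:1}.
F = [[1, 2], [-1, 3]]
A = {1: 1, 2: 0, 3: 1}
F_prime = flip_to_sat0(F, A)
print(F_prime)                                   # [[-1, 2], [1, -3]]
print(satisfies(F, A))                           # True
print(satisfies(F_prime, {1: 0, 2: 0, 3: 0}))    # True: all-zeroes satisfies F'
</pre>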
The above arguments look like cheats. Of course they are! We are exploiting the fact that, for arbitrary problems in NP (and therefore P as well), the solution space of that problem is not uniquely well-defined in general. (Hubie Chen raised a similar point in email correspondence.) A solution space for an instance can only be defined with respect to some polynomial time verifier for the problem: this space is the set of witnesses that make the verifier accept on the instance. If you change the verifier, you change the solution space. The usual solution space for SAT arises from the verifier (call it [math]\displaystyle{ V }[/math]) that checks a candidate assignment against a given formula. If [math]\displaystyle{ V }[/math] is used on [math]\displaystyle{ SAT0 }[/math], we get a complex solution space. If we use a sane verifier for [math]\displaystyle{ SAT0 }[/math] instead (the verifier that checks all-zeroes), the solution space becomes trivial. However, if [math]\displaystyle{ P=NP }[/math], then there's also a verifier such that SAT has a trivial solution space, namely the verifier which ignores its witness and just runs a polynomial time algorithm for SAT. The above argument only arises because the notion of solution space was forced to be verifier-independent over P and NP problems (which looks critical to the P vs NP paper).
Note there are also ways to construct infinitely many 2-CNF formulas and XOR-SAT formulas with "complex" distributions, in that the solution space (with respect to the verifier [math]\displaystyle{ V }[/math] above) has many clusters, large distances between clusters, frozen variables--the kind of properties one finds with random k-SAT. For example, for any [math]\displaystyle{ k }[/math] and [math]\displaystyle{ n }[/math] one can easily construct 2-CNF formulas on [math]\displaystyle{ n }[/math] variables with [math]\displaystyle{ 2^k }[/math] satisfying assignments, each of which has [math]\displaystyle{ n }[/math] frozen variables, with every pair at Hamming distance at least [math]\displaystyle{ n/k }[/math]; one such construction is sketched below. (See also "The XOR-SAT Objection" above.)
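Here is one concrete construction realizing such 2-CNF formulas (an assumption on our part; the comment above does not specify a particular construction): split the n variables into k blocks and add implications forcing all variables within a block to be equal. The solutions are exactly the assignments that are constant on each block, giving [math]\displaystyle{ 2^k }[/math] solutions, pairwise Hamming distance at least n/k, and every variable frozen within each (singleton) cluster.

<pre>
from itertools import product

def blockwise_equal_2cnf(n, k):
    """2-CNF forcing equality within each of k equal-size blocks of n variables.

    Clauses are DIMACS-style pairs; the implication (x_u -> x_v) is [-u, v].
    Assumes k divides n.
    """
    size = n // k
    clauses = []
    for b in range(k):
        block = [b * size + i + 1 for i in range(size)]    # 1-based variables
        for u, v in zip(block, block[1:] + block[:1]):     # cycle u -> v -> ... -> u
            clauses.append([-u, v])
            clauses.append([u, -v])
    return clauses

def solutions(n, clauses):
    return [bits for bits in product([0, 1], repeat=n)
            if all(any((l > 0) == (bits[abs(l) - 1] == 1) for l in c) for c in clauses)]

# Brute-force check for a small instance.
n, k = 12, 3
sols = solutions(n, blockwise_equal_2cnf(n, k))
dists = [sum(a != b for a, b in zip(s, t))
         for i, s in enumerate(sols) for t in sols[i + 1:]]
print(len(sols), min(dists))   # 8 solutions (= 2^k), minimum pairwise distance 4 (= n/k)
</pre>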
Satisfiable formulas with 'complex' solution spaces can be efficiently mapped to satisfiable formulas with 'simple' solution spaces. A hard distribution of solutions is not necessary for NP-hardness, either. A weird distribution is not what makes a problem hard; what matters is the representation of that solution space (e.g., as a 3-CNF formula, a 2-CNF formula, etc.). In fact the "hard" case of 3-SAT is the case where there is at most one satisfying assignment, since there is a randomized reduction from 3-SAT to 3-SAT with at most one satisfying assignment (Valiant-Vazirani). This reduction increases the number of clauses and the number of variables, but that doesn't really matter: one can always reduce 3-SAT with a "complex" solution space to 3-SAT with an "easy" solution space, so how can a proof separating P from NP rely on the former? (Note that, if plausible circuit lower bounds hold up, then Valiant-Vazirani can be derandomized to run in deterministic polynomial time.)
Of course, this point on its own does not invalidate Deolalikar's approach. To prove just one NP-complete problem has a complex solution space would be enough if it was also proved that all P problems have simple solution spaces. But it is hard to make sense of what this proposition could really mean, in light of the above.
To summarize, there is little correlation between the "hard structure" of the solution space for instances of some problem, and the NP-hardness of that problem. The "boundary" between P and NP does not lie between "hard" and "easy" satisfiable formulas, but rather it lies between the satisfiable and unsatisfiable formulas. More precisely, it is the difficulty of distinguishing between the satisfiable and unsatisfiable that makes SAT hard, rather than the layout of satisfying assignments in some satisfiable formulas. (It may be that lower bounds on specific kinds of SAT algorithms can be proved using solution space structure, but given the above, it is very hard to believe that a lower bound for all algorithms could possibly work this way.)

Uniformity issues

The following is a lightly edited excerpt from a comment of Russell Impagliazzo:

The general approach of this paper is to try to characterize hard instances of search problems by the structure of their solution spaces. The problem is that this intuition is far too ambitious. It is talking about what makes INSTANCES hard, not about what makes PROBLEMS hard. Since in say, non-uniform models, individual instances or small sets of instances are not hard, this seems to be a dead-end. There is a loophole in this paper, in that he’s talking about the problem of extending a given partial assignment. But still, you can construct artificial easy instances so that the solution space has any particular structure. That solutions fall in well-separated clusters cannot really imply that the search problem is hard. Take any instance with exponentially many solutions and perform a random linear transformation on the solution space, so that solution y is “coded” by Ay. Then the complexity of search hasn’t really changed, but the solution space is well-separated. So the characterization this paper is attempting does not seem to me to be about the right category of object.
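One way to read the "random linear transformation" remark (an interpretative assumption on our part; the comment does not specify the dimensions) is as a random GF(2)-linear lengthening map [math]\displaystyle{ A: \{0,1\}^n \to \{0,1\}^m }[/math] with [math]\displaystyle{ m = O(n) }[/math]: distinct solutions [math]\displaystyle{ y_1 \neq y_2 }[/math] are coded by words at Hamming distance [math]\displaystyle{ \mathrm{wt}(A(y_1 \oplus y_2)) }[/math], which for a random A is, with high probability, a constant fraction of m simultaneously for all pairs. A small empirical sketch:

<pre>
import random

def random_gf2_matrix(m, n, seed=0):
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(n)] for _ in range(m)]

def apply_gf2(A, y):
    return [sum(a * b for a, b in zip(row, y)) % 2 for row in A]

n, m = 12, 48
A = random_gf2_matrix(m, n)

# Since A(y1) XOR A(y2) = A(y1 XOR y2), the distance between the codings of
# any two distinct solutions equals the weight of A(d) for some nonzero d,
# so it suffices to check all 2^n - 1 nonzero difference vectors.
min_wt = min(sum(apply_gf2(A, [(d >> i) & 1 for i in range(n)]))
             for d in range(1, 2 ** n))
print(min_wt, m)   # typically min_wt is a constant fraction of m for these parameters
</pre>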

Locality issues

From a comment of Thomas Schwentick:

There is another issue with the locality in remark 3 of Section 4.3. Moving from singletons to tuples destroys locality: this is because the distance of two tuples is defined on the basis of their participating elements. For example, if two tuples have a common element then their distance is (<=) 1. Thus, even if in the "meta-structure" two tuples are far apart, they can be neighbors because of their singular elements.

See also the tupling issues mentioned in an earlier section.

From a comment of Russell Impagliazzo:

The paper talks about “factoring” the computation into steps that are individually “local”. There are many ways to formalize that steps of computation are indeed local, with FO logic one of them. However, that does not mean that polynomial-time computation is local, because the composition of local operations is not local. I’ve been scanning the paper for any lemma that relates the locality or other weakness of a composition to the locality of the individual steps. I haven’t seen it yet.

Does the argument prove too much?

From a comment of Cristopher Moore:

The proof, if correct, shows that there is a poly-time samplable distribution on which k-SAT is hard on average — that hard instances are easy to generate. In Impagliazzo’s “five possible worlds” of average-case complexity, this puts us at least in Pessiland. If hard _solved_ instances are easy to generate, say through the “quiet planting” models that have been proposed, then we are in Minicrypt or Cryptomania.

Barriers

Any P vs NP proof must deal with the three known barriers described below. The concerns around this paper have, for the most part, not yet reached this stage.

Relativization

Quick overview of Relativization Barrier at Shiva Kintali's blog post

Natural proofs

See Razborov and Rudich, "Natural proofs" Proceedings of the twenty-sixth annual ACM symposium on Theory of computing (1994).

(Some discussion on the uniformity vs. non-uniformity distinction seems relevant here; the current strategy does not, strictly speaking, trigger this barrier so long as it exploits uniformity in an essential way.)

In this blog post, Tim Gowers points out an "illegitimate sibling" of the natural proofs obstruction that seems to apply to any attempt to separate complexity classes by trying to exploit the fact that the solution spaces of problems Q in easy complexity classes enjoy some simple structural property (which for sake of argument we will call "property A"). To begin with, let us use a simple notion of solution space, namely the space of all x for which the problem Q(x) has an affirmative answer (but note that this is not the notion of solution space used in Deolalikar's paper). Then the argument goes as follows. Consider a "pseudorandom polynomial circuit" [math]\displaystyle{ f: \{0,1\}^n \to \{0,1\} }[/math] formed by applying a polynomially large (e.g. [math]\displaystyle{ n^{100} }[/math]) number of reversible logical gate operations to the n-bit input string, and then outputting (say) the first bit of the final state. It is plausible that this function is pseudorandom enough that it is indistinguishable from a random function [math]\displaystyle{ g: \{0,1\}^n \to \{0,1\} }[/math]. In particular, if f obeys property A, then a random function should also obey property A. However, most reasonable candidates for property A constrain the structure of the solution space to the extent that a random function should not obey property A. This strongly suggests that property A cannot hold for all problems in P. To put it another way, any argument that proceeds by distinguishing easy and hard solution spaces should be able to describe a criterion that can distinguish the functions f and g from each other.
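As a toy illustration of the kind of comparison this objection asks for (the gate set, parameters and the crude statistic below are all our own arbitrary choices, not taken from Gowers' post or the paper), one can build a candidate pseudorandom f from random reversible gates and check that, by a simple local statistic, its solution space resembles a random half of the cube, whereas a structured set does not. Of course, passing one crude statistic is no evidence of genuine pseudorandomness; the point is only to show the sort of distinguishing criterion the objection demands.

<pre>
import random

def random_reversible_circuit(n, num_gates, seed=1):
    """A random sequence of reversible CNOT / Toffoli gates on n bits."""
    rng = random.Random(seed)
    gates = []
    for _ in range(num_gates):
        if rng.random() < 0.5:
            gates.append(("CNOT",) + tuple(rng.sample(range(n), 2)))
        else:
            gates.append(("TOFF",) + tuple(rng.sample(range(n), 3)))
    return gates

def f(gates, x, n):
    bits = [(x >> i) & 1 for i in range(n)]
    for g in gates:
        if g[0] == "CNOT":
            bits[g[2]] ^= bits[g[1]]                 # target ^= control
        else:
            bits[g[3]] ^= bits[g[1]] & bits[g[2]]    # target ^= control1 AND control2
    return bits[0]

def avg_neighbours_inside(S, n):
    """Average number of Hamming-distance-1 neighbours a point of S has inside S."""
    in_S = set(S)
    return sum(sum((x ^ (1 << i)) in in_S for i in range(n)) for x in S) / len(S)

n = 10
gates = random_reversible_circuit(n, n ** 2)
pseudo = [x for x in range(2 ** n) if f(gates, x, n) == 1]   # "solution space" of f
rand = random.Random(2).sample(range(2 ** n), len(pseudo))   # random set of equal size
structured = [x for x in range(2 ** n) if x & 1]             # a subcube, for contrast

print(len(pseudo))                          # exactly 2^(n-1): the circuit is a bijection
print(avg_neighbours_inside(pseudo, n))     # typically close to n/2, like ...
print(avg_neighbours_inside(rand, n))       # ... a random half of the cube,
print(avg_neighbours_inside(structured, n)) # whereas the subcube gives n-1
</pre>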

As pointed out by Jun Tarui, Deolalikar's argument is not directly analysing the solution space [math]\displaystyle{ \{x: Q(x)=1\} }[/math] of problems in P, but rather the solution spaces [math]\displaystyle{ \{ y: R(x,y)=1\} }[/math] to instances of satisfiability problems "Given x, does there exist y for which R(x,y)=1" which are in P (and moreover, that the same should hold even when some partial assignment s of y has already been fixed). However, one can extend Gowers' objection to this case also. For instance, let [math]\displaystyle{ R: \{0,1\}^n \times \{0,1\}^{n} \to \{0,1\} }[/math] be a pseudorandom circuit (so x has n bits and y has n bits). Then for n large enough, the satisfiability problem should always have an affirmative answer (and in particular is trivially in P) provided that at least [math]\displaystyle{ 100 \log n }[/math] of the bits of y are left unassigned (this is basically because [math]\displaystyle{ \sum_n 2^{-2^{100 \log n}} 2^{O(n)} \lt \infty }[/math]), and the remaining cases when there are at most [math]\displaystyle{ 100 \log n }[/math] bits to assign can be verified by brute force. On the other hand the solution spaces [math]\displaystyle{ \{ y: R(x,y)=1\} }[/math] are still indistinguishable from random subsets of [math]\displaystyle{ \{0,1\}^n }[/math].
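To spell out the heuristic calculation (treating the pseudorandom R as a uniformly random function): fix n, an input x, and a partial assignment s leaving [math]\displaystyle{ u \ge 100\log n }[/math] bits of y free. Each of the [math]\displaystyle{ 2^u }[/math] completions fails independently with probability 1/2, so

[math]\displaystyle{ \Pr[\hbox{no completion satisfies } R] = 2^{-2^u} \le 2^{-2^{100 \log n}} = 2^{-n^{100}}. }[/math]

There are at most [math]\displaystyle{ 2^n \cdot 3^n = 2^{O(n)} }[/math] pairs (x, s), since each bit of y is 0, 1, or free, so by the union bound the probability that some such pair has no satisfying completion is at most [math]\displaystyle{ 2^{O(n)} 2^{-n^{100}} }[/math], giving the convergent series quoted above; by Borel-Cantelli only finitely many n have any bad pair. The remaining instances, with at most [math]\displaystyle{ 100 \log n }[/math] free bits, can be checked by brute force over [math]\displaystyle{ 2^{100 \log n} = n^{100} }[/math] completions, which is polynomial time.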

Algebrization

See Aaronson and Wigderson, "Algebrization: A New Barrier in Complexity Theory" ACM Transactions on Computation Theory (2009).

The paper is all about the local properties of a specific NP-complete problem (k-SAT), and for that reason, I don't think relativization is relevant. Personally, I'm more interested in why the argument makes essential use of uniformity (which is apparently why it's supposed to avoid Razborov-Rudich). (Scott Aaronson)

Average to Worst-case?

A possible new barrier implied by the discussion here, framed by Terry Tao:

If nothing else, this whole experience has highlighted a “philosophical” barrier to P != NP which is distinct from the three “hard” barriers of relativisation, natural proofs, and algebraisation, namely the difficulty in using average-case (or “global”) behaviour to separate worst-case complexity, due to the existence of basic problems (e.g. k-SAT and k-XORSAT) which are expected to have similar average case behaviour in many ways, but completely different worst case behaviour. (I guess this difficulty was well known to the experts, but it is probably good to make it more explicit.)

Note that "average case behaviour" here refers to the structure of the solution space, as opposed to the difficulty of solving a random instance of the problem.

Followup by Ryan Williams:

It is a great idea to try to formally define this barrier and develop its properties. I think the “not necessary” part is pretty well-understood, thanks to Valiant-Vazirani. But the “not sufficient” part, the part relevant to the current paper under discussion, still needs some more rigor behind it. As I related to Lenka Zdeborova, it is easy to construct, for every n, a 2-CNF formula on n variables which has many “clusters” of solutions, where each cluster has large hamming distance from each other, and within the cluster there are a lot of satisfying assignments. But one would like to say something stronger, e.g. “for any 3-CNF formula with solution space S, that space S can be very closely simulated by the solution space S’ for some CSP instance that is polytime solvable”.

See also the previous section on issues with random k-SAT for closely related points.

Terminology


Online reactions

Additions to the list are of course very welcome.

Theory blogs

Media and aggregators

8th August

9th August

10th August

11th August

12th August

13th August

15th August

16th August

17th August

19th August

20th August

23rd August

9th September

Real-time searches

Other

Timeline

Bibliography

Papers

Books

  • Information, Physics, and Computation, Marc Mézard and Andrea Montanari, Oxford University Press, 2009, ISBN-13: 978-0198570837. Covers the interface between statistical physics, theoretical computer science/discrete mathematics, and coding/information theory.

Other links

Further reading

Given the current interest in the subject matter discussed on this page, it would be good to start collecting a list of references where people can learn more about such topics. Please add liberally to the list below.

  • There are many other complexity classes besides P and NP. See the Complexity Zoo