Deolalikar P vs NP paper

This is a clearinghouse wiki page for aggregating the following types of items:

  1. Analysis of Vinay Deolalikar's recent preprint claiming to prove that P != NP;
  2. News and information about this preprint;
  3. Background material for the various concepts used in the preprint; and
  4. Evaluation of the feasibility and limitations of the general strategies used to attack P != NP, including those in the preprint.

It is hosted by the polymath project wiki, but is not a formal polymath project.

Corrections and new contributions to this page are definitely welcome. Of course, any new material should be sourced whenever possible, and remain constructive and objectively neutral; in particular, personal subjective opinions or speculations are to be avoided. This page is derived from an earlier collaborative document created by Suresh Venkatasubramanian.

Discussion threads

The main discussion threads are being hosted on Dick Lipton's blog. Several of the posts were written jointly with Ken Regan.

The paper

These links are taken from Vinay Deolalikar's web page.

Here is the list of updates between the different versions.

Typos and minor errors

Any typos appearing in an earlier draft that no longer appear in the latest draft should be struck out.

  • (Second draft, page 31, Definition 2.16): "Perfect man" should be "Perfect map". (via Blake Stacey)
  • (Second draft) Some (but not all) of the instances of the [math]\displaystyle{ O() }[/math] notation should probably be [math]\displaystyle{ \Theta() }[/math] or [math]\displaystyle{ \Omega() }[/math] instead, e.g. on pages 4, 9, 16, 28, 33, 57, 68, etc. (via András Salamon)
    • Still present in the third draft, e.g. "O(n) Hamming separation between clusters" occurs on page 68 and similarly in several other places.
  • (Second draft, page 27) [math]\displaystyle{ n 2^n }[/math] independent parameters → [math]\displaystyle{ n 2^k }[/math] independent parameters
    • Still present in third draft, but now on page 33.
  • (draft 2 + e, p.34, Def. 3.8): [math]\displaystyle{ n }[/math] → [math]\displaystyle{ k }[/math]
    • Still present in third draft, but now on page 49 and Def 4.8.
  • (Second draft, page 52) [math]\displaystyle{ \sum C_{li}S_i-k\gt 0 }[/math] → [math]\displaystyle{ \sum C_{li}S_i+k\gt 0 }[/math]
    • Still present in third draft, but now on page 70.
  • (draft 2 + e, p.10): "We reproduce the rigorously proved picture of the 1RSB ansatz that we will need in Chapter 5." The phrasing makes it sound like we will need the 1RSB ansatz in Chapter 5 instead of saying that it is reproduced in Chapter 5 (which I think is what the author intended). One fix is to move "in Chapter 5" to the beginning of the sentence.
    • Still present in third draft, but now on page 16 (and referring to Chapter 6 instead).
  • (Third draft, p. 102): "inspite" → "in spite"

Proof strategy

(Excerpted from this comment of Ken Regan)

Deolalikar has constructed a vocabulary V which apparently obeys the following properties:

  1. Satisfiability of a k-CNF formula can be expressed by NP-queries over V—in particular, by an NP-query Q over V that ties in to algorithmic properties.
  2. All P-queries over V can be expressed by FO(LFP) formulas over V.
  3. NP = P implies Q is expressible by an FO(LFP) formula over V.
  4. If Q is expressible by an LFP formula over V, then by the algorithmic tie-in, we get a certain kind of polynomial-time LFP-based algorithm.
  5. Such an algorithm, however, contradicts known statistical properties of randomized k-SAT when k >= 9.
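In compressed form (our paraphrase of the five steps above), the strategy is a proof by contradiction:

[math]\displaystyle{ \text{P} = \text{NP} \;\Rightarrow\; Q \in \text{FO(LFP)} \text{ over } V \;\Rightarrow\; \text{a poly-time LFP-based algorithm for } k\text{-SAT} \;\Rightarrow\; \text{contradiction with the statistics of random } k\text{-SAT for } k \ge 9, }[/math]

and hence P ≠ NP.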

An alternate perspective

Leonid Gurvits:

...the discrete probabilistic distributions in the paper can be viewed as tensors, or very special multilinear polynomials. The assumption “P=NP” somehow gives a (polynomial?) upper bound on the tensor rank. And finally, using known probabilistic results, he gets a nonmatching (exponential?) lower bound on the same rank.

If I am right, then this approach is a very clever, and in a good sense elementary, way to push the previous algebraic-geometric approaches further.

Specific issues

  • (Adapted from Lindell's Critique) On Remark 8.5 on page 85 (third draft), it is asserted that "the [successor] relation... is monadic, and so does not introduce new edges into the Gaifman graph". However, this is only true in the original LFP. When encoding the LFP into a monadic LFP as is done immediately afterwards in the same remark, the relation becomes a section of a relation of higher arity (as mentioned in page 87 and Appendix A), using an induction relation. However, this induction relation itself must be monadically encoded, and it may not have bounded degree. In fact, because of successor, it could define an ordering in two of its columns, which would render the Gaifman graph useless (indeed, the Gaifman graph could even collapse to have diameter one).

General issues

Issues with LFP

Erich Grädel has an extensive review of finite model theory and descriptive complexity.

There appear to be three issues related to the use of the characterization of P in terms of first-order logic, an ordering, and a least fixed point operator. All of these are discussed in the Lipton/Regan post, with contributions from David Barrington, Paul Christiano, Lance Fortnow, James Gate, Arthur Milchior, Charanjit Jutla, Julian Bradfield and Steven Lindell.

  • Ordering, or lack thereof: Is the lack of ordering in the logical structures used to define the LFP a problem? (Parity cannot be expressed in FO(LFP) without an ordering, hence P is not captured without order.)
In Chapter 7 this issue seems to disappear, since he introduces a successor relation over the variables [math]\displaystyle{ x_1\lt \dots\lt x_n\lt \neg x_1\lt \dots\lt \neg x_n }[/math].
If it were possible to express k-SAT in FO(NLFP, without succ) (NLFP = non-deterministic LFP), or in relational-NP as introduced in [AVV1997], then by an extension of the Abiteboul-Vianu theorem it would be enough to prove that k-SAT is not in FO(LFP, without succ). This would avoid the problem of the order.
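For background, the key fact here is the Immerman-Vardi theorem: [math]\displaystyle{ \text{FO(LFP)} = \text{P} }[/math] holds on classes of finite structures equipped with a linear order (or successor), whereas on unordered structures FO(LFP) cannot even express parity, which is trivially in P. This is why the presence or absence of an order relation is critical to the paper's use of descriptive complexity.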
  • The issue of tupling: The paper requires that a certain predicate in the FO(LFP) formula be unary, and forces this by expanding neighborhoods and constructing k-tuples of parameters to act as single parameters. It is not clear how this affects the arguments about the propagation of local neighborhoods.
Albert Atserias says, "...for someone knowing the finite model theory used in the paper, there is a jump in the reasoning that lacks justification. This is the jump from Monadic LFP to full LFP. The only justification for this crucial step seems to be Remark 7.4 in page 70 of the original manuscript (and the vague statement in point 3 of page 49), but this is far from satisfactory. The standard constructions of the so-called canonical structures that Vinay refers to (see Ebbinghaus and Flum book in page 54) have a Gaifman graph of constant diameter, even without the linear order, due to the generalized equalities that allow the decoding of tuples into its components." Issues along these lines were raised before here and in comment 54 here.
Steven Lindell presents a detailed critique of this problem, with an indication that there might be insurmountable problems. It is reproduced here for completeness.
  • Boundedness and greatest fixed points: Charanjit Jutla has pointed out that the argument in section 4.3 (with which several other people have also had issues) depends on the absence of a greatest fixed point. "This is a usual mistake most people new to fixed-point logics fall prey to. For example, now he has to deal with formulas of the kind [math]\displaystyle{ \nu x (f(y, x) \and g(y, x)). }[/math] Section 4.2 deals with just one least fixed point operator…where his idea is correct. But, in the next section 4.3, where he deals with complex fixed point iterations, he is just hand waving, and possibly way off."
A few comments later, he appears to revise this objection, while bringing up a new issue about the boundedness of the universe relating to the LFP operator.
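For reference, least and greatest fixed points of a monotone operator are dual to each other (Knaster-Tarski), which is why formulas mixing the two behave very differently from a single least fixed point: [math]\displaystyle{ \mu X.\varphi(X)=\bigcap\{X : \varphi(X)\subseteq X\} }[/math], [math]\displaystyle{ \nu X.\varphi(X)=\bigcup\{X : X\subseteq\varphi(X)\} }[/math], and [math]\displaystyle{ \nu X.\varphi(X) \equiv \neg\,\mu X.\,\neg\varphi(\neg X) }[/math].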

Issues with phase transitions

A brief synopsis of the terms discussed can be found here.

  • The nomenclature of phase transitions: In the statistical physics picture, there is not a single phase transition, but rather a set of distinct, well-defined transitions called clustering (equivalently d1RSB), condensation, and freezing (Florent Krzakala and Lenka Zdeborova). In the current version of the paper, properties of d1RSB (clustering) and freezing are mixed up. Following the established definitions, and contrary to some earlier conjectures, it is now agreed that some polynomial algorithms work beyond the d1RSB (clustering) or condensation thresholds; graph coloring provides some evidence of this when one compares the performance of algorithms with the statistical physics predictions. The property of the solution space of random k-SAT that the paper is actually using is called freezing. It was conjectured in the statistical physics community (Florent Krzakala, Lenka Zdeborova and Cris Moore) that really hard instances appear in the frozen phase, i.e. when all solutions have non-trivial cores. The existence of such a region was proven rigorously by Achlioptas and Ricci-Tersenghi, and their theorem appears as Theorem 5.1 in the paper.
  • The XOR-SAT objection: The conjecture that frozen variables make a problem hard is, however, restricted to NP-complete problems such as k-SAT and Q-COL. Indeed, a linear problem such as random k-XORSAT also has a clustering transition, frozen variables, etc., and is not easy to solve with most algorithms, but it is of course in P, as one can use Gaussian elimination and exploit the linear structure to solve it in polynomial time (Cris Moore, Alif Wahid, and Lenka Zdeborova). Similar problems might exist in other restricted CSPs which are in P but may exhibit a frozen phase, as pointed out by several other people.

More specifically: in the portion of the paper that is devoted to analyzing the k-SAT problem, what is the first step which works for k-SAT but breaks down completely for k-XORSAT?
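To make the objection concrete, here is a minimal sketch in Python (ours, not code from any of the discussants) of why k-XORSAT is in P: each XOR clause is one linear equation over GF(2), so satisfiability is just the consistency of a linear system, decidable by Gaussian elimination in polynomial time no matter how clustered or frozen the solution space is.

  # Decide satisfiability of a k-XORSAT instance by Gaussian elimination over
  # GF(2). Each clause (variables, parity) encodes x_{i1} ^ ... ^ x_{ik} = parity;
  # rows of the linear system are stored as bitmasks.
  def xorsat_satisfiable(clauses):
      pivots = {}  # leading bit -> (row, parity)
      for variables, parity in clauses:
          row = 0
          for v in variables:
              row ^= 1 << v
          for bit in sorted(pivots, reverse=True):  # reduce against known pivots
              if row >> bit & 1:
                  prow, pparity = pivots[bit]
                  row ^= prow
                  parity ^= pparity
          if row == 0:
              if parity == 1:  # derived the equation 0 = 1: inconsistent
                  return False
          else:
              pivots[row.bit_length() - 1] = (row, parity)
      return True

  # x0^x1^x2 = 1, x1^x2^x3 = 0, x0^x3 = 1 is satisfiable (e.g. x0=1, rest 0):
  print(xorsat_satisfiable([([0, 1, 2], 1), ([1, 2, 3], 0), ([0, 3], 1)]))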

  • The error-correcting codes objection: Initiated in a comment by harrison: If I understand his argument correctly, Deolalikar claims that the polylog-conditional independence means that the solution space of a poly-time computation can’t have Hamming distance O(n) [presumably [math]\displaystyle{ \Theta(n) }[/math] is meant], as long as there are “sufficiently many solution clusters.” This would preclude the existence of efficiently decodable codes at anything near the Gilbert-Varshamov bound when the minimum Hamming distance is large enough.
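For reference, the Gilbert-Varshamov bound guarantees, for any relative distance [math]\displaystyle{ \delta \lt 1/2 }[/math], binary codes of rate [math]\displaystyle{ R \ge 1 - H(\delta) }[/math], where [math]\displaystyle{ H(\delta) = -\delta\log_2\delta - (1-\delta)\log_2(1-\delta) }[/math] is the binary entropy function. Such a code has [math]\displaystyle{ 2^{\Theta(n)} }[/math] codewords at pairwise Hamming distance [math]\displaystyle{ \Theta(n) }[/math], which is exactly the kind of numerous, well-separated "clusters" that the claimed consequence of polylog-conditional independence would forbid for poly-time computations.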

Issues with random k-SAT

  1. Complex solution spaces are uncorrelated with time complexity. (The following is a greatly expanded version of a series of Twitter comments by Ryan Williams.) The author tries to use the fact that for certain distributions of random k-SAT, the solution space has a "hard structure": for certain parameterizations, the space of satisfying assignments to a random k-SAT instance has some intriguing structure. If SAT is in P, then SAT can be captured in a certain logic (equivalent to P in some sense), and the author claims that anything captured in this logic can't have a solution space with this intriguing structure. There are two "meta" objections to this: one is that intriguing structure in the solution space is not sufficient for NP-hardness; the other is that intriguing structure is not necessary for NP-hardness. These objections don't actually point to a place where the proof is wrong, but they do appear to give an obstacle to the general proof method.
    1. Polytime solvable problems (such as perfect matching on random graphs) can also have complicated solution distributions. In fact, it is not hard to design 2-SAT formulas (in this case not random, but specifically designed ones) so that they have exponentially many clusters of solutions, each cluster being "far" from the others, etc.; see the sketch after this list. That is, the fact that random k-SAT has a "hard" distribution of solutions does not seem relevant for proving a time lower bound on k-SAT: it is not sufficient to use a problem with a hard distribution of solutions if you're separating P from NP. This is the objection which seems most germane to the current proposed proof, since it opposes the claim that "anything in P can't have a solution space with this intriguing structure". It appears there must be some error in either the translation to this logic, or the analysis of solution spaces that this logic permits.
    2. Moreover, it is also worth pointing out that a hard distribution of solutions is not necessary for NP-hardness, either. A weird distribution is not what makes a problem hard; what matters is the representation of that solution space (e.g., a 3-CNF formula, a 2-CNF formula, etc.). The "hard" case of 3-SAT is the case where there is at most one satisfying assignment, since there is a randomized reduction from 3-SAT to 3-SAT with at most one satisfying assignment (Valiant-Vazirani). This reduction increases the number of clauses and the number of variables, but that does not really matter. The point is that you can always reduce 3-SAT with a "complex" solution space to one with an "easy" solution space, so how can a proof separating P from NP rely on the former? (Note that, if plausible circuit lower bounds hold up, then Valiant-Vazirani can be derandomized to run in deterministic polynomial time.) To summarize, there is essentially no correlation between the "hard structure" of the solution space for instances of some problem and the NP-hardness of that problem.
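Here is the sketch promised in item 1 above: a construction of ours (following Ryan Williams's comment; not code from the discussion) of a 2-SAT instance, solvable in polynomial time, whose solution space splits into exponentially many well-separated clusters. Forcing equality within m blocks of k variables each yields [math]\displaystyle{ 2^m }[/math] solutions at pairwise Hamming distance at least k; taking [math]\displaystyle{ m = k = \sqrt{n} }[/math] gives exponentially many clusters at mutual distance [math]\displaystyle{ \sqrt{n} }[/math].

  from itertools import product, combinations

  # Build a 2-SAT instance on num_blocks * block_size variables in which all
  # variables inside a block are forced to be equal. A literal is (var, negated).
  def build_instance(num_blocks, block_size):
      clauses = []
      for b in range(num_blocks):
          block = list(range(b * block_size, (b + 1) * block_size))
          for u, v in zip(block, block[1:]):
              clauses.append(((u, True), (v, False)))  # (!u or v), i.e. u -> v
              clauses.append(((u, False), (v, True)))  # (u or !v), i.e. v -> u
      return num_blocks * block_size, clauses

  def satisfies(assignment, clauses):
      return all(any(assignment[var] != neg for var, neg in clause)
                 for clause in clauses)

  n, clauses = build_instance(num_blocks=4, block_size=3)
  solutions = [a for a in product([False, True], repeat=n)
               if satisfies(a, clauses)]
  print(len(solutions))  # 2^4 = 16 solutions, one per pattern of block values
  print(min(sum(x != y for x, y in zip(s, t))
            for s, t in combinations(solutions, 2)))  # 3 = block_size

Each cluster here is a single assignment; padding blocks with unconstrained variables would fatten the clusters without reducing their separation, and the instance remains 2-SAT, hence polytime solvable.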

Uniformity issues

The following is a lightly edited excerpt from a comment of Russell Impagliazzo:

The general approach of this paper is to try to characterize hard instances of search problems by the structure of their solution spaces. The problem is that this intuition is far too ambitious. It is talking about what makes INSTANCES hard, not about what makes PROBLEMS hard. Since in say, non-uniform models, individual instances or small sets of instances are not hard, this seems to be a dead-end. There is a loophole in this paper, in that he’s talking about the problem of extending a given partial assignment. But still, you can construct artificial easy instances so that the solution space has any particular structure. That solutions fall in well-separated clusters cannot really imply that the search problem is hard. Take any instance with exponentially many solutions and perform a random linear transformation on the solution space, so that solution y is “coded” by Ay. Then the complexity of search hasn’t really changed, but the solution space is well-separated. So the characterization this paper is attempting does not seem to me to be about the right category of object.
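A sketch of the transformation Impagliazzo describes (our illustration; the dimensions are arbitrary): coding each solution y as Ay for a random invertible matrix A over GF(2) leaves the search problem essentially unchanged, since A is easy to invert, yet two solutions at Hamming distance 1 are typically mapped to points at distance about n/2.

  import random

  # Gaussian elimination over GF(2); mutates its argument, returns the rank.
  def rank_gf2(rows):
      rank, n = 0, len(rows[0])
      for col in range(n):
          pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
          if pivot is None:
              continue
          rows[rank], rows[pivot] = rows[pivot], rows[rank]
          for r in range(len(rows)):
              if r != rank and rows[r][col]:
                  rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
          rank += 1
      return rank

  # Rejection-sample an invertible matrix (success probability ~ 0.29 over GF(2)).
  def random_invertible(n, rng):
      while True:
          A = [[rng.randrange(2) for _ in range(n)] for _ in range(n)]
          if rank_gf2([row[:] for row in A]) == n:
              return A

  def apply_gf2(A, y):
      return [sum(a & b for a, b in zip(row, y)) % 2 for row in A]

  rng = random.Random(0)
  n = 64
  A = random_invertible(n, rng)
  y1 = [rng.randrange(2) for _ in range(n)]
  y2 = y1[:]
  y2[0] ^= 1  # a neighbouring solution at Hamming distance 1
  print(sum(a != b for a, b in zip(apply_gf2(A, y1), apply_gf2(A, y2))))
  # A(y1) xor A(y2) equals column 0 of A, whose weight is ~ n/2 in expectation.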

Locality issues

From a comment of Thomas Schwentick:

There is another issue with the locality in remark 3 of Section 4.3. Moving from singletons to tuples destroys locality: this is because the distance between two tuples is defined on the basis of their participating elements. For example, if two tuples have a common element then their distance is at most 1. Thus, even if in the "meta-structure" two tuples are far apart, they can be neighbors because of their individual elements.
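Concretely, with the usual convention that the distance between tuples is the minimum over their components, [math]\displaystyle{ d(\bar a, \bar b) = \min_{i,j}\, d(a_i, b_j) }[/math], two tuples sharing an element are at distance 0 (in particular at most 1, as in the comment), no matter how far apart their remaining components lie.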

See also the tupling issues mentioned in an earlier section.

From a comment of Russell Impagliazzo:

The paper talks about “factoring” the computation into steps that are individually “local”. There are many ways to formalize that steps of computation are indeed local, with FO logic one of them. However, that does not mean that polynomial-time computation is local, because the composition of local operations is not local. I’ve been scanning the paper for any lemma that relates the locality or other weakness of a composition to the locality of the individual steps. I haven’t seen it yet.
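A standard toy illustration of this point (ours, not from the paper): each round below inspects only the immediate out-neighborhood of the vertices reached so far, yet iterating the step to a fixed point computes reachability, a global property. This mirrors how an LFP induction can build non-local relations out of local first-order stages.

  # Each step is "local": a vertex joins the set when it has an edge from a
  # current member. The fixed point (reached after at most n rounds) is the set
  # of vertices reachable from `source`, which is not a local property.
  def reachable(n, edges, source):
      adj = {v: set() for v in range(n)}
      for u, v in edges:
          adj[u].add(v)
      reached = {source}
      while True:
          frontier = {w for u in reached for w in adj[u]} - reached
          if not frontier:  # fixed point: no local step applies
              return reached
          reached |= frontier

  print(reachable(5, [(0, 1), (1, 2), (3, 4)], 0))  # {0, 1, 2}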

Does the argument prove too much?

From a comment of Cristopher Moore:

The proof, if correct, shows that there is a poly-time samplable distribution on which k-SAT is hard on average — that hard instances are easy to generate. In Impagliazzo’s “five possible worlds” of average-case complexity, this puts us at least in Pessiland. If hard _solved_ instances are easy to generate, say through the “quiet planting” models that have been proposed, then we are in Minicrypt or Cryptomania.

Barriers

Any P vs NP proof must deal with the three known barriers described below. The concerns around this paper have, for the most part, not yet reached this stage.

Relativization

A quick overview of the relativization barrier can be found at Shiva Kintali's blog post.

Natural proofs

See Razborov and Rudich, "Natural proofs", Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing (1994).

(Some discussion on the uniformity vs. non-uniformity distinction seems relevant here; the current strategy does not, strictly speaking, trigger this barrier so long as it exploits uniformity in an essential way.)

Algebrization

See Aaronson and Wigderson, "Algebrization: A New Barrier in Complexity Theory", ACM Transactions on Computation Theory (2009).

The paper is all about the local properties of a specific NP-complete problem (k-SAT), and for that reason, I don't think relativization is relevant. Personally, I'm more interested in why the argument makes essential use of uniformity (which is apparently why it's supposed to avoid Razborov-Rudich). (Scott Aaronson)

Average to Worst-case?

A possible new barrier implied by the discussion here, framed by Terry Tao:

If nothing else, this whole experience has highlighted a “philosophical” barrier to P != NP which is distinct from the three “hard” barriers of relativisation, natural proofs, and algebraisation, namely the difficulty in using average-case (or “global”) behaviour to separate worst-case complexity, due to the existence of basic problems (e.g. k-SAT and k-XORSAT) which are expected to have similar average case behaviour in many ways, but completely different worst case behaviour. (I guess this difficulty was well known to the experts, but it is probably good to make it more explicit.)

Note that "average case behaviour" here refers to the structure of the solution space, as opposed to the difficulty of solving a random instance of the problem.

Followup by Ryan Williams:

It is a great idea to try to formally define this barrier and develop its properties. I think the “not necessary” part is pretty well-understood, thanks to Valiant-Vazirani. But the “not sufficient” part, the part relevant to the current paper under discussion, still needs some more rigor behind it. As I related to Lenka Zdeborova, it is easy to construct, for every n, a 2-CNF formula on n variables which has many “clusters” of solutions, where each cluster has large Hamming distance from the others, and within each cluster there are a lot of satisfying assignments. But one would like to say something stronger, e.g. “for any 3-CNF formula with solution space S, that space S can be very closely simulated by the solution space S’ of some CSP instance that is polytime solvable”.

See also the previous section on issues with random k-SAT for closely related points.

Terminology

Online reactions

Theoretical computer science blogs

Media and aggregators

8th August

9th August

10th August

11th August

12th August

Real-time searches

Other


Additions to the above list of links are of course very welcome.

Timeline

Bibliography

Other links

Further reading

Given the current interest in the subject matters discussed on this page, it would be good to start collecting a list of references where people can learn more about such topics. Please add liberally to the list below.

  • There are many other complexity classes besides P and NP. See the Complexity Zoo