Deolalikar P vs NP paper

From Polymath Wiki

Note: This is currently an UNOFFICIAL page on Deolalikar's P!=NP paper, and is not yet affiliated with a Polymath project.

This is a clearinghouse wiki page for the analysis of Vinay Deolalikar's recent preprint claiming to prove that P != NP, and to aggregate various pieces of news and information about this paper. Corrections and new contributions to this page are definitely welcome. Of course, any new material should be sourced whenever possible, and remain constructive and objectively neutral; in particular, personal subjective opinions or speculations are to be avoided. This page is derived from an earlier collaborative document created by Suresh Venkatasubramanian.

For the latest discussion on the technical points of the paper, see this thread of Dick Lipton and Ken Regan. For meta-discussion of this wiki (and other non-mathematical or meta-mathematical issues), see this thread of Suresh Venkatasubramanian.

The paper

These links are taken from Vinay Deolalikar's web page.

  1. First draft, Aug 6, 2010
  2. Second draft, Aug 9, 2010. File removed, Aug 10, 2010.
  3. Draft 2 + ε, Aug 9, 2010.

Typos and minor errors

  • (Second draft, page 31, Definition 2.16): "Perfect man" should be "Perfect map". (via Blake Stacey)
  • (Second draft) Some (but not all) of the instances of the [math]\displaystyle{ O() }[/math] notation should probably be [math]\displaystyle{ \Theta() }[/math] or [math]\displaystyle{ \Omega() }[/math] instead, e.g. on pages 4, 9, 16, 28, 33, 57, 68, etc. (via András Salamon)
  • (Second draft, page 27) [math]\displaystyle{ n 2^n }[/math] independent parameters → [math]\displaystyle{ n 2^k }[/math] independent parameters
  • (Draft 2 + ε, page 34, Definition 3.8): [math]\displaystyle{ n }[/math] → [math]\displaystyle{ k }[/math]
  • (Second draft, page 52) [math]\displaystyle{ \sum C_{li}S_i-k\gt 0 }[/math][math]\displaystyle{ \sum C_{li}S_i+k\gt 0 }[/math]

Proof strategy

(Excerpted from this comment of Ken Regan)

Deolalikar has constructed a vocabulary V which apparently obeys the following properties:

  1. Satisfiability of a k-CNF formula can be expressed by NP-queries over V—in particular, by an NP-query Q over V that ties in to algorithmic properties.
  2. All P-queries over V can be expressed by FO(LFP) formulas over V.
  3. NP = P implies Q is expressible by an FO(LFP) formula over V.
  4. If Q is expressible by an LFP formula over V, then by the algorithmic tie-in, we get a certain kind of polynomial-time LFP-based algorithm.
  5. Such an algorithm, however, contradicts known statistical properties of random k-SAT when [math]\displaystyle{ k \geq 9 }[/math].
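Step 5 refers to the random k-SAT ensemble: formulas of m clauses drawn uniformly over n variables, studied at a fixed clause density α = m/n. Purely as an illustration of that ensemble (the function names and interface below are my own, not taken from the paper), here is a minimal sketch of sampling such a formula and checking satisfiability by brute force:

```python
import itertools
import random

def random_kcnf(n, m, k, seed=0):
    """Sample a random k-CNF formula: m clauses over n variables.
    Each clause picks k distinct variables, each negated with
    probability 1/2. Literals are +v / -v for variable v in 1..n."""
    rng = random.Random(seed)
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n + 1), k))
            for _ in range(m)]

def satisfiable(formula, n):
    """Brute-force satisfiability check (exponential; small n only)."""
    for bits in itertools.product([False, True], repeat=n):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
               for clause in formula):
            return True
    return False
```

The statistical-physics predictions discussed below concern this ensemble as the density α approaches the satisfiability threshold (for example, approximately 4.267 for k = 3).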

Possible issues

Issues with LFP

There appear to be four issues related to the use of the characterization of P in terms of first-order logic with an ordering and a least-fixed-point operator. All of these are discussed in the Lipton/Regan post, with contributions from David Barrington, Paul Christiano, Lance Fortnow, James Gate, Arthur Milchior, Charanjit Jutla and Julian Bradfield.

  1. Is the lack of ordering in the logical structures used to define the LFP structure a problem? On the surface it appears to be, since it is known that parity cannot be expressed without an ordering even with LFP, and hence FO(LFP) does not capture P on unordered structures [AVV1997, page 35]. In Chapter 7, however, this issue seems to disappear, since the author introduces a successor relation over the literals [math]\displaystyle{ x_1\lt \dots\lt x_n\lt \neg x_1\lt \dots\lt \neg x_n }[/math].
  2. The paper requires that a certain predicate in the FO(LFP) formula be unary, and forces this by expanding neighborhoods and constructing k-tuples of parameters to act as single parameters. It is not clear how this affects the arguments about the propagation of local neighborhoods.
  3. Does the logical vocabulary created to express the LFP operation suffice to capture all P-time operations?
  4. Charanjit Jutla has pointed out that the argument in section 4.3 (with which several other people have also had issues) depends on the absence of a greatest fixed point. "This is a usual mistake most people new to fixed-point logics fall prey to. For example, now he has to deal with formulas of the kind [math]\displaystyle{ \nu x (f(y, x) \wedge g(y, x)) }[/math]. Section 4.2 deals with just one least fixed point operator…where his idea is correct. But, in the next section 4.3, where he deals with complex fixed point iterations, he is just hand waving, and possibly way off."
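As background for issues 1 and 3: the Immerman–Vardi theorem states that FO(LFP) captures P on ordered structures, and a least fixed point is computed by iterating the defining (monotone) formula starting from the empty relation. As an illustration only (the helper names are my own), here is a minimal sketch of that iteration for the standard example of transitive closure:

```python
def lfp(step, initial=frozenset()):
    """Iterate a monotone operator until it reaches its least fixed point."""
    current = initial
    while True:
        nxt = step(current)
        if nxt == current:
            return current
        current = nxt

def transitive_closure(edges):
    """TC is the least fixed point of
    phi(R) = E ∪ { (a, c) : ∃b. E(a, b) and R(b, c) }."""
    E = frozenset(edges)
    def phi(R):
        return E | frozenset((a, c) for (a, b) in E
                             for (b2, c) in R if b == b2)
    return lfp(phi)
```

On a structure with n elements the iteration stabilizes after polynomially many rounds, which is the intuition behind FO(LFP) queries being polynomial-time computable; the converse direction (every P-query is FO(LFP)-expressible) is where the ordering assumption is essential.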

Issues with random k-SAT

  1. The nomenclature of phase transitions: In the statistical physics picture, there is not a single phase transition, but rather a sequence of distinct transitions called "clustering", "condensation" and "freezing" (Florent Krzakala and Lenka Zdeborova). In the current version of the paper, clustering and freezing are mixed up.
  2. Whether the "clustering" or the "condensation" stages are significant: the latest ideas from physics suggest that random [math]\displaystyle{ k }[/math]-SAT and similar CSPs do not become hard at the clustering transition, as was previously believed, and that the condensation transition is not connected to hardness either. Graph coloring provides some evidence of this when one compares the performance of algorithms with the statistical physics predictions.
  3. Whether the "freezing" stage is significant: it was more recently conjectured in the statistical physics community (Florent Krzakala, Lenka Zdeborova and Cris Moore) that the really hard instances appear after the freezing transition, so that k-SAT is hard in the region where w.h.p. all clusters have frozen variables. Such a region is known rigorously to exist.
  4. The XOR-SAT objection: The conjecture that frozen variables make a problem hard is, however, restricted to NP-complete problems such as k-SAT and q-COL. Indeed, a linear problem such as random k-XORSAT also has a clustering transition, frozen variables, etc., and is not easy to solve with most algorithms, but it is of course in P: one can use Gaussian elimination to exploit the linear structure and solve it in polynomial time (Cris Moore, Alif Wahid, and Lenka Zdeborova). Similar issues might arise for other restricted CSPs that are in P but exhibit a freezing stage, as pointed out by several other people.
  5. Whether a complex solution space amounts to true problem hardness: The author tries to use the fact that for certain distributions of random k-SAT, the solution space has a "hard structure". There are two "meta" objections to this: they do not actually point to a place where the proof is wrong, but they do appear to give an obstacle to the general proof method. Any objections to these objections are welcome!
    1. Polytime-solvable problems (such as perfect matching on random graphs) can also have complicated solution distributions (Ryan Williams, on twitter). In fact, it is not hard to design 2-SAT formulas (in this case not random, but specifically designed ones) that have exponentially many clusters of solutions, each cluster "far" from the others. Thus the fact that random k-SAT has a "hard" distribution of solutions does not seem to be relevant for showing that k-SAT is hard: exhibiting a problem with a hard distribution of solutions is not sufficient for separating P from NP.
    2. Moreover, a hard distribution of solutions is not necessary for NP-hardness either. The "hard" case of 3-SAT is the case where there is *at most one* satisfying assignment, and there is a randomized reduction from 3-SAT to 3-SAT with at most one satisfying assignment (Valiant-Vazirani). This reduction increases the number of clauses and the number of variables, but that does not really matter. The point is that one can always reduce 3-SAT with a "complex" solution space to one with an "easy" solution space, so how can a proof separating P from NP rely on the former? Suppose Valiant-Vazirani can be derandomized to run in deterministic polynomial time (which is true if plausible circuit lower bounds hold up). For every LFP formula F that is to solve k-SAT, replace it with an LFP formula F' whose behavior is computationally equivalent to the following algorithm: first "Valiant-Vazirani-ize" the input formula (reduce it in polynomial time to one having at most one solution), then evaluate F on the result. These new formulas only ever have to deal with at most one solution. The intuition is that either Valiant-Vazirani cannot be derandomized (very unlikely) or the proof must break (Ryan Williams, on twitter). But again, this is just intuition.
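The XOR-SAT objection (item 4 above) rests on the fact that a k-XORSAT instance is a linear system over GF(2), solvable by Gaussian elimination regardless of how clustered or frozen its solution space is. As a concrete sketch only (the function name and input format are my own), assuming each equation is given as a list of 1-indexed variables whose XOR must equal a parity bit:

```python
def solve_xorsat(equations, n):
    """Solve a k-XORSAT system over GF(2) by Gaussian elimination.
    Each equation is (variables, parity). Returns a satisfying 0/1
    assignment as a list, or None if the system is inconsistent."""
    # Encode each equation as a bitmask: bit v-1 = variable v, bit n = rhs.
    pivot = {}  # leading column -> reduced row
    for variables, parity in equations:
        row = parity << n
        for v in variables:
            row ^= 1 << (v - 1)
        # Eliminate against the pivot rows found so far.
        for col in range(n):
            if (row >> col) & 1 and col in pivot:
                row ^= pivot[col]
        lead = next((c for c in range(n) if (row >> c) & 1), None)
        if lead is None:
            if (row >> n) & 1:
                return None  # reduced to 0 = 1: inconsistent
            continue         # redundant equation
        pivot[lead] = row
    # Back-substitute, setting free variables to 0.
    assignment = [0] * n
    for col in sorted(pivot, reverse=True):
        row = pivot[col]
        rhs = (row >> n) & 1
        for c in range(col + 1, n):
            if (row >> c) & 1:
                rhs ^= assignment[c]
        assignment[col] = rhs
    return assignment
```

This runs in polynomial time for any number of equations, which is exactly why frozen variables and clustering alone cannot certify hardness: random k-XORSAT exhibits both yet remains in P.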

Barriers

Any P vs NP proof must deal with the three known barriers described below. The concerns around this paper have not yet reached this stage.

Relativization

A quick overview of the relativization barrier can be found at Shiva Kintali's blog post.

Natural proofs

Algebraization

Terminology

Online reactions

Theoretical computer science blogs

Media and aggregators

8th August

9th August

10th August

Real-time searches

Other

Additions to the above list of links are of course very welcome.

Timeline

Bibliography

Other links