Oracle counterexample to finding pseudoprimes
Latest revision as of 18:17, 24 February 2011
One can consider the following soft generalisation to the finding primes conjecture:
- Soft conjecture: Suppose one has a set S of integers which is in P, and which is at least as dense as the primes (thus the set of k-digit elements of S has density [math]\displaystyle{ \gg 1/k }[/math]). Then there exists a deterministic algorithm which, when given an integer k, will produce an element of S of at least k digits in length in time poly(k).
This soft conjecture is for instance true if P=NP, by this argument. It certainly implies the finding primes conjecture, by taking S to be the set of primes. However, we show here that the soft conjecture fails in the presence of an oracle. Roughly speaking, this means that there is no "soft" way to prove the finding primes conjecture that uses nothing about the primes other than that they are in P and are dense.
Here's how it works. For each integer k, we recursively define the set [math]\displaystyle{ S_k }[/math] to be the set of k-digit numbers whose Kolmogorov complexity is at least [math]\displaystyle{ \sqrt{k} }[/math] relative to [math]\displaystyle{ S_1,\ldots,S_{k-1} }[/math]. What this means is that n lies in [math]\displaystyle{ S_k }[/math] if n has k digits and it is not possible to find a Turing machine of length less than [math]\displaystyle{ \sqrt{k} }[/math] that can compute n even given access to oracles for [math]\displaystyle{ S_1,\ldots,S_{k-1} }[/math]. An easy counting argument shows that all but at most [math]\displaystyle{ 2^{\sqrt{k}} }[/math] k-digit integers lie in [math]\displaystyle{ S_k }[/math]. Thus if we set S to be the union of the [math]\displaystyle{ S_k }[/math], then S is at least as dense as the primes (in fact it is much, much denser).
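To spell out that counting argument (a sketch, treating programs as binary strings and writing [math]\displaystyle{ 9 \cdot 10^{k-1} }[/math] for the number of k-digit integers):

```latex
% Fewer than 2^{sqrt(k)} programs of length < sqrt(k) exist, and each
% computes at most one integer, so at most 2^{sqrt(k)} k-digit integers
% can be excluded from S_k:
\#\{\text{programs of length} < \sqrt{k}\}
  \;\le\; \sum_{j < \sqrt{k}} 2^{j} \;<\; 2^{\sqrt{k}},
\qquad\text{hence}\qquad
|S_k| \;\ge\; 9 \cdot 10^{k-1} - 2^{\sqrt{k}}.
```

Since [math]\displaystyle{ 2^{\sqrt{k}} }[/math] is vastly smaller than [math]\displaystyle{ 10^{k-1} }[/math], almost every k-digit integer lies in [math]\displaystyle{ S_k }[/math].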
We now work relative to an oracle for S, i.e. the oracle takes any number n as input and returns whether it is in S in unit time. Clearly, S is now in P. Suppose for contradiction that there was an algorithm A (of length O(1)) which, when given k as input, returns an element of S at least k digits in length in time poly(k).
Let k be large. One can show inductively that every integer of k digits or more produced by the algorithm A with input k in time less than poly(k) has a Kolmogorov complexity of O( log(k) ) relative to [math]\displaystyle{ S_1,\ldots,S_{k-1} }[/math]; the point is that the oracles for [math]\displaystyle{ S_k, S_{k+1},\ldots }[/math] can be shown by induction to be useless as the numbers produced by A will never lie in these sets. But O(log(k)) is less than [math]\displaystyle{ \sqrt{k} }[/math] for k large, so A cannot produce an element of S of k digits or more, a contradiction.
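In symbols (a sketch, granting the inductive claim that queries to [math]\displaystyle{ S_k, S_{k+1},\ldots }[/math] may be answered "no"): any output n of A on input k is determined by the O(1)-bit code of A together with the O(log k)-bit encoding of k, so

```latex
K\bigl(n \mid S_1,\ldots,S_{k-1}\bigr)
  \;\le\; |A| + O(\log k) \;=\; O(\log k) \;<\; \sqrt{k}
  \quad\text{for large } k,
```

which places n outside [math]\displaystyle{ S_k }[/math].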
The same argument shows that there is no soft way to prove the weak conjecture either. However, it is not obvious whether one can adapt the argument to handle the situation in which there is a factoring oracle, since of course the fundamental theorem of arithmetic doesn't work if one replaces the primes by an arbitrary set S of "pseudoprimes".
Relationship with P=BPP
We now discuss whether one can modify the above oracle to produce a situation in which P=BPP, or alternatively one in which P!=BPP.
A relevant reference is this paper of Fortnow, in which it is noted in passing (page 3) that there are oracles in the literature that separate what he calls “Assumption 4” and “Assumption 3”. Assumption 4 is that P=BPP, while Assumption 3 is a bit stronger than the assumption that any set of pseudoprimes has a deterministic construction. The difference has to do with what Avi calls “unary inputs” in his post above: basically, in Assumption 3 we want to be able to construct elements from any set of “pseudoprimes” which is recognized by a *family of polynomial size circuits*, while in the problem posed in the wiki we want to be able to construct elements from any set which is recognized by a *uniform polynomial time algorithm*; let us call the latter the Wiki Assumption.
In any case, the same oracle mentioned by Fortnow should also falsify the Wiki Assumption, while having P=BPP.
The wiki page also raises the question of finding an oracle relative to which P!=BPP and the Wiki Assumption is false. In any oracle world in which P!=BPP, then Fortnow’s Assumption 3 is also false, and as discussed above Assumption 3 and the Wiki Assumption are very close. It is plausible that the standard oracle relative to which P!=BPP (which I am not familiar with) is also such that the Wiki Assumption fails, so it might be worth checking it out.
The standard construction of an oracle where P != BPP is very similar to the standard construction of an oracle where P != NP as described in the textbooks. If the oracle is A, the separating language L(A) is still the set of all unary inputs 1^n for which there exists a string of length n in A. But now, when you build A to diagonalize against the i-th polynomial-time machine, you make sure to include not *one or none* of the strings of a certain length but *many or none* of the strings of that length (that is, a constant fraction of all strings of that length, or none). This puts L(A) in BPP^A (in fact in RP^A). Note that it is always possible to add a constant fraction of the strings when required, because a polynomial-time algorithm can only query a tiny fraction of all strings of the input's length.
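Purely as an illustration, here is a toy version of that diagonalization in Python. The "machines" are plain functions restricted (an artificial simplification) to querying strings of the diagonalization length, and the names `build_oracle`, `machines`, `levels` are ours, not from any reference:

```python
# Toy diagonalization: build an oracle A so that L(A) = {1^n : A contains
# a string of length n} differs from every machine in `machines`, while
# every "yes" level contains a constant fraction of the strings of that
# length (which is what puts L(A) in RP^A in the real construction).

def build_oracle(machines):
    """machines[i] is a function (n, query) -> bool that may call
    query(x) for integers x in range(2**n) (length-n strings).
    Returns (A, levels): A maps a length n to the set of length-n
    strings in the oracle; levels[i] is the length used against
    machines[i]."""
    A = {}
    levels = []
    for i, M in enumerate(machines):
        n = i + 3                      # a fresh length for machine i
        A[n] = set()
        queried = set()
        def query(x, n=n, queried=queried):
            queried.add(x)
            return x in A[n]           # so far no length-n string is in A
        answer = M(n, query)
        if not answer:
            # M rejects 1^n: put every *unqueried* length-n string into A.
            # Since M makes far fewer than 2**n queries, this is a constant
            # fraction of all length-n strings, and M's run is unaffected.
            A[n] = set(range(2 ** n)) - queried
        # If M accepts, leave A empty at length n, so 1^n is not in L(A).
        levels.append(n)
    return A, levels

# Three toy "polynomial-time" machines.
machines = [
    lambda n, query: True,                             # always accept
    lambda n, query: False,                            # always reject
    lambda n, query: any(query(x) for x in range(8)),  # probe 8 strings
]
A, levels = build_oracle(machines)
```

Re-running each machine against the finished oracle gives the opposite of the membership of 1^{n_i} in L(A), since only unqueried strings were ever added.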
The tricky part is in checking that, under this oracle, Assumption 3 is falsified and possibly also the Wiki assumption is falsified. The proof that P != BPP implies that Assumption 3 is falsified is not completely straightforward as far as I can see. We can try to look at it.
The natural path to show that P != BPP implies that Assumption 3 is falsified would be showing that if we can deterministically in polynomial time find an accepted input to circuits that accept many inputs, then we can approximate the acceptance probability of a given circuit in polynomial time. I would guess this is done by applying the assumed algorithm to the circuit that interprets its input as a sample of a few inputs to the given circuit (the one whose acceptance probability we want to approximate) and checks whether the fraction of accepted inputs exceeds a threshold. The assumed algorithm returns a candidate sample, which we can check for correctness, and we start over with a larger or smaller threshold depending on whether the candidate was right or wrong. Proceeding this way by binary search we should be able to approximate the acceptance probability of the given circuit.
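A toy, brute-force rendering of that binary search (purely illustrative: the `finder` below just enumerates all samples, standing in for the hypothetical deterministic algorithm, and all names are ours):

```python
from itertools import product

# Toy "circuit" C on 4-bit inputs; its acceptance probability is 10/16.
def C(x):
    return x < 10

N, M = 16, 4                 # input-space size, inputs per sample
acc = [C(x) for x in range(N)]

def finder(t):
    """Stand-in for the assumed algorithm: given the circuit D_t that
    accepts a sample (x1..xM) iff the fraction of x_i accepted by C is
    >= t, return an accepted sample -- but only when D_t accepts at
    least half of all samples (the 'accepts many inputs' promise)."""
    good = [s for s in product(range(N), repeat=M)
            if sum(acc[x] for x in s) / M >= t]
    if len(good) * 2 >= N ** M:
        return good[0]
    return None              # behaviour outside the promise: unspecified

def estimate_acceptance():
    lo, hi = 0.0, 1.0
    for _ in range(10):
        t = (lo + hi) / 2
        s = finder(t)
        # verify the candidate before trusting it
        if s is not None and sum(C(x) for x in s) / M >= t:
            lo = t           # threshold t is achievable on many samples
        else:
            hi = t           # too ambitious: lower the threshold
    return (lo + hi) / 2

p_est = estimate_acceptance()
```

With this tiny sample size the search converges to the median sample fraction (about 0.75 here) rather than the true probability 0.625; larger M concentrates the sample fraction around the true acceptance probability, which is the role of the "bit more" discussed next.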
The “natural path” to show that Assumption 3 implies P = BPP suggested above can be made to work, but it requires a bit more than what I sketched. This is actually done in Fortnow’s paper (see the proof of Theorem 2.7 on page 3). In turn, this “bit more” that is required seems to use circuits in an essential way (for example, in Fortnow’s proof, in the simulation of Lautemann’s promise-BPP in RP^{promise-RP}), in a way that seems to make the Wiki Assumption insufficient to put L(A) in P^A (which would be the desired contradiction). In conclusion, it seems we need a refined oracle construction or a different argument linking Assumption 3 with P = BPP.
Full derandomization
On the other hand, the required result will follow from full derandomization (http://blog.computationalcomplexity.org/2006/07/full-derandomization.html), which itself follows from some very harsh hardness result. Then there was a comment (but I forgot where) by Avi Wigderson that full derandomization is much more than needed, because of the unary nature of the problem, which implies that a less harsh hardness result (for Turing machines rather than circuits) would suffice.