Pseudo-random generators (PRG)

Loosely speaking, a pseudorandom generator is a deterministic and efficiently computable function whose output cannot be distinguished in polynomial time from a truly random string.

To see how to make this idea precise, consider the following set-up. Let m and n be integers with m considerably less than n, and let [math]\displaystyle{ \phi:\{0,1\}^m\rightarrow\{0,1\}^n }[/math]. Now consider the following two ways of producing random n-bit sequences. The first way is simply to choose a sequence uniformly at random from [math]\displaystyle{ \{0,1\}^n }[/math]. This needs n "units of randomness" -- if we were using a coin, we would have to toss it n times. The second way is to choose a sequence x uniformly at random from [math]\displaystyle{ \{0,1\}^m }[/math] and then to evaluate [math]\displaystyle{ \phi(x) }[/math].
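To make the two sampling methods concrete, here is a minimal Python sketch. The names and the stretching map are illustrative inventions: in particular this phi, which just repeats its seed cyclically, is emphatically not a pseudorandom generator; it merely fixes the types involved.

<pre>
import secrets

def method_one(n):
    # Method 1: toss a fair coin n times, costing n "units of randomness".
    return [secrets.randbits(1) for _ in range(n)]

def phi(seed, n):
    # Placeholder stretching map from {0,1}^m to {0,1}^n: repeat the
    # seed cyclically.  NOT a pseudorandom generator; for illustration only.
    m = len(seed)
    return [seed[i % m] for i in range(n)]

def method_two(m, n):
    # Method 2: toss the coin only m times, then expand deterministically.
    return phi(method_one(m), n)
</pre>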

Obviously, the second method needs only m random bits, so if we regard randomness as a resource, then the second method is a lot cheaper. But will it actually work for us? That is, if we wanted to run a randomized algorithm that needed n random bits, could we get away with using n bits produced by the second method rather than by the first method?

Well, for such an idea to work, we would need the random bits produced by the second method to behave, for all practical purposes, as though they were chosen uniformly from [math]\displaystyle{ \{0,1\}^n }[/math]. What this means is that they should "fool" all efficient algorithms. More precisely still, we require the following. Let x be a random string chosen uniformly from [math]\displaystyle{ \{0,1\}^n }[/math], let y be a random string chosen uniformly from [math]\displaystyle{ \{0,1\}^m }[/math] and let f be any Boolean function that can be computed in polynomial time. Then the probability that f(x)=1 should differ from the probability that [math]\displaystyle{ f(\phi(y))=1 }[/math] by at most [math]\displaystyle{ \epsilon(n) }[/math], where [math]\displaystyle{ \epsilon(n) }[/math] tends to zero faster than the reciprocal of any polynomial. If that is the case, then we cannot use f as a test for distinguishing between the random bits and the pseudorandom bits, even if we independently repeat the test polynomially many times.
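The fooling condition can be checked empirically for any particular test. The following sketch reuses method_one and the toy phi from the snippet above and estimates the distinguishing advantage of a candidate test f by sampling; for a genuine pseudorandom generator, no polynomial-time f would achieve a non-negligible advantage.

<pre>
def estimate_advantage(f, phi, m, n, trials=100000):
    # Empirical estimate of |Pr[f(x)=1] - Pr[f(phi(y))=1]|, with x
    # uniform on {0,1}^n and y uniform on {0,1}^m.
    hits_true = sum(f(method_one(n)) for _ in range(trials))
    hits_pseudo = sum(f(phi(method_one(m), n)) for _ in range(trials))
    return abs(hits_true - hits_pseudo) / trials

# A test that defeats the cyclic-repeat phi with m = 8: its output always
# satisfies u[0] == u[8], while a uniform string does so only half the
# time, so the estimated advantage comes out close to 1/2.
test = lambda u: int(u[0] == u[8])
</pre>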

If m is sufficiently small, then a pseudorandom generator can be used to derandomize algorithms. All you have to do is this. Let r be about the same size as n (the precise condition is that n should be bounded above by a polynomial in r). We can think of an efficient randomized algorithm for computing a Boolean function [math]\displaystyle{ g:\{0,1\}^r\rightarrow\{0,1\} }[/math] as a polynomial-time-computable Boolean function [math]\displaystyle{ f:\{0,1\}^n\times\{0,1\}^r\rightarrow\{0,1\} }[/math], where f(x,z) is the output when x is the string of random bits and z is the input. If we fix z, then this becomes a function of x. Now suppose we calculated [math]\displaystyle{ f(\phi(y),z) }[/math] instead. If this did not output 1 with almost exactly the same probability, then we would have an efficiently computable function on [math]\displaystyle{ \{0,1\}^n }[/math], namely the function [math]\displaystyle{ u\mapsto f(u,z) }[/math], that behaved differently when you presented it with n truly random bits from how it behaved if you presented it with a random output of [math]\displaystyle{ \phi }[/math]. From this it would follow that [math]\displaystyle{ \phi }[/math] was not a pseudorandom generator.
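In code, the contrapositive argument above is just partial application: fixing z turns the two-argument algorithm into a one-argument test. A sketch, with f and z as in the paragraph above:

<pre>
def test_from_fixed_input(f, z):
    # Fixing the input z turns the two-argument algorithm f(x, z) into
    # a single polynomial-time Boolean test u -> f(u, z) on {0,1}^n.
    # If this test behaved noticeably differently on phi(y) than on a
    # uniform string, phi would not be a pseudorandom generator.
    return lambda u: f(u, z)
</pre>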

So if [math]\displaystyle{ \phi }[/math] is a pseudorandom generator, then any efficient randomized algorithm will behave in almost exactly the same way if you use a random [math]\displaystyle{ \phi(y) }[/math] instead of a random x. And if m is really small, like [math]\displaystyle{ C\log n }[/math], then you can find out exactly how the algorithm behaves for [math]\displaystyle{ \phi(y) }[/math] by simply running it for every possible y, since that requires [math]\displaystyle{ 2^{C\log n} }[/math] runs, which is polynomial in n. Then you simply add up the number of times that you got 1 and divide by the total number of runs, to get a very good estimate for the probability that a truly randomized algorithm would give 1. In this way, the algorithm has been derandomized.
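A sketch of the resulting deterministic procedure, using the same illustrative names as the earlier snippets and assuming phi stretches [math]\displaystyle{ m = C\log_2 n }[/math] bits to n bits:

<pre>
from itertools import product

def derandomized_acceptance_probability(f, phi, m, n, z):
    # Run f(phi(y), z) on every one of the 2^m seeds y and average.
    # When m = C*log2(n) this is 2^m = n^C deterministic runs, i.e.
    # polynomial in n, and the average is within the fooling error of
    # the true acceptance probability Pr_x[f(x, z) = 1].
    total = sum(f(phi(list(y), n), z) for y in product([0, 1], repeat=m))
    return total / 2 ** m
</pre>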

Note that there is a slight issue in the above, which is that the probabilities we obtain from this derandomization will be multiples of [math]\displaystyle{ 2^{-m} }[/math], so we cannot expect them to be accurate to within less than this amount. So we cannot get accuracy that is better than 1/poly(n) if m is only around [math]\displaystyle{ C\log n }[/math]. However, we can fix the degree of the polynomial that we allow when we are testing for randomness and then choose C to be large enough to defeat all tests that run in time bounded above by a polynomial of that degree.
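For concreteness: if [math]\displaystyle{ m = C\log_2 n }[/math], then the estimates are integer multiples of [math]\displaystyle{ 2^{-m} = 2^{-C\log_2 n} = n^{-C} }[/math], so the best accuracy one can hope for from the [math]\displaystyle{ 2^m }[/math] runs is [math]\displaystyle{ n^{-C} }[/math].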

Generic PRGs that fool all polynomial-time probabilistic Turing machines are not known to exist unconditionally (see the P=BPP conjecture). Nevertheless, for weaker models of computation unconditional pseudorandom generators are known. For example, there exist [http://en.wikipedia.org/wiki/Pseudorandom_generators_for_polynomials PRGs for multivariate polynomials over finite fields].
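To give a flavour of the degree-one case (parities, i.e. linear polynomials over [math]\displaystyle{ \mathbb{F}_2 }[/math]), here is a Python sketch of the "powering" small-bias generator of Alon, Goldreich, Håstad and Peres. From a seed consisting of two elements x, y of [math]\displaystyle{ \mathbb{F}_{2^8} }[/math] (16 bits in all) it produces n output bits whose ith bit is [math]\displaystyle{ \langle x^i,y\rangle }[/math], and every fixed parity of output bits is fooled up to bias at most [math]\displaystyle{ (n-1)/2^8 }[/math]. The field size [math]\displaystyle{ 2^8 }[/math] and the particular reduction polynomial are choices made here for brevity, not part of the construction.

<pre>
def gf256_mul(a, b):
    # Carry-less multiplication in GF(2^8), reduced modulo the
    # irreducible polynomial x^8 + x^4 + x^3 + x + 1 (hex 0x11b).
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11b
    return r

def aghp_bits(x, y, n):
    # Output bit i is the inner product over F_2 of the bit
    # representations of x^i and y.  Any nonempty parity of output bits
    # equals <p(x), y> for some nonzero polynomial p of degree below n,
    # which is biased only when p(x) = 0; that event has probability at
    # most (n-1)/256 over a uniform x.
    bits, power = [], 1  # power runs through x^0, x^1, x^2, ...
    for _ in range(n):
        bits.append(bin(power & y).count("1") & 1)
        power = gf256_mul(power, x)
    return bits

# Example: stretch a 16-bit seed to 64 bits; every parity test is then
# fooled up to bias (64-1)/256, i.e. about 1/4.  Toy parameters, but the
# guarantee is unconditional.
import secrets
output = aghp_bits(secrets.randbits(8), secrets.randbits(8), 64)
</pre>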