Kolmogorov complexity

Intuitively, the Kolmogorov complexity of an integer $n$ is the least number of bits one needs to describe $n$ in some suitable language. For instance, one could use the language of Turing machines: an integer $n$ has Kolmogorov complexity at most $k$ if there exists a Turing machine, describable in at most $k$ bits, which, when run, produces $n$ as output.
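True Kolmogorov complexity is uncomputable, but any fixed compressor gives a computable upper bound on description length in one particular language. As an illustrative sketch (the use of `zlib` here is our choice, not part of the definition), one can compare the bit length of an integer to the length of its compressed binary representation:

```python
import zlib

def description_length_upper_bound(n: int) -> int:
    """Upper-bound the description length of n (in bits) by the size of its
    zlib-compressed binary representation.  Kolmogorov complexity itself is
    uncomputable; a compressor only ever certifies an upper bound."""
    raw = n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")
    return 8 * len(zlib.compress(raw, 9))

# A highly structured 8000-bit number compresses far below its bit length,
# witnessing that its Kolmogorov complexity is much less than 8000.
structured = int("10" * 4000, 2)
print(structured.bit_length(), description_length_upper_bound(structured))
```

A random 8000-bit integer, by contrast, will typically not compress below its raw length at all, consistent with the counting argument below.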
Thus, for instance, any $k$-bit integer has Kolmogorov complexity at most $k+O(1)$ (the $O(1)$ overhead accounting for the trivial program that simply prints the $k$ bits hard-coded into it). On the other hand, since there are fewer than $2^{k+1}$ Turing machines describable in at most $k$ bits, there are fewer than $2^{k+1}$ integers of Kolmogorov complexity at most $k$ (we'll refer to this as the counting argument). As a consequence, most $k$-bit integers have Kolmogorov complexity close to $k$.
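The counting argument is just arithmetic, and can be made quantitative. A small sketch (the bound $2^{k-c+1}-1$ counts all binary programs of length at most $k-c$, and $2^{k-1}$ is the number of integers with exactly $k$ bits):

```python
def frac_compressible(k: int, c: int) -> float:
    """Upper bound on the fraction of k-bit integers whose Kolmogorov
    complexity is at most k - c: there are at most 2**(k-c+1) - 1 programs
    of length <= k - c, while there are 2**(k-1) integers of exactly k bits."""
    return (2 ** (k - c + 1) - 1) / 2 ** (k - 1)

# Fewer than ~0.4% of 1000-bit integers can be described in <= 990 bits.
print(frac_compressible(1000, 10))
```

So saving even $c$ bits of description is possible for at most a $2^{-(c-2)}$ fraction of $k$-bit integers, which is the precise sense in which most of them have complexity close to $k$.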
On the other hand, a deterministic algorithm that takes $k$ as input and runs in time polynomial in $k$ can only produce integers of Kolmogorov complexity $O(\log k)$, since any number it produces can be described by a Turing machine of length $O(\log k) + O(1)$ (the $O(1)$ bits describe the algorithm itself, and the $O(\log k)$ bits describe $k$, and hence how long the program is to run).
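The point is that the algorithm's output is a function of $k$ alone, so a full description consists of the fixed program text plus the $\lceil \log_2 k \rceil$ bits of $k$. A sketch, where both the stand-in algorithm and the constant `PROGRAM_OVERHEAD` are purely illustrative:

```python
def deterministic_algorithm(k: int) -> int:
    # Hypothetical stand-in for any deterministic poly-time algorithm:
    # its output is completely determined by the input k.
    return pow(3, k, 2 ** k)

def description_bits(k: int) -> int:
    """Bits needed to describe deterministic_algorithm(k): the fixed
    program text (an O(1) constant, here taken to be 64 bits purely
    for illustration) plus the bits of k itself."""
    PROGRAM_OVERHEAD = 64  # illustrative O(1) constant, not a real measurement
    return PROGRAM_OVERHEAD + k.bit_length()

k = 10 ** 6
# The description stays logarithmic in k even though the output
# deterministic_algorithm(k) may itself be a ~10**6-bit number.
print(description_bits(k))  # → 84
```

This is why pseudorandom or "generic" large integers cannot be exhibited by fast deterministic algorithms: anything such an algorithm prints is, by construction, highly compressible.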