Kolmogorov complexity
Intuitively, the Kolmogorov complexity of an integer [math]\displaystyle{ n }[/math] is the least number of bits one needs to describe [math]\displaystyle{ n }[/math] in some suitable language. For instance, one could use the language of Turing machines: an integer [math]\displaystyle{ n }[/math] has Kolmogorov complexity at most [math]\displaystyle{ k }[/math] if there exists a Turing machine whose description has length at most [math]\displaystyle{ k }[/math] and which, when run, produces [math]\displaystyle{ n }[/math] as output.
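Kolmogorov complexity itself is uncomputable, but the output size of any fixed compressor gives a computable upper bound on description length. The following toy sketch (the choice of zlib, and of Python rather than Turing machines, are illustrative assumptions and not part of the definition) shows how a highly structured integer admits a description far shorter than its bit length, while a typical integer does not.

<syntaxhighlight lang="python">
# Toy illustration only: true Kolmogorov complexity is uncomputable, but the
# compressed size produced by any fixed compressor (here zlib, an arbitrary
# choice) is a valid upper bound on description length, up to the O(1) cost
# of the decompressor itself.
import random
import zlib

def description_length_upper_bound(n: int) -> int:
    """Bits needed to describe n via zlib, ignoring the fixed decompressor overhead."""
    raw = n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")
    return 8 * len(zlib.compress(raw, 9))

# A highly structured 8000-bit integer (repeating pattern) compresses to far
# fewer bits than its length...
structured = int("10" * 4000, 2)
# ...while a "typical" 8000-bit integer does not compress at all.
random.seed(0)
typical = random.getrandbits(8000) | (1 << 7999)

print(structured.bit_length(), description_length_upper_bound(structured))
print(typical.bit_length(), description_length_upper_bound(typical))
</syntaxhighlight>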
Thus, for instance, any [math]\displaystyle{ k }[/math]-bit integer has Kolmogorov complexity at most [math]\displaystyle{ k+O(1) }[/math]: the [math]\displaystyle{ O(1) }[/math] overhead pays for a trivial program that simply outputs the [math]\displaystyle{ k }[/math] bits of [math]\displaystyle{ n }[/math] hard-coded into the remainder of the Turing machine's description. On the other hand, there are fewer than [math]\displaystyle{ 2^{k+1} }[/math] Turing machines of length at most [math]\displaystyle{ k }[/math] (at most [math]\displaystyle{ 2^j }[/math] of each length [math]\displaystyle{ j \le k }[/math]), and hence fewer than [math]\displaystyle{ 2^{k+1} }[/math] integers of Kolmogorov complexity at most [math]\displaystyle{ k }[/math] (we'll refer to this as the counting argument). As a consequence, most [math]\displaystyle{ k }[/math]-bit integers have Kolmogorov complexity close to [math]\displaystyle{ k }[/math]: fewer than a [math]\displaystyle{ 2^{2-c} }[/math] fraction of them can have Kolmogorov complexity at most [math]\displaystyle{ k-c }[/math].
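As a quick numeric sanity check on the counting argument, the sketch below (a back-of-the-envelope calculation; the bit length [math]\displaystyle{ k }[/math] and the gaps [math]\displaystyle{ c }[/math] are arbitrary example values) bounds the fraction of [math]\displaystyle{ k }[/math]-bit integers that can have complexity at most [math]\displaystyle{ k-c }[/math].

<syntaxhighlight lang="python">
# Counting argument in numbers: there are 2^0 + 2^1 + ... + 2^m = 2^(m+1) - 1
# binary programs of length at most m, so at most that many integers can have
# Kolmogorov complexity at most m.  Taking m = k - c, the fraction of the
# 2^(k-1) k-bit integers with complexity at most k - c is below 2^(2-c).

def programs_up_to(length: int) -> int:
    """Number of binary strings (candidate programs) of length <= length."""
    return 2 ** (length + 1) - 1

k = 64                         # arbitrary example bit length
k_bit_integers = 2 ** (k - 1)  # integers with exactly k bits
for c in (4, 8, 16, 32):
    frac = programs_up_to(k - c) / k_bit_integers
    print(f"at most {frac:.2e} of {k}-bit integers have complexity <= {k - c}")
</syntaxhighlight>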
In contrast, a deterministic algorithm which takes [math]\displaystyle{ k }[/math] as input and runs in time polynomial in [math]\displaystyle{ k }[/math] can only produce integers of Kolmogorov complexity [math]\displaystyle{ O(\log k) }[/math], since any number it produces can be described by a Turing machine of length [math]\displaystyle{ O(\log k) + O(1) }[/math]: the [math]\displaystyle{ O(1) }[/math] bits describe the algorithm itself, and the [math]\displaystyle{ O(\log k) }[/math] bits describe [math]\displaystyle{ k }[/math] and how long the program is to run.
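The sketch below makes this description explicit for one arbitrary placeholder algorithm: its output is pinned down by the algorithm's fixed source code (playing the role of the [math]\displaystyle{ O(1) }[/math] part, here crudely measured via Python's inspect module rather than a Turing machine encoding) together with the roughly [math]\displaystyle{ \log_2 k }[/math] bits of [math]\displaystyle{ k }[/math] itself.

<syntaxhighlight lang="python">
# Sketch: the output of a deterministic algorithm on input k is completely
# described by (a) the algorithm's fixed source code, an O(1)-bit string, and
# (b) the binary encoding of k, about log2(k) bits.  The particular algorithm
# below is an arbitrary placeholder, and Python source length is only a crude
# stand-in for the length of a Turing machine description.
import inspect

def some_deterministic_algorithm(k: int) -> int:
    """Placeholder poly(k)-time algorithm producing some k-bit integer."""
    return (1 << (k - 1)) | (sum(i * i for i in range(k)) % (1 << (k - 1)))

k = 10_000
algorithm_bits = 8 * len(inspect.getsource(some_deterministic_algorithm).encode())  # the O(1) part
input_bits = k.bit_length()                                                         # the O(log k) part
output_bits = some_deterministic_algorithm(k).bit_length()

print(f"output: {output_bits} bits; description: {algorithm_bits} + {input_bits} "
      f"= {algorithm_bits + input_bits} bits")
</syntaxhighlight>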