
Applications of unconditional pseudorandomness in complexity theory

Posted on: 2008-02-20
Degree: Ph.D
Type: Thesis
University: Harvard University
Candidate: Healy, Alexander D
Full Text: PDF
GTID: 2448390005951420
Subject: Computer Science
Abstract/Summary:
Pseudorandomness, that is, information that "appears random" even though it is generated using very little true randomness, is a fundamental notion in cryptography and complexity theory.

This thesis explores the applications of pseudorandomness within complexity theory, with a focus on pseudorandomness that can be constructed unconditionally, that is, without relying on any unproven complexity assumptions. Such pseudorandomness only "fools" restricted classes of algorithms, and yet it can be applied to prove complexity results about very general models of computation. For instance, we show the following:

(1) Randomness-Efficient Error Reduction for Parallel Algorithms. Typically, to gain confidence in a randomized algorithm, one repeats it several times (with independent randomness) and takes the majority vote of the executions. While very effective, this is wasteful in terms of the number of random bits used. Randomness-efficient error reduction techniques are known for polynomial-time algorithms, but they do not readily apply to parallel algorithms, since the techniques seem inherently sequential. We achieve randomness-efficient error reduction for highly parallel algorithms: specifically, we can reduce the error of a parallel algorithm to any delta > 0 while paying only O(log(1/delta)) additional random bits, matching the known results for polynomial-time algorithms. (The first sketch following this abstract illustrates the naive approach and its random-bit cost.)

(2) Hardness Amplification within NP. A fundamental question in average-case complexity is whether P ≠ NP implies the existence of functions in NP that are hard on average (over randomly chosen inputs). While an answer to this question seems far beyond the reach of current techniques, we show that powerful hardness amplification is indeed feasible within NP. In particular, we show that if NP has a mildly hard-on-average function f (i.e., any small circuit for computing f fails on at least a constant fraction of inputs), then NP has a function f' that is extremely hard on average (i.e., any small circuit for computing f' succeeds with only exponentially small advantage over random guessing). Previous results yielded only functions f' that cannot be computed with polynomial advantage over random guessing. Our stronger results are obtained by using derandomization and nondeterminism in constructing f'. (The second sketch below illustrates the classical amplification template.)

A common theme in our results is the computational efficiency of pseudorandom generators. Indeed, our results both rely upon, and enable us to construct, pseudorandom generators that can be computed very efficiently (in terms of parallel complexity).
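To make the repetition-based approach of result (1) concrete, here is a minimal Python sketch of naive majority-vote error reduction. The procedure noisy_decider and all constants are illustrative assumptions, not taken from the thesis; the comments contrast the naive random-bit cost with the randomness-efficient bound the abstract cites.

    import math
    import random

    # A toy stand-in for a two-sided-error randomized decision procedure:
    # it tests "is x even?" but lies whenever its coin tosses are all zero,
    # so each run is correct with probability 7/8 >= 2/3.
    def noisy_decider(x, coins):
        if all(c == 0 for c in coins):
            return not (x % 2 == 0)
        return x % 2 == 0

    # Naive error reduction: repeat with fresh coins and take the majority.
    # By a Hoeffding bound, k = 18 * ln(1/delta) repetitions push the error
    # below delta when each run errs with probability at most 1/3.
    def majority_vote(x, delta, coins_per_run=3):
        k = max(1, math.ceil(18 * math.log(1 / delta)))
        votes = sum(
            noisy_decider(x, [random.randint(0, 1) for _ in range(coins_per_run)])
            for _ in range(k)
        )
        # Naive cost: k * coins_per_run fresh random bits in total.
        # Randomness-efficient techniques (e.g., walks on expander graphs)
        # reuse coins to get coins_per_run + O(log(1/delta)) bits instead.
        return votes > k // 2

    print(majority_vote(42, delta=1e-6))  # correct with probability >= 1 - 1e-6

The point of the contrast in the comments is exactly the thesis's concern: independent repetition multiplies the random-bit cost by k, whereas randomness-efficient error reduction adds only O(log(1/delta)) bits on top of a single run.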
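The classical template behind hardness amplification as in result (2) is Yao's XOR lemma: evaluate f on many independent inputs and combine the answers so that any small circuit's advantage decays exponentially with the number of copies. The sketch below shows only this generic template with a toy predicate; it is not the thesis's construction. XOR need not keep f' inside NP, which is why amplification within NP instead uses monotone combining functions together with the derandomization and nondeterminism the abstract mentions.

    import functools
    import operator
    import random

    # XOR-lemma-style amplification (illustrative only): from a predicate f,
    # define f'(x1, ..., xk) = f(x1) XOR ... XOR f(xk).  If every small
    # circuit fails to compute f on a constant fraction of inputs, small
    # circuits' advantage in computing f' decays exponentially with k.
    def amplified(f, xs):
        return functools.reduce(operator.xor, (f(x) for x in xs))

    # Toy predicate standing in for a mildly hard-on-average function.
    def f(x):
        return (x * x + x) % 7 < 3

    inputs = [random.randrange(2 ** 10) for _ in range(5)]  # k = 5 blocks
    print(amplified(f, inputs))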
Keywords/Search Tags: Random, Complexity, Parallel