
Computational complexity and information theory

Posted on: 2008-08-30
Degree: Ph.D
Type: Thesis
University: Stanford University
Candidate: Sohangir, Sina
Full Text: PDF
GTID: 2448390005954785
Subject: Engineering
Abstract/Summary:
Measuring the hardness or complexity of computational problems has been investigated for many years. Many intuitive relationships suggest that information theory is an effective tool for measuring computational complexity. In this thesis we first explain why traditional measures of information are unable to characterize the complexity of computational problems. We then raise and answer an information theoretic question about computation that has not been considered before: for a given computational problem and a given input probability distribution, what is the least information, on average, that any computation box needs to reveal from the input in order to evaluate the output? We call this quantity the effective input information (EII) and show that it is an information theoretic generalization of the average depth of decision trees; it can therefore be used to measure the intrinsic complexity of computations under different input distributions.

We then show that the effective input information influences the output of a computation problem in two different ways. One part of the effective input information is used to make decisions, while another part is communicated directly to the output after the decisions are finalized. We then demonstrate how to measure only the part of the effective input information that is used to make decisions, and show why it provides a good measure of computational complexity. We call this part of the EII the average number of decisions. This allows us to measure computation in bits, in terms of the number of required binary decisions.

We use our proposed information theoretic measures of computation to evaluate the complexity of the problem of comparing two real valued random variables. Our measures allow us to demonstrate that, despite the simplicity of the problem, the best strategy for comparing two real numbers has some unexpected properties.
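As an illustrative sketch (not taken from the thesis itself): for a fixed decision tree, the average depth that EII is said to generalize is simply the expected number of binary decisions under the input distribution, and classical Shannon entropy gives a natural point of comparison. The tree and probabilities below are hypothetical examples chosen so the numbers work out cleanly.

```python
import math

def average_depth(leaves):
    """leaves: (probability, depth) pairs, one per leaf of a fixed
    decision tree; returns the expected number of binary decisions."""
    return sum(p * d for p, d in leaves)

def entropy(probs):
    """Shannon entropy in bits -- a classical yardstick for comparison."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical example: four outcomes with probabilities 1/2, 1/4, 1/8, 1/8,
# identified by a tree that tests the likeliest outcomes first
# (leaf depths 1, 2, 3, 3):
leaves = [(0.5, 1), (0.25, 2), (0.125, 3), (0.125, 3)]
print(average_depth(leaves))            # 1.75 decisions on average
print(entropy([p for p, _ in leaves]))  # 1.75 bits: this tree matches the entropy
```

For this distribution the average depth meets the entropy exactly; for other input distributions the same tree would be suboptimal, which is why a complexity measure has to be stated relative to the input distribution.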
In particular, the best strategy for comparing two real numbers goes through phase transitions as the problem changes.

We also consider the following information theoretic question: what is the output entropy of a computation if no (or only limited) computational resources are available? We show how our notions of effective input information and average number of decisions can help answer this question. We propose a procedure for evaluating the output entropy in such scenarios and show that it satisfies some desirable properties.

Finally, we consider the problem of computation using noisy decision trees and explain, using the information theoretic tools developed in this thesis, how decision making in a noisy decision tree is analogous to communication over a noisy channel. We define the notion of switching in a decision node and argue that each noisy decision node has a switching capacity which cannot be exceeded. We then show that, using a coding construction analogous to channel coding, noisy decision nodes can achieve their switching capacity in the limit. The most important consequence of this model is that, in the limit, by utilizing suitable coding schemes, noiseless computation can be implemented with noisy decision trees at the cost of only a constant factor of redundancy compared to the noiseless case.
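A minimal sketch of the channel analogy, under the assumption that a noisy decision node which errs with probability eps is modeled as a binary symmetric channel: its Shannon capacity 1 - h(eps) then plays the role of a per-node capacity bound, and the simplest coding construction, repeating each decision and taking a majority vote, already shows noise being traded for a constant factor of redundancy. The thesis's exact definition of switching capacity may differ from this textbook model.

```python
import math
from math import comb

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(eps):
    """Capacity of a binary symmetric channel with crossover probability eps;
    by analogy, an upper bound on reliable bits per noisy decision."""
    return 1.0 - h2(eps)

def majority_error(n, eps):
    """Error probability after repeating a noisy decision n times (n odd)
    and taking a majority vote -- the crudest channel-coding construction."""
    return sum(comb(n, k) * eps**k * (1 - eps)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(bsc_capacity(0.1))                             # ~0.531 bits per noisy decision
print([majority_error(n, 0.1) for n in (1, 3, 5)])  # 0.1, 0.028, ~0.0086
```

Majority voting decays the error only polynomially per repetition; the point of capacity-achieving codes, as in the channel-coding analogy drawn above, is that the redundancy factor needed for reliable computation stays bounded by a constant rather than growing with the problem size.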
Keywords/Search Tags: Computation, Information, Complexity, Noisy decision, Problem, Using