
Memory model based algorithmic computational optimizations for signal processing

Posted on: 2001-03-15
Degree: Ph.D
Type: Thesis
University: University of California, Berkeley
Candidate: Cheung, Gene
Full Text: PDF
GTID: 2468390014451810
Subject: Engineering
Abstract/Summary:
Signal processing algorithms often perform designated tasks under environmental constraints. While many algorithms are designed to be rate-adaptive, adapting to a changing rate-constrained environment, very few are designed to be computation-adaptive, adapting to a changing computation-constrained environment. This thesis addresses this important yet neglected property of computation-adaptivity in signal processing algorithms.

Among the possible machine-dependent optimizations one can perform, we examine the fundamental tradeoff between computation and memory for optimal execution speed. At one extreme, an algorithm can process input using only memory and no computation: since input is represented as bits, the number of possible inputs is finite, so we can pre-compute all the corresponding outputs and store them in a lookup table, to be indexed by the input during execution. While execution then requires only a single data memory retrieval, for a machine with finite and often small memory this algorithm is not feasible, since the number of inputs is nevertheless exceedingly large.

At the other extreme, an algorithm can process input using only computation and no memory: for each input, the corresponding output is computed on-the-fly from scratch. This algorithm is likely sub-optimal, since no computation is amortized across inputs via data memory storage, and the machine's memory is under-utilized. Instead of these two extremes, the optimal algorithm uses the right mixture of computation and memory to minimize execution time. The goal of this thesis is to find this mixture for specific, important signal processing algorithms: the variable-length code decoding algorithm, and the scalar and vector quantizer encoding algorithms.

Variable-length codes (VLCs) have been used for lossless signal compression, in the form of Huffman codes, for decades.
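The two extremes of the computation-memory tradeoff can be illustrated with a toy example not drawn from the thesis: counting the set bits in an input word.

```python
# Toy illustration (not from the thesis) of the two extremes of the
# computation-memory tradeoff, using 8-bit population count as the task.

# Extreme 1: pure memory. Precompute every answer once; each query is then
# a single table lookup. Feasible here only because the input space
# (256 values) is tiny; for realistic input sizes the table explodes.
POPCOUNT_TABLE = [bin(i).count("1") for i in range(256)]

def popcount_lookup(x: int) -> int:
    return POPCOUNT_TABLE[x]

# Extreme 2: pure computation. Recompute the answer from scratch for
# every input, using no auxiliary storage.
def popcount_compute(x: int) -> int:
    count = 0
    while x:
        count += x & 1
        x >>= 1
    return count

# A mixed strategy: process a 16-bit input as two 8-bit table lookups,
# trading some memory for less computation per input.
def popcount16_mixed(x: int) -> int:
    return POPCOUNT_TABLE[x & 0xFF] + POPCOUNT_TABLE[x >> 8]
```

The mixed variant is the kind of intermediate point the thesis argues an optimal algorithm should occupy, with the exact balance chosen for the target machine's memory structure.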
Decoding a VLC has always been a time-consuming task; in this thesis we show that, by finding the algorithm with the optimal computation-memory tradeoff for a given machine's memory structure, we can optimize decoding performance.

Another application of VLC decoding is Internet Protocol (IP) address lookup. In the current Internet IP architecture, packets of information are forwarded at a router towards their destinations via a route lookup procedure: the longest prefix in the routing table that matches the packet's IP destination address is located, and the packet is forwarded on the outgoing link indicated by the corresponding table entry. For optimal network performance, this lookup procedure needs to be as fast as possible; designing a fast lookup algorithm that performs this procedure is the IP address lookup problem. In our thesis work, we cast this problem as a VLC decoding problem and find an algorithm with the optimal computation-memory tradeoff. Simulations show our algorithm outperforms competing algorithms in the literature by a wide margin.

The last algorithms we optimize are the scalar and vector quantizer encoding algorithms. Non-uniform scalar quantizer encoding is the process of mapping an input to one of many partitions of the input space. Vector quantizer encoding extends the scalar case to multiple dimensions: it maps an input vector to one of many nearest-neighbor regions in the input vector space. For each case, we show that an algorithm that optimally trades off computation and memory for a particular machine outperforms existing algorithms in the literature.
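The route lookup procedure described above can be sketched in its most naive form. This is an illustrative longest-prefix match over bit-string prefixes, not the thesis's optimized VLC-based algorithm; the routing table below is made up.

```python
# Naive longest-prefix-match sketch (illustrative only; the thesis's
# algorithm optimizes the computation-memory tradeoff, which this does not).
# Prefixes are bit strings, e.g. "0110" matches any address beginning with
# those bits; each prefix maps to an outgoing link.

def longest_prefix_match(prefixes: dict, address: str):
    """Return the link of the longest prefix matching `address`, or None."""
    best = None
    for length in range(len(address) + 1):
        candidate = address[:length]
        if candidate in prefixes:
            best = prefixes[candidate]  # a longer match overrides a shorter one
    return best

# Hypothetical routing table: prefix bits -> outgoing link.
table = {"0": "link-A", "01": "link-B", "0110": "link-C"}
```

For the address `"01101010"`, the prefixes `"0"`, `"01"`, and `"0110"` all match, and the longest one wins, so the packet is forwarded on `link-C`.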
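Non-uniform scalar quantizer encoding, as described above, amounts to locating the partition cell that contains the input. A standard baseline is a binary search over the sorted cell boundaries; this generic sketch (with made-up thresholds, not the thesis's machine-tuned algorithm) shows the idea:

```python
import bisect

# Scalar quantizer encoding sketch: map an input value to the index of the
# partition cell containing it. `boundaries` holds the sorted decision
# thresholds separating the cells; with N thresholds there are N + 1 cells,
# indexed 0..N. The thresholds below are invented for illustration.

def sq_encode(x: float, boundaries: list) -> int:
    # bisect_right counts how many thresholds lie at or below x,
    # which is exactly the index of x's cell, in O(log N) comparisons.
    return bisect.bisect_right(boundaries, x)

# Example: a non-uniform quantizer with finer cells near zero.
boundaries = [-2.0, -0.5, 0.0, 0.5, 2.0]
```

For instance, `sq_encode(0.3, boundaries)` returns cell index 3, since three thresholds (-2.0, -0.5, 0.0) lie at or below 0.3.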
Keywords/Search Tags: Algorithm, Memory, Computation, Signal, Processing, Vector quantizer encoding, Input, VLC