
Study On An Encoding Algorithm For Vector Quantization And Its VLSI Architecture Design

Posted on: 2005-11-22  Degree: Doctor  Type: Dissertation
Country: China  Candidate: L J Liu  Full Text: PDF
GTID: 1118360152468306  Subject: Pattern Recognition and Intelligent Systems
Abstract/Summary:
Image compression has become increasingly important with the spread of digital still cameras and digital documentation systems. Vector quantization is an attractive technique in digital image compression because it is simple and effective.

Two distortion measures are commonly used in vector quantization: the absolute error measure and the squared error measure. For many years, most research on vector quantization with the squared error measure has concentrated on fast encoding algorithms in software. The basic squared-error method, the full-search algorithm, is simple to decode and provides high reconstructed image quality, but the squared error between the input vector and every codeword must be computed, so the computational complexity and encoding time grow significantly with codebook size. This limits its practical applications, since an efficient vector quantization system usually requires a large codebook. Vector quantization based on the absolute error measure is mainly used in hardware encoders, because the absolute error is computationally simple and easy to implement in hardware; however, absolute-error algorithms yield lower encoded quality than squared-error algorithms, and computing the absolute errors in parallel consumes many hardware resources.

This thesis proposes a fast encoding algorithm for vector quantization based on the squared error measure, together with its VLSI architecture. First, a codebook is designed using the well-known LBG codebook generation method. Then a new fast squared-error encoding algorithm suitable for hardware implementation is presented, which maintains the accuracy of the conventional full-search algorithm.
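The full-search baseline described above can be sketched in a few lines. This is a minimal software illustration (the thesis targets a Verilog HDL implementation), with an assumed toy codebook; the point is only that every codeword is compared against the input, which is what makes the cost grow with codebook size.

```python
import numpy as np

def full_search_vq(vector, codebook):
    """Full-search VQ encoding: return the index of the codeword with
    minimum squared error to the input vector.

    vector:   1-D array of length k (e.g. a flattened 4x4 image block).
    codebook: 2-D array of shape (N, k).
    Every codeword is examined, so the work grows linearly with N.
    """
    # squared Euclidean distortion from the input to every codeword
    dists = np.sum((codebook - vector) ** 2, axis=1)
    return int(np.argmin(dists))

# toy example: 4-dimensional vectors, a codebook of 3 codewords
codebook = np.array([[0, 0, 0, 0],
                     [10, 10, 10, 10],
                     [100, 100, 100, 100]], dtype=float)
x = np.array([9.0, 11.0, 10.0, 8.0])
print(full_search_vq(x, codebook))  # → 1 (nearest codeword)
```

The fast algorithm developed in the thesis must return exactly this index for every input, only with far fewer distortion computations.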
In addition, several processing techniques are provided: a codeword-deleting rule that removes unmatched codewords, an efficient candidate-codeword search method, and an index space that reduces the memory required during execution. Finally, the algorithm is developed into a VLSI architecture, which is functionally simulated and successfully verified.

This thesis focuses on the following innovative studies. First, a new pyramid data structure is proposed. The structure conforms to the definition of an image pyramid: each datum at a lower level is half the L2-norm of the corresponding data at the next higher level, so all data in the pyramid stay within the gray-level range 0~255. The required memory overhead is reduced, which makes the structure suitable for hardware implementation. On the basis of this data structure, a multiple-inequality condition for the fast codeword-search algorithm is derived: when matching a codeword, if the distortion between the low levels of the input-vector pyramid and the codeword pyramid is greater than or equal to the minimum distortion found so far, the matching of the current codeword is stopped, avoiding the distortion computations for the remaining levels of the two pyramids.

Second, an efficient candidate-codeword search method, a powerful codeword-deleting rule, and an index space are presented to speed up the encoding. Because of the special property of the distortion between the top levels of an input-vector pyramid and a codeword pyramid, the candidate codeword is chosen as the one whose norm is nearest to the norm of the input vector to be encoded. This guarantees that the distance between the top levels of the input-vector pyramid and the selected candidate-codeword pyramid is minimal among all remaining unrejected codewords. Hence the best-matched codeword can be found as quickly as possible, and the encoding time decreases rapidly.
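The pyramid construction and the level-by-level rejection test described above can be sketched as follows. This is a hedged software illustration, not the thesis's exact formulation: it assumes a 2^n x 2^n block with values in 0~255, builds each parent as half the L2-norm of its 2x2 children (which, as the thesis notes, keeps every level within 0~255), and rejects a codeword as soon as the distortion at some level reaches the current minimum.

```python
import numpy as np

def build_pyramid(block):
    """Build the pyramid for a 2^n x 2^n block (values assumed in 0..255).

    Each parent value is half the L2-norm of its 2x2 children, so every
    level stays within the 0..255 gray-level range.
    Returns the levels from the 1x1 top down to the full-resolution block.
    """
    levels = [block.astype(float)]
    while levels[-1].shape[0] > 1:
        b = levels[-1]
        h, w = b.shape
        # group into 2x2 children and take half their L2-norm
        q = b.reshape(h // 2, 2, w // 2, 2)
        levels.append(0.5 * np.sqrt((q ** 2).sum(axis=(1, 3))))
    levels.reverse()  # coarsest (1x1) level first
    return levels

def match_with_elimination(x_pyr, c_pyr, dmin):
    """Level-by-level matching with early rejection (illustrative sketch;
    it assumes the coarse-level distortion lower-bounds the full one,
    which is the role of the thesis's multiple-inequality condition).

    Proceeds from the coarsest level down; as soon as the distortion at
    some level reaches the current minimum dmin, the codeword is rejected.
    Returns the full-resolution distortion, or None when rejected early.
    """
    d = 0.0
    for xl, cl in zip(x_pyr, c_pyr):
        d = float(((xl - cl) ** 2).sum())  # distortion at this level
        if d >= dmin:
            return None  # cannot beat the "so far" minimum: stop early
    return d  # distortion at the finest level = true squared error

# example: a flat 4x4 block has the same value at every pyramid level
pyr = build_pyramid(np.full((4, 4), 10.0))
```

For a flat block of 10s, every level of the pyramid is again 10, and matching a pyramid against itself returns distortion 0.0; matching it against an all-zero block is rejected at the 1x1 top level whenever dmin is small enough, which is exactly the saving the thesis is after.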
Based on the proposed inequality and codeword-search method, once a codeword is found to satisfy the deleting rule, the encoding of the current input vector can be terminated immediately and all the remaining unlikely codeword...
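The norm-nearest candidate selection and a deleting rule of the kind described above can be illustrated with a standard norm-ordered search. This is an assumed illustration, not the thesis's exact rule: it prunes with the reverse triangle inequality |‖x‖ − ‖c‖| ≤ ‖x − c‖, so once the frontier codeword's norm gap squared reaches the best distortion found so far, every remaining codeword can be deleted at once and the search terminates.

```python
import numpy as np

def fast_encode(x, codebook):
    """Norm-ordered codeword search with a deleting rule (sketch).

    Start from the codeword whose L2-norm is nearest to the input's
    (the candidate codeword), then scan outward in norm order. Because
    (|x| - |c|)^2 <= squared distance(x, c), the search stops as soon
    as the nearest remaining norm gap already exceeds the best distortion.
    """
    norms = np.linalg.norm(codebook, axis=1)
    order = np.argsort(norms)          # codeword indices sorted by norm
    sn = norms[order]
    xn = np.linalg.norm(x)
    start = int(np.searchsorted(sn, xn))
    best, dmin = -1, np.inf
    lo, hi = start - 1, start
    while lo >= 0 or hi < len(order):
        # pick the frontier codeword whose norm is nearest to the input's
        take_hi = hi < len(order) and (lo < 0 or sn[hi] - xn <= xn - sn[lo])
        if take_hi:
            pos, hi = hi, hi + 1
        else:
            pos, lo = lo, lo - 1
        # deleting rule: the smallest remaining norm gap already rules
        # out every codeword left on both sides, so terminate the search
        if (sn[pos] - xn) ** 2 >= dmin:
            break
        j = order[pos]
        d = float(((codebook[j] - x) ** 2).sum())
        if d < dmin:
            best, dmin = int(j), d
    return best
```

On the toy codebook from the full-search example, this returns the same index as the exhaustive search while examining fewer codewords; the thesis combines this candidate-selection idea with the pyramid inequality to reject codewords even earlier.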
Keywords/Search Tags: Vector Quantization, Fast Encoding Algorithm, VLSI Architecture, Verilog HDL, Simulation and Verification