
Study On Vector Quantization Codebook Design Algorithm Based On Neural Network

Posted on: 2006-06-02
Degree: Master
Type: Thesis
Country: China
Candidate: Y P Duan
Full Text: PDF
GTID: 2168360155452520
Subject: Signal and Information Processing

Abstract/Summary:
1. Introduction

Vector quantization (VQ) is an efficient lossy compression technique whose prominent virtues are a high compression ratio and a simple decoding process, so it has become one of the important techniques for compression coding. VQ has broad fields of application, such as compression and real-time transmission of satellite and aerial remote-sensing images, video compression for digital television and DVD, compression and storage of medical images, compression and transmission of network-based test data, speech coding, and image and speech recognition. Research on VQ draws on theories and techniques from many academic subjects, and is significant both theoretically and practically for academia, the economy and national defence.

VQ technology is based on Shannon's rate-distortion theory. VQ finds the nearest codeword for each input vector and transmits the corresponding index to the decoder, so in the decoding phase merely a simple table-lookup operation is required. VQ has three key techniques: codebook design, codeword search and codeword index assignment. Of these, designing a codebook with good performance is the most important: it decides the success or failure of the whole VQ quantizer design and is the chief factor determining the quantizer's performance.

This thesis focuses on VQ codebook design. It compares traditional codebook design algorithms with codebook design algorithms based on neural networks, and seeks to enhance the neural-network-based algorithms in order to improve the convergence speed of codebook training and the performance of the final codebook.

2. LBG Algorithm

In 1980, Linde, Buzo and Gray put forward the famous VQ codebook design algorithm, the LBG algorithm. The LBG algorithm is based on k-means: it uses the mean of the input vectors mapped to one cell of the codebook as that cell's new code vector. The LBG algorithm is easy to implement, its physical concept is clear and its theory is rigorous. It is not only very popular in engineering applications, but also serves as a refinement step for other codebook design techniques: because the average distortion of the LBG algorithm is monotonically non-increasing, it can improve any given initial codebook. From this angle, the codebook produced by any other codebook design algorithm can be further optimized by using it as the initial codebook of the LBG algorithm.

The disadvantages of the LBG algorithm are, however, obvious: (1) during every iteration of the optimal-partition step, searching for the codeword nearest to each training vector requires a great deal of memory and computation; (2) the choice of the initial codebook influences the convergence speed of codebook training and the performance of the final codebook; (3) the self-adaptation ability of the codebook is low.

To settle these problems, scholars in many countries have done comprehensive and thorough research and obtained many results. The resulting algorithms may be classified into four kinds: (1) enhanced LBG algorithms, including improved initial-codebook selection and accelerated design; (2) codebook design algorithms based on neural networks, such as self-learning and competitive-learning neural networks; (3) codebook design algorithms based on global optimization techniques, such as stochastic relaxation, simulated annealing, genetic algorithms and tabu search; (4) codebook design algorithms based on fuzzy clustering theory.
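The LBG iteration described in section 2 is compact enough to sketch. The following Python fragment is a minimal illustration, not the thesis's code: the function name, random initializer, stopping threshold and empty-cell handling are illustrative assumptions.

```python
import numpy as np

def lbg(training, codebook_size, eps=1e-4, seed=None):
    """Alternate the optimal partition and the centroid condition until the
    average distortion stops improving (it is monotonically non-increasing)."""
    rng = np.random.default_rng(seed)
    # Initial codebook: random training vectors (any initializer can be used,
    # which is exactly why initialization matters for the final result).
    picks = rng.choice(len(training), codebook_size, replace=False)
    codebook = training[picks].astype(float)
    prev = np.inf
    while True:
        # Optimal partition: map each training vector to its nearest codeword.
        d2 = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        nearest = d2.argmin(axis=1)
        distortion = d2[np.arange(len(training)), nearest].mean()
        if prev - distortion <= eps * max(distortion, 1e-12):
            return codebook, distortion
        prev = distortion
        # Centroid condition: each codeword becomes the mean of its cell.
        for k in range(codebook_size):
            cell = training[nearest == k]
            if len(cell):                # empty cells are simply left in place
                codebook[k] = cell.mean(axis=0)
```

The full pairwise distance matrix makes the cost of the partition step plain: every iteration touches every (training vector, codeword) pair, which is the first disadvantage listed above.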
This thesis studies the codebook design algorithms based on neural networks.

3. The Codebook Design Algorithm Based on Neural Network

A VQ codebook design algorithm based on a neural network exploits the network's learning function: according to the distortion observed during learning, it gradually updates the winning neuron, or the codewords in the winning neuron's neighborhood, or even all codewords to some extent. Moreover, the learning speed (convergence rate) can be controlled through the learning-rate factor. Compared with traditional codebook design algorithms, neural-network-based design therefore has notable traits and virtues. This thesis focuses on two kinds of neural-network-based VQ codebook design algorithms, the learning vector quantization (LVQ) algorithm and the self-organizing feature map (SOFM) algorithm, and improves both.

3.1 LVQ Modified Algorithm

Learning vector quantization is a simple hard-decision algorithm: it updates only the winning neuron (codeword) and continually adjusts the learning rate during training, so that the algorithm converges gradually. In the LVQ algorithm, the general form of the update equation for the winning codeword is

$m_c(t+1) = m_c(t) + \alpha(t)\,(x(t) - m_c(t))$

so $m_c(t+1)$ is a convex combination of $m_c(t)$ and $x(t)$, and as $t$ increases, $m_c(t)$ gradually approaches $x(t)$. But because only the winning neuron is updated, the algorithm depends strongly on the initial codebook and is easily trapped in local minima, so the performance of the final codebook is poor. On the other hand, owing to the randomness of the training vectors, the learning rate $\alpha(t)$ is difficult to adjust. For this problem, the thesis derives, from the necessary and sufficient conditions for convergence of the LVQ algorithm, the optimal learning-rate factor:

$\alpha_c(t) = \dfrac{\alpha_c(t-1)}{1+\alpha_c(t-1)}$

On the other hand, the simulation experiments show that during LVQ training some neurons win many times while others win few times or never, which results in low codeword utilization. To make the neurons' winning frequencies fairer, a controlling factor $q_c(t)$ is introduced into the LVQ implementation: whenever neuron $c$ wins, $q_c(t)$ is increased, which lowers the frequency with which neuron $c$ wins again.
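The modified LVQ pass of section 3.1 can be sketched as follows. The per-codeword recursion $\alpha_c(t) = \alpha_c(t-1)/(1+\alpha_c(t-1))$ comes from the thesis; the additive win-count penalty on the distance is only one plausible realization of the controlling factor $q_c(t)$, whose exact form the abstract does not give.

```python
import numpy as np

def lvq_train(training, codebook, alpha0=0.5, penalty=1e-3):
    """Hard-decision LVQ pass: only the winning codeword moves. Each codeword
    carries its own learning rate, updated by the optimal recursion from the
    thesis, and a win count biases the distance so that frequent winners win
    less often (an assumed realization of the controlling factor q_c(t))."""
    codebook = codebook.astype(float).copy()
    alpha = np.full(len(codebook), alpha0)   # per-codeword rate alpha_c(t)
    wins = np.zeros(len(codebook))           # q_c(t): win counts
    for x in training:
        # Biased nearest-codeword search: penalize neurons that win too often.
        c = (((codebook - x) ** 2).sum(axis=1) + penalty * wins).argmin()
        wins[c] += 1
        # Optimal recursion: alpha_c(t) = alpha_c(t-1) / (1 + alpha_c(t-1)).
        alpha[c] = alpha[c] / (1.0 + alpha[c])
        # Hard decision: move only the winner toward the input vector.
        codebook[c] += alpha[c] * (x - codebook[c])
    return codebook
```

Note that the recursion makes each codeword's rate decay like $1/t$ in its own win count, so each codeword converges to the running mean of the inputs it has won.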
3.2 SOFM Modified Algorithm

SOFM is a more elaborate clustering algorithm, which updates not only the winning neuron but also the neurons in its neighborhood. The thesis summarizes several advantages of the SOFM algorithm for vector quantization: the codebooks it designs outperform those of the LBG algorithm; its training procedure is much easier to control; it can speed up training when the training set is large; its convergence is not sensitive to the initial codebook; and it can come close to global optimization. These properties are confirmed by the simulation results in the thesis.

The SOFM algorithm has two important parameters, the neighborhood function and the learning rate function, both of which the thesis studies.

1. Neighborhood Function. The purpose of the neighborhood function is to ensure that more weight vectors are updated toward each training vector, and that the updated weight vectors are closely related in the spatial topology. If only the winning neuron were updated throughout the learning procedure, a poor initial condition could leave some weight vectors never winning, so empty cells could appear in the final codebook. Although other methods can solve the empty-cell problem, the neighborhood mechanism achieves a better classification of the input pattern space over a wider spatial extent, and thus removes spatial redundancy better and improves the pattern classification. In this way the codebook designed by the SOFM algorithm can approach the globally optimal result.

2. Learning Rate Function. The learning rate factor is one of the key factors influencing the performance and convergence speed of the SOFM algorithm. It is a function of the iteration count, whose role is to moderate the update strength according to the current training state. A high learning rate accelerates training, while a low one stabilizes the learning procedure, so the algorithm uses a high learning rate initially and lowers it as the iterations proceed, making the algorithm tend toward stability. In practice, however, the smaller $\alpha(t)$ is, the slower the convergence and the longer the training time; the two goals conflict, so the learning rate is difficult to adjust. For this situation, the thesis derives the optimal learning-rate iteration for the improved SOFM scheme (a sketch of this update follows the conclusion):

$\alpha(t) = \dfrac{h_{cj}(t-1)\,\alpha(t-1)}{h_{cj}(t)\,\bigl[1 + h_{cj}(t-1)\,\alpha(t-1)\bigr]}$

The simulation experimental results confirm its validity.

4. Conclusion

Through analysis of the algorithms and simulation experiments, this thesis reaches the following conclusions: 1. The LVQ algorithm is easy to implement, but LVQ only updates the...
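As noted in section 3.2, here is a sketch of the improved SOFM update. The abstract does not fix the form of the neighborhood function $h_{cj}(t)$, so a shrinking Gaussian over the neuron index grid is assumed; and since for the winner itself ($h_{cc} = 1$ throughout) the thesis's optimal-rate recursion reduces to $\alpha(t) = \alpha(t-1)/(1+\alpha(t-1))$, this sketch uses that reduced form as its rate schedule.

```python
import numpy as np

def sofm_train(training, codebook, grid, alpha0=0.5, sigma0=2.0):
    """One SOFM training pass: the winner and its topological neighbours are
    all pulled toward the input, with a rate that decays by the recursion
    alpha(t) = alpha(t-1)/(1+alpha(t-1)) (the winner's case, h_cc = 1, of the
    thesis formula).

    grid: (K, d) array of neuron coordinates, e.g. np.arange(K)[:, None]
    for a one-dimensional chain of K neurons."""
    codebook = codebook.astype(float).copy()
    alpha = alpha0
    n = len(training)
    for t, x in enumerate(training):
        c = ((codebook - x) ** 2).sum(axis=1).argmin()      # winning neuron
        sigma = sigma0 * np.exp(-t / n)                     # shrinking width
        # h_cj(t): Gaussian neighbourhood over the index grid (assumed form).
        h = np.exp(-((grid - grid[c]) ** 2).sum(axis=-1) / (2.0 * sigma * sigma))
        alpha = alpha / (1.0 + alpha)                       # optimal-rate recursion
        codebook += (alpha * h)[:, None] * (x - codebook)   # neighbourhood update
    return codebook
```

Shrinking the neighbourhood width over time matches the strategy described above: broad updates early for global ordering, narrow updates late for fine tuning.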
Keywords/Search Tags: Quantization