
The Application Of SOFM And Direct Vector Quantization To LD-CELP Speech Coding Algorithm

Posted on: 2009-07-03
Degree: Master
Type: Thesis
Country: China
Candidate: Q Q Zhao
Full Text: PDF
GTID: 2178360245965566
Subject: Signal and Information Processing
Abstract/Summary:
The ITU-T G.728 speech coding standard, with its low delay and high speech quality, has been widely applied in data communication, but the algorithm is complex and computationally expensive. This research modified the G.728 algorithm to reduce coding complexity, applying and improving a method for reducing codebook search complexity: direct vector quantization.

In the LD-CELP algorithm, each of the 1024 code vectors in the excitation codebook is passed through a cascaded filter, consisting of the synthesis filter and the perceptual weighting filter, and then compared with the normalized target vector; the excitation codeword yielding the least mean-squared error (MSE) is selected. This cascaded filtering accounts for a large share of the computation in the whole coding process. To address this, the idea of direct vector quantization was applied to the LD-CELP algorithm: the filtering operation is removed from the codebook search, and an inverse perceptual weighting filter is used instead when synthesizing speech. This research implemented the combination of direct vector quantization with LD-CELP, and discussed in detail the parameter selection and coefficient updating of the inverse perceptual weighting filter as well as the codebook search operation. The experimental results showed that, in terms of operation speed, direct-vector-quantization LD-CELP is clearly faster than standard LD-CELP, with the advantage growing as sentences become longer; over 81 test sentences it was 4.7 s faster.
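The contrast between the two search strategies can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the codebook and target are random, and the cascaded synthesis/weighting filter is stood in for by a hypothetical lower-triangular impulse-response matrix `H`. The point is only that the standard search filters all 1024 code vectors before the MSE test, while direct vector quantization matches code vectors against the target directly.

```python
import numpy as np

rng = np.random.default_rng(0)

VECTOR_DIM = 5        # G.728 uses 5-sample excitation vectors
CODEBOOK_SIZE = 1024  # 10-bit excitation codebook

codebook = rng.standard_normal((CODEBOOK_SIZE, VECTOR_DIM))

# Stand-in for the cascaded synthesis + perceptual-weighting filter:
# a lower-triangular matrix built from an assumed impulse response h.
h = np.array([1.0, 0.6, 0.3, 0.1, 0.05])
H = np.array([[h[i - j] if i >= j else 0.0 for j in range(VECTOR_DIM)]
              for i in range(VECTOR_DIM)])

def search_filtered(codebook, target, H):
    """Standard LD-CELP search: filter every code vector, then pick min MSE."""
    filtered = codebook @ H.T                       # 1024 filterings per search
    errors = np.sum((filtered - target) ** 2, axis=1)
    return int(np.argmin(errors))

def search_direct(codebook, target):
    """Direct vector quantization: no filtering inside the search loop."""
    errors = np.sum((codebook - target) ** 2, axis=1)
    return int(np.argmin(errors))
```

Dropping the per-vector filtering is what removes the energy and time-reversed-convolution work from the search; the perceptual weighting is then compensated at the decoder by the inverse weighting filter.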
Because the cascaded filter takes part in both the energy computation and the time-reversed convolution, the multiplication and addition counts were analyzed further: the number of multiplications can be reduced by 75% and the number of additions by 77.78%, while subjective auditory quality and speech quality are preserved.

With direct vector quantization the codebook must be retrained, so the application of neural networks to vector-quantization codebook design was studied. The SOFM (self-organizing feature map) network was chosen for codebook design because it is insensitive to the initial codebook, robust against signal noise, and highly adaptive. An SOFM network consists of two layers (an input layer and an output layer) with lateral association among the output nodes. The Kohonen competitive learning algorithm is adopted: the weight vectors of the winning node and its neighboring nodes are updated, a spatial feature map from input vectors to output nodes is formed, and the set of weight vectors constitutes the codebook. Based on an analysis of neural network theory, this research described the selection of the learning rate and the neighborhood function in the speech coding algorithm, and further proposed two methods for improving network performance: first, normalizing the input training vectors and the connection weight vectors; second, decomposing the adaptive weight-adjustment process into two phases, ordering and convergence. The results showed that these methods further improve SOFM performance: the codebook trained by the SOFM network yields speech whose average segmental SNR is 0.73 dB higher on average than that of the LBG algorithm, and codebook retraining takes only 10.8% of the LBG algorithm's time.
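The SOFM training procedure described above can be sketched as below. This is a simplified one-dimensional illustration under assumed parameters (epoch counts, learning rates, neighborhood radii are hypothetical, not the thesis's values), but it shows both proposed improvements: normalizing the training vectors and the weight vectors, and splitting adaptation into an ordering phase (large rate, wide neighborhood) followed by a convergence phase (small rate, winner only).

```python
import numpy as np

def train_sofm_codebook(train_vectors, codebook_size, seed=0):
    """Train a VQ codebook with a 1-D SOFM using Kohonen competitive learning.

    Returns the weight vectors of the output nodes; that weight set is the
    codebook. All vectors are kept unit-norm, so the winning node can be
    found by maximum dot product instead of minimum Euclidean distance.
    """
    rng = np.random.default_rng(seed)
    dim = train_vectors.shape[1]

    def normalize(x):
        norms = np.linalg.norm(x, axis=-1, keepdims=True)
        return x / np.maximum(norms, 1e-12)

    data = normalize(train_vectors)
    weights = normalize(rng.standard_normal((codebook_size, dim)))

    # Two-phase schedule: (epochs, learning rate, neighborhood radius).
    # Ordering roughly sorts the map; convergence fine-tunes each node.
    phases = [(10, 0.5, codebook_size // 4),   # ordering phase
              (30, 0.02, 0)]                   # convergence phase
    for epochs, rate, radius in phases:
        for _ in range(epochs):
            for x in data:
                winner = int(np.argmax(weights @ x))
                lo = max(0, winner - radius)
                hi = min(codebook_size, winner + radius + 1)
                # Update the winner and its neighbors, then renormalize.
                weights[lo:hi] += rate * (x - weights[lo:hi])
                weights[lo:hi] = normalize(weights[lo:hi])
    return weights
```

Normalization makes the dot-product competition equivalent to nearest-neighbor search, and the ordering/convergence split is what lets a large initial neighborhood establish the topological map before the small-rate phase refines individual code vectors.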
Keywords/Search Tags: speech coding, vector quantization, neural network, codebook design