
Speaker Recognition Based On Support Vector Machines

Posted on: 2010-05-23
Degree: Master
Type: Thesis
Country: China
Candidate: C Y Zhou
Full Text: PDF
GTID: 2208360278469505
Subject: Computer application technology
Abstract/Summary:
In the field of speaker recognition, methods based on the Support Vector Machine (SVM) are a research hot spot. Unlike other conventional pattern recognition techniques, the SVM approach has two distinctive characteristics. First, it expresses the inner product of the feature space through a non-linear kernel function. Second, it implements the structural risk minimization principle by constructing an optimal separating hyperplane. These properties make the SVM technique widely applicable.

This thesis investigates the fundamental theory and implementation procedure of speaker recognition. We begin with a thorough review of feature parameters, followed by an investigation of the linear prediction cepstrum coefficient (LPCC) and the mel-frequency cepstrum coefficient (MFCC). Features from LPCC and MFCC are combined into several feature vectors and tested for how accurately they capture speaker-specific characteristics. The thesis also examines the impact of various feature parameters on the recognition rate and on noise robustness.

Because the kernel function is an essential element of SVM theory and the accuracy of feature classification is strongly influenced by the choice of function and its parameters, we review the basic theory of kernel functions. Simulations and analyses of kernel functions such as the polynomial, radial basis, and sigmoid functions are presented, together with the recognition rate and stability under both clean-speech and noisy conditions.

Before SVM training, the size of the sample set is critical to achieving a high recognition rate and good time efficiency, so we propose reducing it. We present a new reduction algorithm, Support Cluster Abstracting (SCA), review its fundamentals, and give its practical steps. Finally, the thesis presents simulations and analyses comparing SCA with other methods: on one hand, linearly separable samples are tested for boundary-description performance; on the other hand, linearly non-separable samples are tested for reduction rate and recognition rate. The SCA parameters determine whether the reduced sample set retains all the support vectors while relieving the burden of SVM training as far as possible. In this thesis the SCA parameters are set experimentally; they include the fan-out coefficient k, the clustering number C, and the approximation degree factor a. The simulation results show that, compared with other reduction algorithms, SCA reaches a higher recognition rate at a higher reduction rate once the coefficients are set, and the experimental results agree with the theoretical prediction. The thesis also compares the capabilities of various speaker recognition models.
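Since the abstract summarizes rather than details these steps, the sketch below is only illustrative: it uses scikit-learn to compare the three kernels mentioned (polynomial, radial basis, sigmoid) on synthetic stand-ins for cepstral feature vectors, and it approximates the idea of sample-set reduction with ordinary k-means clustering. It is not the thesis's SCA algorithm, and the feature data, parameter values (e.g. C_clusters), and variable names are all assumptions introduced for illustration.

```python
# Illustrative sketch (assumptions throughout): kernel comparison and a
# clustering-based sample reduction before SVM training. Synthetic vectors
# stand in for MFCC/LPCC features; k-means is a generic stand-in for the
# thesis's Support Cluster Abstracting (SCA), not the SCA algorithm itself.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic 13-dimensional "cepstral" vectors for 4 hypothetical speakers.
n_per_speaker, dim = 300, 13
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per_speaker, dim))
               for i in range(4)])
y = np.repeat(np.arange(4), n_per_speaker)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# 1) Kernel comparison: polynomial, radial basis (RBF), and sigmoid.
for kernel in ("poly", "rbf", "sigmoid"):
    clf = SVC(kernel=kernel, C=10.0, gamma="scale").fit(X_tr, y_tr)
    print(f"{kernel:8s} accuracy: {clf.score(X_te, y_te):.3f}")

# 2) Sample reduction before training: cluster each speaker's frames and
#    keep only the frame closest to each cluster centre. C_clusters loosely
#    plays the role of the clustering number C mentioned in the abstract.
C_clusters = 20
keep_idx = []
for label in np.unique(y_tr):
    idx = np.where(y_tr == label)[0]
    km = KMeans(n_clusters=C_clusters, n_init=10, random_state=0).fit(X_tr[idx])
    for c in range(C_clusters):
        members = idx[km.labels_ == c]
        d = np.linalg.norm(X_tr[members] - km.cluster_centers_[c], axis=1)
        keep_idx.append(members[np.argmin(d)])
keep_idx = np.array(keep_idx)

clf_small = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr[keep_idx],
                                                         y_tr[keep_idx])
print(f"reduced set: {len(keep_idx)}/{len(X_tr)} samples, "
      f"accuracy: {clf_small.score(X_te, y_te):.3f}")
```

On this toy data the reduced set trains much faster with little loss in accuracy, which mirrors the trade-off the abstract describes between reduction rate and recognition rate; real MFCC/LPCC features extracted from speech would be needed to reproduce the thesis's actual experiments.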
Keywords/Search Tags:support vector machine, kernel function, sample reducing, support cluster abstraction