
The Research Of Kernel Adaptive Filtering Algorithms

Posted on: 2013-02-02    Degree: Master    Type: Thesis
Country: China    Candidate: Q Y Miao    Full Text: PDF
GTID: 2218330371456243    Subject: Circuits and Systems
Abstract/Summary:
The computational power of linear systems is limited. In general, most complex real-world applications involve nonlinear relationships. The kernel method is a powerful tool for extending an algorithm from the linear to the nonlinear case. Recently, the kernel trick has increasingly been applied to the design of nonlinear adaptive filters. However, compared with the far more mature linear algorithms, kernelized nonlinear adaptive filtering algorithms still leave much to be done, for example, in handling different system noises and in improving the convergence rate. In this thesis, we design kernelized nonlinear adaptive filters with these two aspects in mind.

First, concerning more general noise environments, the least-mean mixed-norm (LMMN) algorithm performs well when the system measurement noise follows a distribution that is a linear combination of long-tailed and short-tailed components. Thus, in this thesis, we combine the well-known kernel trick with the LMMN algorithm to obtain the kernel LMMN (KLMMN) algorithm, an adaptive filtering algorithm in a reproducing kernel Hilbert space (RKHS). The stable range of the step size and the expression for the optimal norm-mixing parameter are derived. Steady-state analysis shows that the KLMMN algorithm is locally exponentially stable (LES). To demonstrate the effectiveness and advantages of the proposed algorithm, we apply it to nonlinear system identification and chaotic time-series prediction in the presence of noise composed of a linear combination of Gaussian and Bernoulli distributions, and the results show that the new algorithm achieves a lower steady-state mean square error (MSE).

Second, to increase the convergence rate of kernel adaptive filtering algorithms, we present two kernelized variable step-size algorithms. After a brief convergence analysis, we give a nonlinear channel equalization example which demonstrates that the two algorithms can substantially increase the convergence rate without affecting the steady-state MSE in the RKHS.
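To make the structure of such an update concrete, the following is a minimal Python sketch of a KLMMN-style kernel adaptive filter, assuming a Gaussian kernel, a fixed step size, and the standard mixed-norm cost J = delta*E[e^2]/2 + (1-delta)*E[e^4]/4; the class name, parameter values, and toy data are illustrative assumptions and are not taken from the thesis.

import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel between two input vectors.
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

class KLMMNSketch:
    # Hypothetical KLMMN-style filter: the estimate is a kernel expansion
    # over stored centers, and each new sample adds one center whose
    # coefficient follows the mixed-norm stochastic gradient
    # delta*e + (1 - delta)*e**3.
    def __init__(self, mu=0.2, delta=0.5, sigma=1.0):
        self.mu = mu          # step size (assumed inside the stable range)
        self.delta = delta    # norm-mixing parameter in [0, 1]
        self.sigma = sigma    # kernel width
        self.centers = []     # stored input samples
        self.coeffs = []      # expansion coefficients in the RKHS

    def predict(self, x):
        # Filter output: kernel expansion over the stored centers.
        return sum(a * gaussian_kernel(c, x, self.sigma)
                   for a, c in zip(self.coeffs, self.centers))

    def update(self, x, d):
        # One stochastic-gradient step on the mixed-norm cost.
        e = d - self.predict(x)
        self.centers.append(np.asarray(x, dtype=float))
        self.coeffs.append(self.mu * (self.delta * e
                                      + (1.0 - self.delta) * e ** 3))
        return e

# Toy usage: identify a static nonlinearity under additive noise.
rng = np.random.default_rng(0)
filt = KLMMNSketch(mu=0.2, delta=0.5, sigma=0.5)
for _ in range(500):
    x = rng.uniform(-1.0, 1.0, size=2)
    d = np.tanh(x[0] + 0.5 * x[1]) + 0.05 * rng.normal()
    filt.update(x, d)

A variable step-size variant of the kind described above would replace the fixed mu with a rule that adapts it from the instantaneous error, taking larger steps early in adaptation and smaller ones near steady state; the specific rules proposed in the thesis are not reproduced here.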
Keywords/Search Tags: Kernel method, reproducing kernel Hilbert space, least-mean mixed-norm, locally exponentially stable, variable step size, incremental meta-learning