
The Research Of Adaptive Metalearning Algorithms

Posted on: 2014-01-20
Degree: Master
Type: Thesis
Country: China
Candidate: Y Ye
Full Text: PDF
GTID: 2248330395976068
Subject: Circuits and Systems
Abstract/Summary:
Conventional adaptive filtering algorithms are designed for linear learning systems; when the mapping between input and output is highly nonlinear, their performance degrades. Many real-world problems, however, are nonlinear. The kernel trick is a powerful tool for such problems and has recently seen increasing use in nonlinear adaptive filtering. Compared with the mature linear algorithms, kernelized adaptive filtering algorithms still leave much room for improvement, for example in convergence rate and in steady-state mean square error (MSE). On the other hand, metalearning is an important topic in machine learning that studies how a learning system can become more efficient and powerful by exploiting past experience; in other words, metalearning means learning to learn. Metalearning can use experience to adjust the free parameters of a base learning algorithm and thereby improve the performance of the learning system. This thesis therefore investigates adaptive filtering algorithms based on metalearning and the kernel trick from three aspects.

First, the incremental Delta-Bar-Delta (IDBD) algorithm can be regarded as a metalearning algorithm and is known to achieve a faster convergence rate and lower MSE than the LMS algorithm. We combine the IDBD algorithm with the kernel trick to derive a new adaptive metalearning algorithm in a reproducing kernel Hilbert space (RKHS), named the kernel incremental metalearning (KIMEL) algorithm. The stable range of the step-size and the steady-state performance are also derived.
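The per-weight variable step-sizes that IDBD learns by meta-gradient descent can be sketched as follows. This is a minimal illustration of the standard (linear) IDBD update of Sutton, not the thesis's kernelized KIMEL variant; the meta step-size `theta` and the initial step-size `exp(beta0)` are illustrative choices.

```python
import numpy as np

def idbd(X, d, theta=0.01, beta0=np.log(0.05)):
    """Incremental Delta-Bar-Delta (IDBD): LMS with a per-weight
    adaptive step-size, itself learned by meta-gradient descent.
    theta is the meta step-size; exp(beta0) is the initial step-size."""
    n, m = X.shape
    w = np.zeros(m)           # filter weights
    beta = np.full(m, beta0)  # log step-sizes, one per weight
    h = np.zeros(m)           # trace of recent weight updates
    errors = np.empty(n)
    for t in range(n):
        x = X[t]
        e = d[t] - w @ x               # prediction error
        errors[t] = e
        beta += theta * e * x * h      # metalearning: adapt step-sizes
        alpha = np.exp(beta)           # per-weight step-sizes
        w += alpha * e * x             # LMS-style weight update
        h = h * np.maximum(0.0, 1.0 - alpha * x * x) + alpha * e * x
    return w, errors
```

Because each weight's step-size grows when successive gradient components agree in sign and shrinks when they alternate, IDBD tracks relevant inputs faster than LMS with a single fixed step-size.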
Applied to nonlinear channel equalization and image quality assessment, the proposed algorithm is shown to converge faster than competing algorithms.

Second, since complex signal processing is gaining popularity owing to its broad applicability, this thesis extends the KIMEL algorithm to complex RKHS and obtains two complex-valued kernel metalearning algorithms, named CKIMEL1 and CKIMEL2. The proposed algorithms are applied successfully to nonlinear channel equalization and nonlinear channel identification, and the results demonstrate that the CKIMEL algorithms converge faster than the corresponding CKLMS algorithm and that CKIMEL1 is superior to the competing algorithms.

Finally, considering the particularity of some real systems, we combine sparse regularization with the IDBD algorithm to derive a new sparse metalearning algorithm, named the l0-IDBD algorithm. In the l0-IDBD algorithm, every weight has its own variable step-size and controlling factor, which accelerates the convergence of the near-zero coefficients in the impulse response of a sparse system and reduces the steady-state misalignment. Four simulations demonstrate that the l0-IDBD algorithm improves the tracking rate and the steady-state behavior compared with the l0-LMS algorithm.
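The sparse-regularization idea can be illustrated with the l0-LMS baseline that the thesis compares against: an approximate l0-norm penalty adds a "zero attractor" that pulls near-zero coefficients toward zero, speeding convergence on sparse impulse responses. This sketch shows only that baseline mechanism, not the thesis's l0-IDBD (which additionally gives each weight its own variable step-size); `mu`, `kappa`, and `beta` are illustrative parameter choices.

```python
import numpy as np

def l0_lms(X, d, mu=0.05, kappa=1e-4, beta=5.0):
    """Zero-attracting (l0-norm-regularized) LMS sketch.
    The attractor term, active only for coefficients within 1/beta
    of zero, nudges small weights toward zero; kappa weights the
    regularization, beta controls the attraction zone."""
    n, m = X.shape
    w = np.zeros(m)
    for t in range(n):
        x = X[t]
        e = d[t] - w @ x
        # first-order approximation of the gradient of the
        # l0 penalty sum(1 - exp(-beta*|w_i|)):
        g = np.where(np.abs(w) <= 1.0 / beta,
                     beta * np.sign(w) - beta**2 * w, 0.0)
        w += mu * e * x - kappa * g  # LMS step plus zero attraction
    return w
```

Coefficients larger than 1/beta in magnitude see no attraction, so the true nonzero taps of a sparse system are left unbiased while the many near-zero taps are driven to zero quickly.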
Keywords/Search Tags: kernel trick, reproducing kernel Hilbert space, variable step-size, incremental metalearning, complex signal processing, sparse systems