
Robust Adaptive Filtering Algorithms And Their Sparsification Methods

Posted on: 2022-03-09  Degree: Master  Type: Thesis
Country: China  Candidate: T Zhang  Full Text: PDF
GTID: 2518306530499864  Subject: Signal and Information Processing
Abstract/Summary:
As a classic class of algorithms in the signal processing field, adaptive filters have been widely used in practical settings such as system identification, echo cancellation, and channel equalization. The filter coefficients are updated iteratively by minimizing (or maximizing) a cost function, thereby learning the unknown system model effectively, where the cost function is a statistical similarity measure between the data-generating system and the learning model. Among existing statistical similarity measures, those based on second-order statistics are widely used under the linear model with a Gaussian assumption. However, filtering algorithms that rely only on second-order similarity metrics often exhibit poor prediction performance on non-Gaussian data. To reduce the influence of outliers caused by non-Gaussian noise, information theoretic learning (ITL) provides an effective approach from the perspective of probability models. In addition, to further improve the ability of linear filtering algorithms to deal with nonlinear problems, the kernel learning mechanism has been introduced into adaptive filtering, and the resulting kernel adaptive filtering theory has produced a class of kernel adaptive filters (KAFs) that solve nonlinear problems in reproducing kernel Hilbert space. However, algorithms based on the kernel learning mechanism suffer from high memory consumption caused by network growth during the adaptive filtering stage, which restricts the speed of online prediction. Current adaptive filtering algorithms thus find it difficult to address numerical instability, prediction accuracy, and computational efficiency simultaneously. Because the correntropy in ITL contains higher-order statistics of the data, it can be used to design statistical measures that are robust, highly predictive, and informative. At the same time, to reduce the training and testing time of robust KAFs, random Fourier feature mapping and improved Nyström methods are used to design their sparsification algorithms. The main research contributions of this thesis are as follows.

(1) A new robust minimum kernel risk-sensitive mean p-power error (MKRSP) algorithm based on stochastic gradient descent is proposed to improve the filtering accuracy and robustness of traditional linear algorithms against impulsive noise. The MKRSP algorithm reduces to the minimum kernel risk-sensitive loss (MKRSL) algorithm when p = 2, and its filtering accuracy can be further improved by choosing an appropriate value of p. In addition, to further enhance the algorithm's ability to address nonlinear problems, a random Fourier features MKRSP (RFFMKRSP) algorithm is proposed. Simulation results show the accuracy advantage of the MKRSP and RFFMKRSP algorithms in different mixed-noise environments.
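To make the random Fourier features step behind the RFFMKRSP algorithm concrete, the following minimal Python sketch shows the standard RFF approximation of a Gaussian kernel; the feature dimension, kernel width, and data are hypothetical assumptions, and only the mapping itself is illustrated, not the full RFFMKRSP update.

```python
# Minimal sketch (not the thesis implementation) of the random Fourier features
# (RFF) mapping: the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
# is approximated by an explicit D-dimensional map z(.), so a nonlinear filter
# can be trained with linear-in-the-features updates.
import numpy as np

def rff_map(X, omega, b):
    """Map the rows of X (n x d) to D random Fourier features."""
    D = omega.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ omega + b)

rng = np.random.default_rng(0)
d, D, sigma = 3, 500, 1.0                          # input dim, feature dim, kernel width (illustrative)
omega = rng.normal(0.0, 1.0 / sigma, size=(d, D))  # frequencies drawn from the Gaussian kernel's spectral density
b = rng.uniform(0.0, 2.0 * np.pi, size=D)          # random phases

x = rng.normal(size=(1, d))
y = rng.normal(size=(1, d))
exact = np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))
approx = (rff_map(x, omega, b) @ rff_map(y, omega, b).T)[0, 0]
print(exact, approx)  # the two values should agree closely for large D
```

An MKRSP-type stochastic gradient update can then be run directly on the D-dimensional features z(x) instead of on a growing kernel expansion, which is what gives the RFF-based variant a fixed per-sample cost.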
(2) A novel Nyström MKRSL (Nys MKRSL) algorithm is proposed to obtain a low-dimensional fixed structure. The Nys MKRSL algorithm uses the Nyström method to approximate the kernel matrix, which accelerates the training of the kernel method. To further improve the approximation accuracy, another algorithm, named Nys MKRSL-KM, is designed by using k-means sampling within Nys MKRSL, and its performance is analyzed theoretically. Finally, Monte Carlo simulations verify the theoretical analysis and confirm the advantages of the proposed Nys MKRSL-KM algorithm in terms of convergence speed and steady-state error.

(3) The kernel recursive generalized maximum correntropy (KRGMC) algorithm is ill-suited to real-time prediction because its growing network structure leads to excessive memory demand, a heavy computational load, and a long training process. Therefore, a probability density rank-based quantization (PRQ) sampling method, used within the Nyström method, is applied to the KRGMC algorithm, yielding a new, efficient, fixed-dimensional Nys KRGMC-PRQ algorithm. Simulation results verify that the proposed Nys KRGMC-PRQ algorithm not only has a low computational cost but also comes close to the KRGMC algorithm in terms of steady-state mean square error. Compared with the kernel recursive maximum correntropy algorithm and its sparsification method, the Nys KRGMC-PRQ algorithm improves the computation speed, robustness, and prediction performance of traditional KAFs in the adaptive learning stage.
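The following minimal Python sketch illustrates the Nyström feature construction that contributions (2) and (3) build on, using k-means centers as landmarks as in the Nys MKRSL-KM variant; the data, kernel width, and number of landmarks are hypothetical assumptions, and the PRQ sampling rule of contribution (3) is not reproduced here.

```python
# Minimal sketch of a Nystroem feature map with k-means landmarks: m landmark
# points define a fixed m-dimensional representation, so the kernel filter keeps
# a constant-size network instead of growing with every training sample.
import numpy as np
from sklearn.cluster import KMeans

def gaussian_kernel(A, B, sigma=1.0):
    """Gaussian kernel matrix between the rows of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))   # training inputs (illustrative data)
m = 50                           # number of landmarks, i.e., the fixed network size

landmarks = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X).cluster_centers_
K_mm = gaussian_kernel(landmarks, landmarks)
K_nm = gaussian_kernel(X, landmarks)

# Symmetric inverse square root of K_mm gives the explicit Nystroem feature map.
vals, vecs = np.linalg.eigh(K_mm)
inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12))) @ vecs.T
Z = K_nm @ inv_sqrt              # n x m features; Z @ Z.T approximates the full kernel matrix
```

Running a recursive correntropy-type update on these m-dimensional features keeps the network size, and hence the memory and per-sample computation, fixed during online learning; swapping the k-means landmark selection for another sampling rule (such as PRQ) changes only how the landmark rows are chosen.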
Keywords/Search Tags: Adaptive filters, robustness, random Fourier features, Nyström, sampling method