
On The Robustness And Sparsification Of Adaptive Filtering Algorithms

Posted on: 2020-02-22
Degree: Master
Type: Thesis
Country: China
Candidate: W Y Wang
Full Text: PDF
GTID: 2428330599957025
Subject: Signal and Information Processing
Abstract/Summary:
Unlike a conventional filter, the adaptive filter (AF) adjusts its parameters according to the statistical characteristics of the signal in order to achieve optimal filtering performance. An AF can therefore perform noise cancellation or signal separation under a chosen error criterion even when the noise and the desired signal occupy similar frequency bands. As a newer class of adaptive filter, the kernel adaptive filter (KAF) provides an efficient solution for nonlinear classification and regression by mapping the original input space into a high-dimensional reproducing kernel Hilbert space (RKHS) via kernel methods. In addition, the recently proposed extreme learning machine (ELM) can also perform classification and regression by training a single- or multiple-hidden-layer feedforward network. In ELM theory, the hidden-layer parameters are randomly initialized and then kept fixed during training; only the weights between the hidden layer and the output layer are adjusted to reduce the estimation error. Even so, ELM retains the universal approximation capability of a single-hidden-layer feedforward network.

Current research on AF develops along three main lines: the error criterion, the optimization method, and the filtering structure. Commonly used error criteria include the minimum mean square error (MMSE), the mean absolute error (MAE), and the maximum correntropy criterion (MCC). MMSE is simple and widely used, and achieves optimal estimation under Gaussian noise; MAE and MCC offer a degree of robustness against non-Gaussian noise. Optimization methods are numerous, including gradient descent, the momentum method, and adaptive moment estimation (Adam), and the method can be chosen to suit the application. Filtering structures divide into feedforward and feedback types; a feedback structure exploits historical information to adjust the filter parameters, thereby improving filtering performance. For KAF, sparsification is required because the filtering network grows linearly as filtering proceeds, which increases the computational and storage burden. Common sparsification methods include setting an appropriate threshold to discard redundant data (sample sparsification) and fixing the size of the filtering network (structure sparsification).

Building on these research directions, this thesis focuses on improving the robustness of the algorithms, accelerating convergence, and controlling the size of the filtering network.

(1) Improving robustness from the perspective of the error criterion. The online sequential extreme learning machine (OS-ELM) performs well in convergence speed and filtering accuracy under Gaussian noise, but its robustness is not guaranteed under non-Gaussian noise. MCC captures higher-order statistical information of the error and can suppress the influence of outliers on filtering performance. Applying MCC to OS-ELM and combining it with a fixed-point iteration yields the online sequential extreme learning machine based on the maximum correntropy criterion (OS-ELM-MCC), which is markedly more robust against non-Gaussian noise such as α-stable noise. Furthermore, the constrained online sequential extreme learning machine based on the maximum correntropy criterion (COS-ELM-MCC) is proposed; it preserves this robustness while generalizing better than OS-ELM-MCC.

(2) Improving from the perspectives of both the optimization method and the error criterion. The adaptive moment estimation algorithm based on the maximum correntropy criterion (AdamMCC) is proposed with both aspects in mind. Adam serves as the update method for the filter parameters, adapting each weight component through first- and second-moment estimates of the gradients, which accelerates convergence. Combined with MCC, AdamMCC suppresses noise outliers and remains robust, especially under non-Gaussian noise.

(3) Improving from the perspective of sparsification. Although sample sparsification can limit the size of the KAF network, the resulting network size remains uncertain, and extra computation is needed to fix it with this approach. Using structure sparsification instead, the Nyström method selects a subset of the sample data to form a fixed-size column vector that approximates the kernel vector built from all sample data. Applying the Nyström method to the kernel least mean square algorithm yields the kernel least mean square algorithm based on the Nyström method (NysKLMS).
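To make the robustness mechanism of MCC concrete, the following sketch (not the thesis's own implementation; the filter length, step size, kernel bandwidth, and outlier model are illustrative assumptions) shows an LMS-style update weighted by the Gaussian correntropy kernel: a large outlier error receives an exponentially small weight, so impulsive noise barely moves the coefficients.

```python
import numpy as np

def mcc_lms_step(w, x, d, mu=0.1, sigma=1.0):
    """One LMS-style update weighted by the Gaussian correntropy kernel:
    an outlier error gets an exponentially small weight, so it barely
    moves the filter coefficients."""
    e = d - w @ x
    g = np.exp(-e ** 2 / (2 * sigma ** 2))   # correntropy weight in (0, 1]
    return w + mu * g * e * x

rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.8])          # unknown system to identify
w = np.zeros(3)
for _ in range(2000):
    x = rng.standard_normal(3)
    noise = 0.01 * rng.standard_normal()
    if rng.random() < 0.05:                  # 5% impulsive outliers
        noise += 50.0 * rng.standard_normal()
    w = mcc_lms_step(w, x, w_true @ x + noise)
```

With a plain MMSE/LMS update (`g = 1`), the same 5% impulsive samples would repeatedly throw the estimate off; here they are effectively ignored because `g` is vanishingly small when `|e|` is large.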
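The OS-ELM structure underlying contribution (1) can be sketched as follows. This is a generic OS-ELM with a recursive least-squares update, not the thesis's MCC variant; the hidden-layer size, toy target function, and ridge term are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_hidden = 40
W = rng.standard_normal((1, n_hidden))       # random input weights (fixed)
b = rng.standard_normal(n_hidden)            # random biases (fixed)

def hidden(X):
    """ELM hidden layer: random projection followed by tanh, never trained."""
    return np.tanh(X @ W + b)

# Initialization phase: ridge least squares on a first batch.
X0 = rng.uniform(-2, 2, (100, 1))
y0 = np.sin(X0).ravel()
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-3 * np.eye(n_hidden))
beta = P @ H0.T @ y0                         # output weights

# Sequential phase: recursive least-squares update per new sample,
# so only beta (hidden-to-output weights) changes online.
for _ in range(500):
    x = rng.uniform(-2, 2, (1, 1))
    y = np.sin(x).ravel()
    h = hidden(x)                            # shape (1, n_hidden)
    P = P - (P @ h.T @ h @ P) / (1.0 + h @ P @ h.T)
    beta = beta + (P @ h.T).ravel() * (y - h @ beta)

Xg = np.linspace(-2, 2, 200)[:, None]
mse = np.mean((hidden(Xg) @ beta - np.sin(Xg).ravel()) ** 2)
```

The MCC-based variants in the thesis replace this squared-error recursion with a correntropy-weighted, fixed-point-iterated update, which is what restores robustness under non-Gaussian noise.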
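Contribution (2) combines the Adam optimizer with the MCC loss. The sketch below applies that combination to a plain linear filter; it is a minimal illustration under assumed hyperparameters (step size, kernel bandwidth, noise model), not the thesis's exact AdamMCC algorithm.

```python
import numpy as np

def adam_mcc(X, d, mu=0.005, sigma=1.0, b1=0.9, b2=0.999, eps=1e-8):
    """Linear filter trained with Adam on the negative-correntropy loss:
    the correntropy factor rejects outliers, while Adam's per-coordinate
    moment estimates adapt the effective step for each weight."""
    w = np.zeros(X.shape[1])
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for t, (x, dt) in enumerate(zip(X, d), start=1):
        e = dt - w @ x
        g = -np.exp(-e ** 2 / (2 * sigma ** 2)) * e * x   # loss gradient
        m = b1 * m + (1 - b1) * g                         # 1st-moment EMA
        v = b2 * v + (1 - b2) * g ** 2                    # 2nd-moment EMA
        m_hat = m / (1 - b1 ** t)                         # bias correction
        v_hat = v / (1 - b2 ** t)
        w -= mu * m_hat / (np.sqrt(v_hat) + eps)          # Adam step
    return w

rng = np.random.default_rng(3)
w_true = np.array([0.5, -0.3, 0.8])
X = rng.standard_normal((5000, 3))
noise = 0.01 * rng.standard_normal(5000)
mask = rng.random(5000) < 0.05                            # impulsive bursts
noise[mask] += 50.0 * rng.standard_normal(mask.sum())
w_est = adam_mcc(X, X @ w_true + noise)
```

Note how the two ingredients separate cleanly: the correntropy factor shapes the gradient, and Adam only decides how that gradient is applied, so either piece can be swapped independently.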
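Finally, the Nyström idea behind NysKLMS in contribution (3) can be sketched as an ordinary LMS run in a fixed-size approximate feature space. The eigendecomposition-based feature map, the toy 1-D target, and all hyperparameters below are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def gauss_kernel(A, B, gamma):
    """Gaussian kernel matrix between row-sample arrays A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class NysKLMS:
    """KLMS in a fixed-size Nystrom feature space: the kernel vector over
    all past samples is approximated by features built from m centers,
    so the network size never grows during filtering."""
    def __init__(self, centers, gamma=5.0, mu=0.5, jitter=1e-10):
        self.Z, self.gamma, self.mu = centers, gamma, mu
        K = gauss_kernel(centers, centers, gamma)
        vals, vecs = np.linalg.eigh(K + jitter * np.eye(len(centers)))
        self.T = vecs / np.sqrt(vals)          # maps k(Z, x) -> features
        self.w = np.zeros(len(centers))        # fixed-size weight vector

    def features(self, x):
        return self.T.T @ gauss_kernel(self.Z, x[None, :], self.gamma).ravel()

    def step(self, x, d):
        phi = self.features(x)
        e = d - self.w @ phi                   # prediction error
        self.w += self.mu * e * phi            # ordinary LMS in feature space
        return e

rng = np.random.default_rng(2)
model = NysKLMS(centers=np.linspace(-2, 2, 20)[:, None])
errs = []
for _ in range(4000):
    x = rng.uniform(-2, 2, 1)
    d = np.sin(3 * x[0]) + 0.01 * rng.standard_normal()
    errs.append(abs(model.step(x, d)))
early, late = np.mean(errs[:200]), np.mean(errs[-200:])
```

Unlike sample-sparsified KLMS, the weight vector here has a size fixed in advance (the number of centers), which is the structural property the NysKLMS algorithm exploits.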
Keywords/Search Tags:Adaptive filtering, extreme learning machine, error criterion, optimization method, sparsification