
Multi-partition Relaxed Alternating Direction Method Of Multipliers For Regularized Extreme Learning Machines

Posted on: 2021-01-10    Degree: Master    Type: Thesis
Country: China    Candidate: L J Zhang    Full Text: PDF
GTID: 2428330605951252    Subject: Control Engineering
Abstract/Summary:
Extreme learning machines (ELMs) have been favoured by researchers for their fast learning speed. In the big-data setting, however, they still suffer from an overly heavy computational cost. The fast learning speed of ELMs comes from the randomly generated hidden nodes and the analytical calculation of the output weights in the training of single-hidden-layer feedforward neural networks (SLFNs). The training of an SLFN can therefore be reduced to a least-squares problem, whose solution can be expressed analytically by the matrix-inverse method. As the dimension and volume of the training data become very large, however, inverting the resulting high-order matrix becomes too costly, so it is necessary to study fast and efficient regularized extreme learning machine (RELM) algorithms.

The alternating direction method of multipliers (ADMM) is an effective way to handle large-scale data processing. Through model partitioning, the ADMM decomposes an optimization problem into relatively small sub-problems that can be executed in parallel, and it uses Gauss-Seidel-type iterations to gradually approach the solution of the global problem. To reduce the computational cost, the ADMM can be applied to the least-squares problem in ELM training.

The main contributions of this thesis are summarized as follows.

1. An ADMM-based regularized ELM is proposed. First, a method for solving the least-squares problem based on a multi-partition ADMM is proposed. A necessary and sufficient condition for the convergence of the multi-partition ADMM is established, and the linear convergence ratio of the algorithm is derived. Then, through experiments on real-world benchmark datasets, the relationship between the convergence ratio of the multi-partition ADMM and the number of partitioned blocks is obtained, and the convergence performance is compared with that of the steepest descent method. The results show that the proposed multi-partition ADMM converges much faster than the steepest descent method.

2. A multi-partition relaxed ADMM is proposed. First, by introducing a novel relaxation step into the iterations of the multi-partition ADMM, the multi-partition relaxed ADMM is studied in this thesis. At the cost of one additional parameter, the relaxation factor α, a convergence analysis and a parameter-selection method are given. Then, in experiments on real-world benchmark datasets, the relationship between the minimum convergence rate and the number of blocks is computed and compared with the un-relaxed ADMM, which highlights the necessity of introducing the relaxation. The results show that the proposed N- and N/2-equipartition relaxed ADMM converges much faster than both the un-relaxed ADMM and the steepest descent method. Finally, through matrix transformations and programming techniques, two scalar-wise implementations of the multi-partition relaxed ADMM in the N- and N/2-equipartition cases are proposed. These implementations are compared with the matrix-inversion-based method in terms of computational efficiency and acceleration ratio through a GPU acceleration experiment in the MATLAB environment. The shorter computation time and larger GPU acceleration ratios show that the proposed N- and N/2-equipartition ADMM-based RELMs have low computational complexity and high parallelism.
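To make the least-squares formulation concrete, the following sketch shows the standard RELM closed form β = (HᵀH + λI)⁻¹HᵀT. It is written in Python/NumPy for illustration (the thesis itself reports MATLAB experiments, which are not reproduced here), and the function and parameter names are illustrative rather than taken from the thesis.

```python
import numpy as np

def train_relm(X, T, n_hidden=100, lam=1e-3, seed=0):
    """Regularized ELM training (illustrative names, not from the thesis).

    Hidden nodes are randomly generated; the output weights beta solve
    min_beta 0.5*||H beta - T||^2 + 0.5*lam*||beta||^2 analytically.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    # Analytic solution beta = (H^T H + lam*I)^{-1} H^T T.  The matrix being
    # solved grows with the number of hidden nodes, which is what makes this
    # step costly on large-scale data and motivates the ADMM approach.
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return W, b, beta
```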
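The sketch below illustrates the multi-partition relaxed ADMM idea under common textbook assumptions: a consensus form with a row-wise split of H and the usual relaxation step x̂ = αx + (1−α)z. The thesis' exact partitioning scheme, update order, and scalar-wise implementation may differ; this is a minimal generic sketch, not the author's method.

```python
import numpy as np

def relaxed_admm_relm(H, T, n_blocks=4, lam=1e-3, rho=1.0, alpha=1.6, n_iter=200):
    """Relaxed consensus ADMM for min 0.5*||H b - T||^2 + 0.5*lam*||b||^2.

    H and the target vector T are split row-wise into n_blocks sub-problems;
    each block update is independent and could run in parallel (e.g. on a
    GPU).  alpha is the relaxation factor; alpha = 1 recovers the un-relaxed
    ADMM.  Illustrative sketch, not the thesis' implementation.
    """
    Hs, Ts = np.array_split(H, n_blocks), np.array_split(T, n_blocks)
    d = H.shape[1]
    z = np.zeros(d)
    us = [np.zeros(d) for _ in range(n_blocks)]
    # Each block's (small) normal matrix is factorized once, outside the loop.
    Ls = [np.linalg.cholesky(Hi.T @ Hi + rho * np.eye(d)) for Hi in Hs]
    cs = [Hi.T @ Ti for Hi, Ti in zip(Hs, Ts)]
    for _ in range(n_iter):
        x_hats = []
        for i in range(n_blocks):                        # parallelizable loop
            rhs = cs[i] + rho * (z - us[i])
            y = np.linalg.solve(Ls[i], rhs)              # forward substitution
            x_i = np.linalg.solve(Ls[i].T, y)            # backward substitution
            x_hats.append(alpha * x_i + (1.0 - alpha) * z)   # relaxation step
        # z-update: closed form for the ridge penalty (consensus gather step).
        z = rho * sum(xh + u for xh, u in zip(x_hats, us)) / (lam + n_blocks * rho)
        us = [u + xh - z for u, xh in zip(us, x_hats)]   # dual (multiplier) update
    return z
```

In the wider ADMM literature, over-relaxation with α between roughly 1.5 and 1.8 is a common choice; the thesis derives its own selection rule for the relaxation factor, which is not reproduced here.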
Keywords/Search Tags:machine learning, extreme learning machine, alternating direction method of multipliers, parallel optimization, GPU acceleration ratio