
Research And Application On Several Sparse Regularization Problems

Posted on: 2023-04-25
Degree: Master
Type: Thesis
Country: China
Candidate: L J Zhou
GTID: 2558307097477474
Subject: Mathematics
Abstract/Summary:
High-dimensional data collected in real life tends to exhibit low-rank and sparse characteristics, and exploiting these characteristics to improve modeling capability is worth studying. Regularization methods based on the $l_p$ ($0 \le p \le 1$) norm have been widely applied on account of the sparsity they induce, and subspace learning models have also attracted much attention. This paper studies sparse and low-rank representation learning problems in machine learning. The major contributions are as follows:

1. The extreme learning machine (ELM) is a network model that initializes its hidden layer randomly and can be trained quickly. To improve the classification performance of ELM, this paper proposes an ELM model with combined $l_2$ and $l_{0.5}$ regularization ($l_2$-$l_{0.5}$-ELM). An iterative fixed-point contraction-mapping algorithm is applied to solve the $l_2$-$l_{0.5}$-ELM model (a sketch of such an iteration is given after this abstract), and the convergence and sparsity of the proposed method are analyzed under reasonable assumptions. The performance of the proposed $l_2$-$l_{0.5}$-ELM method is compared with BP, SVM, ELM, $l_{0.5}$-ELM, $l_1$-ELM, $l_2$-ELM and $l_2$-$l_1$-ELM, and the results show that the prediction accuracy, sparsity and stability of $l_2$-$l_{0.5}$-ELM are slightly better than those of the other seven models.

2. Existing computational studies have found that the singular value spectrum of some high-dimensional data exhibits a "long tail" effect: a small number of singular values are large while most of them are small. The classical subspace model PCA can effectively produce a compact low-dimensional representation of the original data, but this representation tends to discard the "long tail" of the singular value spectrum, which contains useful information. Under the assumption that the original data can be decomposed into a low-rank subspace part and a high-dimensional part, some researchers proposed a hybrid subspace learning model (HSL), which uses $l_1$ norm regularization to learn the features retained in the original space. We propose a new hybrid subspace learning model, IHSL, which learns a sparser high-dimensional part with $l_{0.5}$ norm regularization, and apply a block-ADMM algorithm to solve the subproblems (a simplified sketch of the decomposition also appears below). In the experiments, the features extracted by IHSL are used for downstream clustering and classification tasks to verify the effect of the proposed model. The results indicate that the clustering and classification performance of IHSL is comparable with that of the other four methods (PCA, Sparse PCA, Robust PCA and HSL).
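As an illustration of contribution 1, below is a minimal sketch of solving the $l_2$-$l_{0.5}$-ELM output weights with an iterative half-thresholding fixed-point scheme. It assumes the half-thresholding operator of Xu et al. (2012) for $l_{1/2}$ regularization, a sigmoid hidden layer, and illustrative hyperparameter names (`lam1`, `lam2`, `n_hidden`); the thesis's exact update rule may differ.

```python
import numpy as np

def half_threshold(t, lam):
    # Component-wise half-thresholding operator for l_{1/2} regularization
    # (the form given by Xu et al., 2012): entries below the threshold are
    # zeroed, the rest are shrunk toward zero.
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    mask = np.abs(t) > thresh
    phi = np.arccos((lam / 8.0) * (np.abs(t[mask]) / 3.0) ** (-1.5))
    out[mask] = (2.0 / 3.0) * t[mask] * (1.0 + np.cos((2.0 / 3.0) * (np.pi - phi)))
    return out

def train_l2_l05_elm(X, T, n_hidden=100, lam1=1e-3, lam2=1e-3,
                     n_iter=500, seed=0):
    # Random (fixed) hidden layer, as in a standard ELM.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden outputs
    # Step size chosen so the gradient step is a contraction mapping.
    mu = 1.0 / (np.linalg.norm(H, 2) ** 2 + lam1)
    beta = np.zeros((n_hidden, T.shape[1]))
    for _ in range(n_iter):
        grad = H.T @ (H @ beta - T) + lam1 * beta  # gradient of smooth part
        beta = half_threshold(beta - mu * grad, mu * lam2)
    return W, b, beta
```

With a one-hot target matrix `T`, the $l_2$ term keeps the gradient step contractive while the half-thresholding step drives many rows of `beta` to exactly zero, which is the source of the sparsity discussed above.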
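For contribution 2, the block-ADMM details of IHSL are not given in this abstract, so the following is only a simplified alternating-minimization sketch of the underlying decomposition: the data is split into a rank-r subspace part and an $l_{0.5}$-sparse high-dimensional residual. It reuses `half_threshold` from the previous sketch; `rank`, `lam` and the update order are assumptions.

```python
def ihsl_sketch(X, rank=5, lam=0.1, n_iter=50):
    # Alternate between a rank-r subspace part L and an l_{1/2}-sparse
    # residual S so that X ~ L + S. Illustrative only: the thesis solves
    # IHSL with a block-ADMM algorithm whose exact updates may differ.
    L = np.zeros_like(X, dtype=float)
    S = np.zeros_like(X, dtype=float)
    for _ in range(n_iter):
        # L-step: best rank-r approximation of (X - S) by truncated SVD.
        U, sig, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = U[:, :rank] @ np.diag(sig[:rank]) @ Vt[:rank, :]
        # S-step: half-thresholding keeps a sparse "long tail" residual.
        S = half_threshold(X - L, lam)
    return L, S
```

Here `L` plays the role of the compact low-dimensional representation produced by a PCA-style subspace, while `S` retains the sparse "long tail" features that are passed to downstream clustering or classification.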
Keywords/Search Tags: machine learning, sparsity, subspace learning, hybrid regularization, nonconvex optimization