
Concentration Inequalities Of Random Matrices And Their Applications

Posted on: 2022-08-20  Degree: Master  Type: Thesis
Country: China  Candidate: B Jiang  Full Text: PDF
GTID: 2480306509984389  Subject: Computational Mathematics
Abstract/Summary:
A major research topic on random matrices is their concentration inequalities, which have found applications in many fields, such as machine learning, compressed sensing, quantum computing, and optimization. The central question is to bound the probability that the extreme eigenvalue (or norm) of a sum of random matrices exceeds a given constant. Domestic research on this problem is still at an early stage. This thesis surveys the current mainstream frameworks for deriving matrix concentration inequalities. Their main steps are as follows. First, the matrix Laplace transform method bounds the tail probability of the eigenvalues of a sum of random matrices by the trace of the matrix moment generating function. Second, the moment generating function is bounded from above using the Golden-Thompson inequality or the cumulant generating function. The concentration inequalities obtained in this way depend on the matrix dimension, which prevents their application to high-dimensional or infinite-dimensional matrices.

Principal component analysis (PCA) and sparse PCA are commonly applied to high-dimensional data. In this thesis, the approximation error is bounded by dimension-free concentration inequalities of exponential form, which are suitable for high-dimensional data. Motivated by this theory, sparse PCA is improved as follows. The principal axes produced by sparse PCA are not orthogonal; to overcome this shortcoming, we decorrelate the principal axes using Gram-Schmidt orthogonalization. The Wine, Heart, and Sonar data sets were reduced in dimension by sparse PCA and by the improved sparse PCA, respectively, and an SVM was then used to classify the reduced data. Compared with the data reduced by sparse PCA, classification on the data reduced by the improved sparse PCA increases the average accuracy and the average recall by 5.8% and 5.1%, respectively. We also apply sparse PCA and the improved sparse PCA to factor analysis. Experiments show that the loadings obtained by sparse PCA are easy to interpret and lose little information, whereas the improved sparse PCA loses relatively more information.
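The two-step framework summarized above can be written compactly. The following is a standard sketch for a sum of independent random Hermitian matrices $X_k$; the notation is conventional rather than taken verbatim from the thesis. The matrix Laplace transform method gives the master tail bound

```latex
\Pr\Big\{\lambda_{\max}\Big(\sum_k X_k\Big) \ge t\Big\}
  \le \inf_{\theta > 0} \, e^{-\theta t}\,
      \mathbb{E}\,\operatorname{tr}\exp\Big(\theta \sum_k X_k\Big),
```

and the trace moment generating function is then controlled through the cumulant generating functions of the summands, for example via

```latex
\mathbb{E}\,\operatorname{tr}\exp\Big(\theta \sum_k X_k\Big)
  \le \operatorname{tr}\exp\Big(\sum_k \log \mathbb{E}\, e^{\theta X_k}\Big).
```

Optimizing over $\theta$ in the first display, with the second display bounding the trace term, produces the dimension-dependent inequalities (such as the matrix Bernstein inequality) that the thesis refers to.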
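The decorrelation step described above, removing the correlation between the non-orthogonal sparse principal axes, can be sketched with classical Gram-Schmidt orthogonalization. This is a minimal NumPy illustration, not the thesis's actual implementation; the example loading matrix `V` is made up for demonstration.

```python
import numpy as np

def gram_schmidt(V):
    """Orthogonalize the columns of V (loading vectors) in order using
    classical Gram-Schmidt; numerically zero columns are dropped."""
    Q = []
    for v in V.T:
        w = v.astype(float).copy()
        # Subtract the projections onto the already-accepted directions.
        for q in Q:
            w -= np.dot(q, w) * q
        norm = np.linalg.norm(w)
        if norm > 1e-12:
            Q.append(w / norm)
    return np.column_stack(Q)

# Example: two correlated (non-orthogonal) sparse loading vectors.
V = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q = gram_schmidt(V)
# Columns of Q are now orthonormal: Q.T @ Q is (up to rounding) the identity.
```

In practice the input `V` would be the loading matrix returned by a sparse PCA routine (e.g. scikit-learn's `SparsePCA`), and the data would be projected onto the orthogonalized axes `Q` before classification.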
Keywords/Search Tags:Random Matrix, Concentration Inequalities, Principal Component Analysis, Sparse Principal Component Analysis, Factor Analysis