
A Study On Regularization Algorithm Of Low Rank Matrix Recovery And Its Applications

Posted on: 2017-05-07
Degree: Master
Type: Thesis
Country: China
Candidate: J Y Chen
Full Text: PDF
GTID: 2348330488996157
Subject: Applied Mathematics
Abstract/Summary:
This dissertation studies regularization algorithms for low-rank matrix recovery and improves several related algorithms. Its primary contributions are a low-rank and sparse decomposition algorithm based on the truncated nuclear norm, a matrix elastic-net regularization algorithm with the truncated nuclear norm, and a robust two-dimensional neural network with random weights. The detailed contents are summarized as follows.

1. The low-rank property is usually characterized by the nuclear norm when solving the low-rank and sparse decomposition problem. Since the nuclear norm is not the tightest surrogate for the rank, a novel low-rank and sparse decomposition model with the truncated nuclear norm is proposed in this dissertation. A two-step cyclic iteration method is designed to solve the model, in which the sub-model of the second step is solved by the alternating direction method. Under mild assumptions, the convergence of the sub-model's algorithm is proved theoretically, which guarantees the effectiveness of the overall algorithm. Moreover, to handle both small entry-wise noise and gross errors, a stable low-rank and sparse decomposition model is proposed. Experimental results on synthetic data, video background subtraction (foreground object detection), removal of shadows and specularities from facial images, and separation of background music from songs show that the proposed algorithm performs effectively.

2. For matrix recovery problems in which the data are highly correlated or the number of entries to be predicted exceeds the number of observed entries, a matrix elastic-net regularization algorithm with the truncated nuclear norm is proposed to obtain a more accurate and stable solution. Because the truncated nuclear norm is non-convex, a two-step cyclic iterative method is used to solve the model. A fixed-point iteration method, derived with tools from convex analysis, is designed to solve the sub-model of the second step, and the convergence of the fixed-point iteration is proved theoretically. Experimental results demonstrate that the solutions obtained by the proposed algorithm are more accurate and stable.

3. For matrix inputs, one-dimensional neural networks with random weights (1DNNRWs) must flatten the input matrix into a column vector, which may destroy the correlations among matrix entries and degrade recognition performance. Two-dimensional neural networks with random weights (2DNNRWs) can handle matrix inputs directly, but their recognition ability is limited when the data contain outliers. Exploiting the sparsity of outliers in the samples, a robust two-dimensional neural network with random weights (2DRNNRWs) algorithm, which combines the ℓ1 loss function with a Frobenius-norm regularization term, is proposed for matrix-input recognition with outliers. Under the assumptions of Laplace errors and a Gaussian prior, the 2DRNNRWs model is transformed into a probabilistic model that can be solved by the expectation-maximization (EM) algorithm. Experimental results indicate that the 2DRNNRWs algorithm can efficiently handle face recognition problems with outliers.
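To make the mechanism shared by the first two contributions concrete, the following is a minimal NumPy sketch of truncated-nuclear-norm matrix completion with a two-step cyclic scheme. The observation mask `mask`, truncation level `r`, step size `tau`, and iteration counts are illustrative assumptions; the sketch is not the dissertation's exact algorithm, which additionally handles the sparse error term and the elastic-net penalty.

    import numpy as np

    def truncated_nuclear_norm(X, r):
        # Sum of the singular values of X beyond the r largest ones,
        # i.e. the truncated nuclear norm used as a low-rank surrogate.
        s = np.linalg.svd(X, compute_uv=False)
        return s[r:].sum()

    def svt(Y, tau):
        # Singular value thresholding: proximal operator of tau * nuclear norm.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def tnn_completion(M, mask, r, tau=1.0, outer=10, inner=50):
        # Two-step cyclic scheme: the outer step fixes the top-r singular
        # subspaces of the current iterate; the inner step runs proximal
        # updates for  min_X ||X||_* - tr(A X B^T)  with observed entries fixed.
        X = M * mask
        for _ in range(outer):
            U, _, Vt = np.linalg.svd(X, full_matrices=False)
            AtB = U[:, :r] @ Vt[:r, :]          # gradient of the concave term tr(A X B^T)
            for _ in range(inner):
                X = svt(X + tau * AtB, tau)     # proximal step on the convex surrogate
                X = np.where(mask, M, X)        # keep the observed entries unchanged
        return X

The outer step re-estimates the truncation subspaces; the inner step here is a plain singular-value-thresholding iteration, which is where the dissertation instead applies the alternating direction method (first contribution) or the fixed-point iteration (second contribution).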
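For the third contribution, a minimal sketch of a robust random-weight readout for matrix inputs is given below, assuming a hidden layer of the form h_j = tanh(u_j^T X v_j + b_j) and an EM / iteratively reweighted least-squares update for the ℓ1 loss with Frobenius-norm regularization. The names U, V, b, H, T and the specific reweighting are assumptions for illustration, not the dissertation's exact 2DRNNRWs formulation.

    import numpy as np

    def hidden_features(samples, U, V, b):
        # 2D random-weight hidden layer: the j-th hidden node projects a matrix
        # sample X from both sides, h_j = tanh(u_j^T X v_j + b_j), so the input
        # is never flattened into a vector.
        return np.array([[np.tanh(U[:, j] @ X @ V[:, j] + b[j])
                          for j in range(U.shape[1])] for X in samples])

    def robust_readout(H, T, lam=1e-2, iters=30, eps=1e-6):
        # EM / iteratively reweighted least squares for the readout weights B in
        #   min_B ||H B - T||_1 + lam * ||B||_F^2,
        # where the Laplace-noise assumption yields per-sample weights
        # proportional to 1 / |residual| at each iteration.
        n, L = H.shape
        B = np.linalg.solve(H.T @ H + lam * np.eye(L), H.T @ T)            # ridge initialization
        for _ in range(iters):
            w = 1.0 / np.maximum(np.linalg.norm(T - H @ B, axis=1), eps)   # E-step: latent scales
            Hw = H * w[:, None]
            B = np.linalg.solve(H.T @ Hw + lam * np.eye(L), Hw.T @ T)      # M-step: weighted ridge
        return B

Downweighting samples with large residuals is what limits the influence of outliers, mirroring the robustness that the ℓ1 loss provides in the 2DRNNRWs model.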
Keywords/Search Tags: Matrix recovery, Low-rank sparse decomposition, Regularization algorithm, Truncated nuclear norm, Neural networks with random weights (NNRWs)