
Sparse And Low Rank Models For Low Level Vision: Theories And Methods

Posted on: 2019-03-17    Degree: Doctor    Type: Dissertation
Country: China    Candidate: X X Jia    Full Text: PDF
GTID: 1368330572951490    Subject: Applied Mathematics
Abstract/Summary:
Recent years have witnessed brilliant progress and great success of sparse and low-rank theory and its applications, especially in low-level vision problems such as image denoising, image inpainting, super-resolution, and video background subtraction. Specifically, the sparsity and the rank of a matrix provide ideal prior knowledge for images in the spatial domain or in a transform domain (such as the wavelet or DCT domain). Sparse representation seeks a linear representation of the given data over a set of dictionary atoms such that the representation coefficients have only a few non-zero values. Likewise, low-rank matrix approximation seeks a representation by a small number of rank-1 matrices. Generally, sparse and low-rank models are formulated as optimization problems in which the objective function is assumed to be convex for ease of computation. However, the convex model is only a loose approximation of the oracle sparse and low-rank models and may not fit the problem well. In addition, optimizing a low-rank model involves massive SVD computations, which are computationally expensive and severely hamper its application to very large-scale problems. To alleviate these issues, this dissertation improves on existing works and provides some new ideas and strategies. Specifically, we make the following contributions.

Firstly, we develop a rank-constrained nuclear norm minimization method to remedy the over-penalization caused by the convex nuclear norm. The main weakness of nuclear norm minimization is that a single parameter controls both the rank of the solution and the singular value thresholding effect, which is impractical. The new method handles the rank of the solution and the singular value thresholding effect separately, resulting in a more balanced solution. Moreover, to free the optimization of the new model from SVD computation, an easily computable bilinear matrix factorization model is further introduced. A successive over-relaxation algorithm is put forward to solve the new model, and the convergence of the algorithm is guaranteed. To validate the new model, experiments are conducted on image denoising; we compare the new method with the state of the art, and the results show that the proposed model is effective and has certain advantages over the compared methods. Currently, we restrict the solution to a certain subspace by constraining its rank; in future work, we would like to adaptively estimate the inherent subspace.

Secondly, to provide an alternative rank surrogate for fast low-rank matrix recovery, we introduce a generalized unitarily invariant gauge (GUIG) function which generalizes the conventional spectral function. The merit of the GUIG function is that it can be represented in a bilinear matrix factorization form, which can be optimized without computing any SVD and is therefore a good fit for large-scale low-rank matrix recovery problems. We prove that, under certain conditions, the GUIG function equals a spectral function, for instance the rank function, the Schatten-p quasi-norm, or the log-sum of the singular values. Considering that the objective function we aim to optimize is non-convex, an initialization strategy is designed to yield a good solution, which further accelerates the convergence of the algorithm. In applications, the GUIG function is utilized in matrix completion and robust principal component analysis problems. Owing to the bilinear matrix factorization and the initialization, the proposed method significantly improves on existing spectral methods in handling large-scale matrix completion problems such as the MovieLens and Netflix datasets. Experimental results verify the effectiveness and efficiency of the GUIG-regularized method.
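The explicit form of the GUIG function and its optimization algorithm are not given in this abstract. As a minimal, SVD-free sketch of the bilinear-factorization idea it builds on, the snippet below solves matrix completion with the classical factored surrogate of the nuclear norm, ||X||_* = min_{X = U V^T} (||U||_F^2 + ||V||_F^2)/2, via alternating ridge regressions; the function name, the rank r, and the weight lam are illustrative choices, not values from the dissertation.

```python
import numpy as np

def mc_bilinear(M, mask, r=10, lam=0.1, n_iters=50, seed=0):
    """Matrix completion via the bilinear (factored) surrogate of the nuclear norm:
        min_{U,V} 0.5*||mask*(M - U V^T)||_F^2 + 0.5*lam*(||U||_F^2 + ||V||_F^2),
    solved by alternating ridge regressions on the rows; no SVD is needed."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    I = lam * np.eye(r)
    for _ in range(n_iters):
        # Update each row of U using only the observed entries in that row.
        for i in range(m):
            obs = mask[i]                      # observed columns of row i
            Vi = V[obs]
            U[i] = np.linalg.solve(Vi.T @ Vi + I, Vi.T @ M[i, obs])
        # Update each row of V symmetrically.
        for j in range(n):
            obs = mask[:, j]                   # observed rows of column j
            Uj = U[obs]
            V[j] = np.linalg.solve(Uj.T @ Uj + I, Uj.T @ M[obs, j])
    return U, V

# Toy usage: recover a rank-5 matrix from 40% observed entries.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))
mask = rng.random(X.shape) < 0.4
U, V = mc_bilinear(X * mask, mask, r=5, lam=0.05)
print("relative recovery error:", np.linalg.norm(U @ V.T - X) / np.linalg.norm(X))
```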
Thirdly, instead of operating on the whole batch of data samples, we propose an online algorithm to solve the Schatten-p quasi-norm minimization problem, which processes one data sample at each time instant. The basic idea is to represent the Schatten-p quasi-norm in a bilinear matrix factorization formulation and to design the online algorithm according to this factorization. The memory cost of the online algorithm is independent of the sample size, and moreover the computation is SVD-free. We prove that the target loss is a specific instance of an expectation loss; thus a stochastic optimization algorithm is employed to solve the problem, and its convergence is guaranteed. We evaluate the algorithm on subspace tracking and video background subtraction. Experimental results on subspace tracking show that online Schatten-p quasi-norm minimization is superior to online nuclear norm methods, and results on video background subtraction show that the proposed online algorithm meets the demands of real-time video processing.

Finally, we employ Bayesian inference to adaptively estimate the low-rank regularization function. More specifically, we assume that the prior distribution of the singular values is a heavy-tailed Laplacian; instead of using a hand-crafted scale parameter, we estimate it adaptively by full MAP estimation, and as a result an improved low-rank regularization function (the log function) is obtained. The newly proposed model preserves large singular values well and adaptively thresholds small ones, which is useful for protecting image texture and edge information. We further apply the regularizer to the RPCA model to handle the low-rank and sparse parts simultaneously. Experimental results on both image denoising and RPCA problems verify the effectiveness of the newly proposed model.
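The abstract does not spell out the resulting thresholding rule. Assuming the standard proximal operator of a log penalty lam*log(x + eps), the sketch below illustrates the behaviour described: large singular values are barely shrunk while small ones are set to zero. All names, the lam and eps values, and the denoising wrapper are illustrative, not taken from the dissertation.

```python
import numpy as np

def log_thresh(sigma, lam, eps=1e-2):
    """Proximal operator of the log penalty applied to non-negative singular values:
        argmin_{x >= 0} 0.5*(x - sigma)^2 + lam*log(x + eps).
    The quadratic root below is the stationary point; x = 0 is the other candidate."""
    sigma = np.asarray(sigma, dtype=float)
    out = np.zeros_like(sigma)
    disc = (sigma + eps) ** 2 - 4.0 * lam
    ok = disc >= 0
    root = np.zeros_like(sigma)
    root[ok] = 0.5 * ((sigma[ok] - eps) + np.sqrt(disc[ok]))
    root = np.maximum(root, 0.0)
    # Keep the root only where it actually beats x = 0 (the penalty is non-convex).
    f = lambda x: 0.5 * (x - sigma) ** 2 + lam * np.log(x + eps)
    better = ok & (f(root) <= f(np.zeros_like(sigma)))
    out[better] = root[better]
    return out

def low_rank_denoise(Y, lam=10.0, eps=1e-2):
    """One SVD plus adaptive log-thresholding step: the non-convex analogue of
    singular value thresholding used in patch-based low-rank denoising."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * log_thresh(s, lam, eps)) @ Vt

# Toy usage: denoise a noisy low-rank matrix (lam sits between the noise and
# signal singular-value scales in this synthetic example).
rng = np.random.default_rng(0)
L = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 80))
Y = L + 0.3 * rng.standard_normal(L.shape)
print("relative error:", np.linalg.norm(low_rank_denoise(Y) - L) / np.linalg.norm(L))
```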
Keywords/Search Tags: Sparse representation, Low rank matrix approximation, Matrix factorization, Image denoising, Robust principal component analysis