
Research On Sparse Reconstruction And Low-Rank Approximation Algorithms With Applications

Posted on: 2018-06-19
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Z Q He
Full Text: PDF
GTID: 1318330542977591
Subject: Communication and Information System
Abstract/Summary:
Sparsity and low rank are underlying low-dimensional structural patterns of most natural signals. These characteristics provide an opportunity for data representation, analysis, and knowledge discovery to unveil the inherent structure of the world. A signal (vector) is called sparse if the number of its nonzero elements, or of its nonzero representation coefficients in a transform domain, is much smaller than its dimension. As an extension of sparsity, low rank refers to the fact that the rank of a matrix (the number of nonzero singular values) is much smaller than its dimension (the number of rows or columns). The process of obtaining sparse solutions from (possibly noisy) low-dimensional measurements of a signal is called sparse reconstruction, whereas using a small number of nonredundant low-rank factors to capture the principal components of a high-dimensional data matrix (which may contain missing entries) is referred to as low-rank approximation. Sparse reconstruction and low-rank approximation have been widely used in many practical applications, such as signal processing, wireless communication, pattern recognition, machine learning, and computer vision. Efficient, robust, and scalable sparse reconstruction and low-rank approximation algorithms are the foundation of these applications.

In recent years, the development of compressed sensing theory has led many scholars to study sparse reconstruction and low-rank approximation algorithms. Although a multitude of relevant algorithms have been reported, the existing approaches rest upon some key restrictions or assumptions and may fall short, or even break down, in practical scenarios. This dissertation concentrates on the algorithmic aspects of sparse reconstruction and low-rank approximation in nonuniform-noise or impulsive-noise environments, and the main contributions are summarized as follows.

(1) For the problem of sparse support reconstruction from multiple measurement vectors (MMV) with a single dictionary, a novel sparse covariance fitting algorithm is proposed for the nonuniform-noise environment. First, from the second-order covariance matrix of the MMV model, a single measurement vector (SMV) model free of the unknown noise powers is obtained through vectorization and a linear transformation. Second, the asymptotic Gaussian distribution of the sample covariance matrix error is exploited to perform prewhitening, yielding an SMV model under standard Gaussian noise, and the sparse support reconstruction problem is then formulated as a nonnegative sparse optimization problem. Finally, the squared-error distribution of the SMV model is analyzed, and the selection of the regularization parameter is discussed in a probabilistic sense. The identifiability of the proposed algorithm is studied for different measurement matrices, and it is theoretically proved that the algorithm can recover a sparse support whose size exceeds the measurement dimension. Simulation results with both random measurement matrices and narrowband direction-of-arrival (DOA) estimation illustrate that the proposed algorithm achieves better reconstruction performance than existing approaches in the nonuniform-noise environment.
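To make the covariance-fitting idea in contribution (1) concrete, the following minimal Python sketch recovers source powers from MMV snapshots under nonuniform noise by vectorizing the sample covariance, discarding its diagonal entries (which absorb the unknown per-sensor noise powers), and solving a nonnegative, ℓ1-regularized least-squares fit with a plain proximal-gradient loop. This is an illustration under simplifying assumptions, not the dissertation's exact algorithm: it omits the prewhitening step and the probabilistic regularization-parameter choice, and the function name fit_support and its parameters are hypothetical.

import numpy as np
from scipy.linalg import khatri_rao

def fit_support(Y, A, lam=0.1, n_iter=500):
    """Y: M x T snapshots, A: M x K dictionary; returns nonnegative source-power estimates."""
    M, T = Y.shape
    R_hat = Y @ Y.conj().T / T                    # sample covariance of the MMV data
    Phi_c = khatri_rao(A.conj(), A)               # vec(A diag(p) A^H) = Phi_c @ p
    b_c = R_hat.reshape(-1, order="F")            # vec(R_hat), column-major
    keep = (~np.eye(M, dtype=bool)).reshape(-1, order="F")
    Phi_c, b_c = Phi_c[keep], b_c[keep]           # drop diagonal terms: unknown noise powers vanish
    Phi = np.vstack([Phi_c.real, Phi_c.imag])     # equivalent real-valued system
    b = np.concatenate([b_c.real, b_c.imag])
    p = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2      # 1/L step size for the quadratic part
    for _ in range(n_iter):                       # proximal gradient for the nonnegative lasso
        grad = Phi.T @ (Phi @ p - b)
        p = np.maximum(p - step * (grad + lam), 0.0)
    return p                                      # large entries mark the estimated support

Because only off-diagonal covariance entries are fitted, the recoverable support can exceed the number of sensors, which is the behavior the dissertation establishes theoretically for its full algorithm.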
(2) For the problem of sparse support reconstruction from MMV with multiple dictionaries, a novel sparse covariance fitting algorithm is developed for the nonuniform-noise environment. First, from the second-order covariance matrices of multiple MMV models with multiple dictionaries, several SMV representation models are obtained by vectorizing the covariance matrices. Then, the unknown nonuniform noise powers are eliminated through linear transformation and prewhitening of these SMV models. Finally, a robust weighted sparse optimization problem is formulated using the mixed ℓ2,1-norm, which overcomes the ambiguity issue caused by dictionary correlation. Based on duality theory, the selection of the regularization parameter is also discussed. Simulation results with both random measurement matrices and wideband DOA estimation demonstrate that the proposed algorithm not only achieves sparse support reconstruction with a larger sparse support, but also resolves the ambiguity problem.

(3) For the line spectral estimation problem of jointly reconstructing a sparse signal and its frequency parameters under impulsive noise, a robust formulation is proposed that combines a generalized ℓp-norm (0<p<2) data-fidelity term with a log-sum sparsity-promoting regularizer. The formulation involves, besides the unknown sparse signal, a highly nonlinear frequency parameter. To handle this intractable nonconvex problem, an iteratively reweighted ℓ2 approach is developed by majorizing the original objective with a quadratic surrogate function: each iteration approximately minimizes the quadratic upper-bound surrogate constructed at the previous iterate. Simulation results illustrate that the proposed approach attains a significant performance improvement over existing methods under impulsive noise.

(4) For the problem of low-rank matrix approximation in the presence of impulsive noise and missing data, a unified formulation is proposed from the matrix factorization point of view, which minimizes an ℓp-norm (0<p<2) data-fidelity term plus structure-promoting constraints, including nonnegativity and/or sparsity. To handle this computationally intractable problem, a block iteratively reweighted algorithmic framework is developed based on block majorization-minimization: each iteration minimizes a regularized reweighted ℓ2 surrogate function with a closed-form minimizer, updating the blocks in an interweaved fashion. The proposed approach can be applied directly to (sparse or semi-sparse) nonnegative matrix factorization. Moreover, a nontrivial global convergence property of the algorithm is proved: the whole iteration sequence converges to a stationary point. Numerical experiments on both synthetic and real data demonstrate the superiority of the proposed algorithm.
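The core reweighting step behind contribution (3) can be illustrated with a short iteratively reweighted ℓ2 (IRLS) sketch: the ℓp-norm (0<p<2) data-fidelity term is majorized by a weighted quadratic, and each iteration solves the resulting weighted least-squares problem in closed form. The dissertation's formulation additionally refines continuous frequency parameters and includes a log-sum sparsity regularizer, which this toy linear-model version omits; the function irls_lp and its arguments are illustrative assumptions.

import numpy as np

def irls_lp(A, b, p=1.0, n_iter=30, eps=1e-6):
    """Robust fit of b ~ A x by minimizing sum_i |b_i - (A x)_i|**p, 0 < p < 2."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # ordinary least-squares initialization
    for _ in range(n_iter):
        r = b - A @ x                             # current residuals
        w = (np.abs(r) + eps) ** (p - 2)          # majorization weights |r|^(p-2)
        Aw = A.conj().T * w                       # A^H W
        x = np.linalg.solve(Aw @ A, Aw @ b)       # weighted least-squares update
    return x

Under heavy-tailed (impulsive) noise, choosing p near 1 strongly downweights outlier residuals, which is what gives the ℓp fit its robustness relative to ordinary least squares.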
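For contribution (4), the same reweighting idea extends to matrix factorization with missing data: element-wise weights derived from the current ℓp residuals (zero at unobserved entries) are followed by closed-form weighted least-squares updates of the factor blocks. The sketch below is a simplified alternating variant without the nonnegativity/sparsity constraints and convergence machinery of the dissertation's block majorization-minimization framework; robust_mf and its parameters are hypothetical, and X is assumed to hold arbitrary finite values (e.g. 0) at unobserved positions.

import numpy as np

def robust_mf(X, mask, rank, p=1.0, n_iter=50, eps=1e-6, reg=1e-8):
    """Factor X ~ U @ V.T by minimizing the lp loss over observed entries (mask == 1)."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    for _ in range(n_iter):
        W = mask * (np.abs(X - U @ V.T) + eps) ** (p - 2)   # reweighting; missing entries get weight 0
        for i in range(m):                                   # row-wise weighted ridge update of U
            Vw = V * W[i, :, None]
            U[i] = np.linalg.solve(Vw.T @ V + reg * np.eye(rank), Vw.T @ X[i])
        W = mask * (np.abs(X - U @ V.T) + eps) ** (p - 2)
        for j in range(n):                                   # column-wise weighted ridge update of V
            Uw = U * W[:, j, None]
            V[j] = np.linalg.solve(Uw.T @ U + reg * np.eye(rank), Uw.T @ X[:, j])
    return U, V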
Keywords/Search Tags:Sparse reconstruction, compressed sensing, low-rank approximation, matrix factorization, regularization