Low-rank matrix recovery (LRMR) addresses the problem of recovering an unknown low-rank matrix from a few linear measurements. LRMR arises in numerous settings throughout science, applied mathematics, and modern information processing technologies such as big data and artificial intelligence. Owing to the various types of noise introduced by physical implementation or by human modeling error, acquiring and recovering data efficiently and precisely becomes challenging. Existing LRMR models do not account for these forms of noise and perturbation, and one of the greatest obstacles of low-rank matrix recovery theory is carrying the theoretical results over to practice. The low-rank matrix recovery literature has focused on the setting in which the original matrix is noiseless and the measurement operator is designed to satisfy certain theoretical properties, such as the restricted isometry property and the null space property. In this thesis, we conduct a theoretical and numerical study of a low-rank matrix model that accounts for imprecise knowledge of both the original matrix and the measurement operator simultaneously. We introduce a new low-rank matrix model by incorporating a nonzero perturbation E : R^{m×n} → R^M into the measurement operator A, which results in multiplicative noise, and a noise matrix Z ∈ R^{m×n} into the original matrix X, which leads to the noise folding phenomenon.
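Schematically, writing b for the vector of measurements (a symbol introduced here only for illustration, with any additional additive measurement noise term left implicit), the completely perturbed observations can be decomposed as
\[
b \;=\; (A+E)(X+Z) \;=\; A(X) \;+\; \underbrace{A(Z) + E(X+Z)}_{\text{effective noise}} ,
\]
so the matrix noise Z is folded through the measurement operator into the effective measurement noise. This is why, after whitening, the model reduces to the standard LRMR formulation with an increased noise variance, as analyzed in Chapter 3.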
In Chapter 1, we give an introduction to the study. We begin with a discussion of the existing models in low-rank matrix recovery and identify the research gap. We then introduce our proposed low-rank matrix model, which is studied theoretically and numerically throughout this dissertation.

In Chapter 2, we first introduce some useful vector and matrix definitions. We then provide an overview of the optimization algorithms used in this thesis, as well as a review of some key concepts of the metric operators that will be employed.

In Chapter 3, by extending the results of Zhou et al. (2016), we first investigate the restricted isometry property (RIP) constants of our proposed model. Based on the RIP, we then establish a sufficient condition for the robust and stable recovery of low-rank matrices via nuclear norm minimization (NNM) in the considered setting. Using another important tool for analyzing low-rank matrix recovery, the Frobenius-robust rank null space property, we give a necessary and sufficient condition for the robust and stable recovery of low-rank matrices in the presence of uncertainty in both the original matrix and the measurement operator. The analysis shows that, after whitening, our model is equivalent to the standard low-rank matrix recovery model; the only distinction is an increased noise variance, which leads to noise amplification, i.e., the noise folding phenomenon. Numerical simulations on both synthetic and image data demonstrate that the recovery error is a linear function of the variance of the original noise matrix, and that the quality of the restored image, measured by PSNR and SSIM, depends on the variance of the additive noise Z ∈ R^{m×n}: the recovery improves as the noise variance σ_0 decreases.

In Chapter 4, to overcome the disadvantages of NNM, we propose a non-convex surrogate of the matrix rank. Inspired by the superiority of ℓ_p quasi-norm minimization with 0 < p < 1 over ℓ_1 minimization in compressed sensing, the non-convex Schatten-p quasi-norm is proposed to replace the nuclear norm. Using non-convex Schatten-p minimization and the RIP, the theoretical results provide a sufficient condition guaranteeing robust and stable recovery, together with upper bounds on the recovery error. Numerical simulations on both synthetic and image data support the out-performance of the non-convex approach over the nuclear norm minimization method.

In Chapter 5, we first study the noise folding phenomenon in the completely perturbed low-rank matrix model using the difference of nuclear norm and Frobenius norm (L*-F) model and present a stable recovery result based on the matrix version of the RIP. We then show that the truncated difference of nuclear norm and Frobenius norm (Lt,*-F) model can also stably recover low-rank matrices in the considered setting. The experimental study shows that the recovery error is robust to the noise in the original matrix and to the perturbation level. Using three optimization algorithms (the difference of convex functions algorithm (DCA), forward–backward splitting (FBS), and the alternating direction method of multipliers (ADMM)), a comparative study shows that the recovery performance improves when using the difference of the nuclear norm and the Frobenius norm compared to the approach using the nuclear norm as a convex surrogate of the rank function; a sketch of the underlying DC decomposition is given at the end of this summary.

Finally, in Chapter 6, we present a summary of the key research findings and make suggestions for future research.
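For reference, the difference-of-convex structure behind the L*-F model of Chapter 5 can be sketched as follows; the symbols b, η, and W^k are generic placeholders introduced here for illustration, and the scheme shown is only a schematic DCA iteration under a data-fidelity constraint of this form, not the exact constraint set or algorithm used in the experiments:
\[
\min_{X \in \mathbb{R}^{m \times n}} \; \|X\|_{*} - \|X\|_{F}
\qquad \text{subject to} \qquad \| b - A(X) \|_{2} \le \eta .
\]
A DCA iteration linearizes the concave part −‖X‖_F at the current iterate X^k and solves the resulting convex subproblem:
\[
X^{k+1} \in \operatorname*{argmin}_{\| b - A(X) \|_{2} \le \eta} \; \|X\|_{*} - \langle W^{k}, X \rangle ,
\qquad
W^{k} = \frac{X^{k}}{\|X^{k}\|_{F}} \quad (X^{k} \neq 0),
\]
where each convex subproblem can itself be handled by a solver such as FBS or ADMM.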