
Algorithms And Error Analysis For Matrix Recovery

Posted on: 2013-01-15    Degree: Doctor    Type: Dissertation
Country: China    Candidate: N Chen    Full Text: PDF
GTID: 1118330371480933    Subject: Computer software and theory

Abstract/Summary:
Compressed sensing is an emerging theory for data acquisition, encoding, and decoding under the assumption that the data are sparse or compressible. In compressed sensing the object to be acquired is a vector, whereas in many applications of practical interest we wish to reconstruct an object in the form of a matrix, which is more convenient for data acquisition, modeling, processing, and analysis. Such data, however, often suffer from missing entries, loss, or corruption by noise. The goal of matrix recovery is to recover the exact data in these situations.

This thesis systematically reviews the basic theory and main methods of matrix recovery around three core problems: modeling, algorithm design, and error analysis. First, we give a brief overview of the background, significance, and current state of research on matrix recovery, present the mathematical models and assumptions, introduce the methods for solving these models and summarize their advantages and disadvantages, and then propose an elastic-net regularization model for matrix recovery. Second, we discuss the solution of the matrix elastic-net regularization model and its properties, construct an iterative algorithm for solving the model, and provide a convergence analysis of the algorithm. Moreover, we analyze generalization error bounds for matrix elastic-net regularization under different assumptions. Finally, we characterize the conditions on the penalty function that lead to uniform β-stability and present numerical experiments illustrating the effectiveness and stability of the algorithm.

The main results of the thesis can be summarized as follows.

(1) In low-rank matrix recovery, traditional algorithms obtain low-rank solutions by minimizing the nuclear norm of the matrix; however, these algorithms are unstable when the data matrix is highly correlated. We therefore consider matrix elastic-net regularization, which yields stable solutions by minimizing the empirical risk penalized by a combination of the nuclear norm and the Frobenius norm. The solution of the model can be characterized as the fixed point of a contractive map defined through the proximity operator, and we construct a fixed-point iterative scheme for solving the model (a schematic formulation and iteration are sketched after this summary). The theoretical results show that the sequence of iterates converges to the solution of the matrix elastic-net regularization.

(2) We analyze error bounds for the matrix elastic-net regularization model under the matrix restricted isometry property (RIP); the standard RIP condition is recalled below. We give the definition of the RIP for matrix recovery and show that certain classes of random measurements satisfy it. The analysis indicates that, under the matrix RIP, the error is proportional to the number of degrees of freedom times the noise level. An error bound is also established for full-rank matrices.

(3) In the framework of statistical learning theory, we give a comprehensive analysis of the convergence and generalization performance of matrix recovery algorithms under operator assumptions. We formulate matrix recovery as a learning problem, define appropriate Hilbert-Schmidt operators, and obtain generalization error bounds for matrix recovery from estimates of these operators. We also give an adaptive scheme for selecting the regularization parameter.

(4) We study the stability of matrix recovery algorithms. We consider uniform β-stability and characterize the conditions on the penalty function that lead to it (the usual notion of uniform stability is recalled below). In particular, we apply our results to show that matrix elastic-net regularization is uniformly β-stable.
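For concreteness, the following records a standard formulation of matrix elastic-net regularization and the associated singular value shrinkage (proximity) operator. The exact parameterization used in the thesis is not given in this abstract, so the formulas below are an illustrative assumption, with a linear measurement operator \mathcal{A} and observations b:

    \min_{X \in \mathbb{R}^{m \times n}} \; \frac{1}{2}\,\|\mathcal{A}(X) - b\|_2^2 \;+\; \lambda_1 \|X\|_* \;+\; \frac{\lambda_2}{2}\,\|X\|_F^2, \qquad \lambda_1, \lambda_2 > 0.

Writing the singular value decomposition Y = U\,\mathrm{diag}(\sigma)\,V^{\top}, the singular value shrinkage operator is \mathcal{D}_{\tau}(Y) = U\,\mathrm{diag}\big((\sigma_i - \tau)_{+}\big)\,V^{\top}, and the proximity operator of \lambda_1\|\cdot\|_* + \tfrac{\lambda_2}{2}\|\cdot\|_F^2 is Y \mapsto \mathcal{D}_{\lambda_1}(Y)/(1 + \lambda_2). For a step size \mu > 0, any fixed point of the map

    X \;\mapsto\; \frac{1}{1 + \mu\lambda_2}\, \mathcal{D}_{\mu\lambda_1}\!\big(X - \mu\,\mathcal{A}^{*}(\mathcal{A}(X) - b)\big)

is a solution of the convex model above, which is the kind of proximity-operator characterization that the fixed-point iterative scheme of result (1) exploits.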
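A minimal, self-contained Python sketch of such a fixed-point iteration, specialized to matrix completion (the measurement operator samples entries on an index set given by `mask`). This is an illustrative reimplementation under the assumptions above, not the algorithm or code from the thesis.

```python
import numpy as np


def svt(Y, tau):
    """Singular value shrinkage: subtract tau from the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt


def matrix_elastic_net(M_obs, mask, lam1, lam2, step=1.0, n_iter=500, tol=1e-6):
    """Fixed-point (proximal gradient) iteration for the assumed model
        min_X 0.5*||mask*(X - M_obs)||_F^2 + lam1*||X||_* + 0.5*lam2*||X||_F^2.
    """
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        grad = mask * (X - M_obs)                           # data-fit gradient
        Y = X - step * grad                                 # gradient step
        X_new = svt(Y, step * lam1) / (1.0 + step * lam2)   # proximity operator
        if np.linalg.norm(X_new - X) <= tol * max(1.0, np.linalg.norm(X)):
            return X_new
        X = X_new
    return X


# Toy usage: complete a rank-2 matrix from 60% of its entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
mask = (rng.random(M.shape) < 0.6).astype(float)
X_hat = matrix_elastic_net(mask * M, mask, lam1=2.0, lam2=1e-3)
print("relative error:", np.linalg.norm(X_hat - M) / np.linalg.norm(M))
```

The Frobenius term lam2 makes the proximity map a strict contraction of the shrinkage step, which is what stabilizes the iteration on highly correlated data.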
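The matrix restricted isometry property referred to in result (2) is usually stated as follows; this is the standard form from the low-rank recovery literature, and the thesis may use a squared or unsquared variant. A linear map \mathcal{A}: \mathbb{R}^{m \times n} \to \mathbb{R}^{p} satisfies the matrix RIP of order r with constant \delta_r \in (0, 1) if

    (1 - \delta_r)\,\|X\|_F^2 \;\le\; \|\mathcal{A}(X)\|_2^2 \;\le\; (1 + \delta_r)\,\|X\|_F^2 \qquad \text{for all } X \text{ with } \mathrm{rank}(X) \le r.

Suitably normalized Gaussian and related random measurement ensembles satisfy this condition with high probability once p is on the order of r(m + n), up to logarithmic factors, which is the regime in which error bounds proportional to the degrees of freedom times the noise level are typically obtained.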
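Uniform β-stability in result (4) is ordinarily understood in the sense of Bousquet and Elisseeff; the following is a sketch of that standard definition, which the thesis presumably adapts to the matrix recovery setting. An algorithm mapping a sample S = \{z_1, \ldots, z_m\} to a hypothesis f_S is uniformly β-stable with respect to a loss \ell if

    \sup_{z}\, \big| \ell(f_S, z) - \ell(f_{S^{\setminus i}}, z) \big| \;\le\; \beta \qquad \text{for all } S \text{ and all } i \in \{1, \ldots, m\},

where S^{\setminus i} denotes S with the i-th example removed (a replace-one variant is also common). Stability of this kind yields generalization bounds, which is why characterizing the penalty functions that induce it is relevant to the error analysis of matrix elastic-net regularization.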
Keywords/Search Tags:Error analysis, matrix recovery, elastic-net regularization, proximity operator, statistical learning theory, singular value shrinkage operator