
PD Methods For Two Kinds Of Sparse Approximate Problems

Posted on: 2015-09-05    Degree: Master    Type: Thesis
Country: China    Candidate: H T Li    Full Text: PDF
GTID: 2308330461473500    Subject: Applied Mathematics
Abstract/Summary:
Sparse approximation lies at the core of many real applications. For example, signal processing, compressed sensing, sparse inverse covariance selection and sparse logistic regression problems can all be cast as instances of the general sparse approximation problem. The aim of a sparse approximation problem is to find a good approximation that uses as few elementary atoms as possible. Four common classes of methods approximately solve simple or special cases of sparse approximation problems: combinatorial methods, l1-norm regularization methods, lp (0 < p < 1)-norm regularization methods and l0-norm regularization methods. However, none of these methods has been applied directly to general problems with two l0-norm constraints. This thesis therefore studies two types of sparse approximation problems with two l0-norm constraints.

The two types of problems are motivated by practical applications. One type controls different sparsities for different parts of the solution. The other type simultaneously controls the sparsity and the number of entries equal to a fixed value (which is chosen according to the practical problem). In Chapter 2, we construct the mathematical model of the first type of problem and analyze the corresponding first-order necessary and sufficient conditions. We then apply the penalty decomposition (PD) method to this model. A block coordinate descent (BCD) method is used to solve the penalty subproblem for each penalty parameter, and a greedy strategy is used to construct a special solution for the block associated with the l0-norm. Under some reasonable assumptions, and provided the l0-norm is the only nonconvex part, we prove that BCD finds local minimizers of the penalty subproblems and that the PD method finds a local minimizer of the first type of problem. Chapter 2 also extends the model, in which two l0-norms constrain the sparsities of different parts of the solution, to a model in which the two l0-norms appear as part of the objective function; we propose the corresponding PD method and give the related theoretical analysis.

In Chapter 3, we carry out similar work for the other type of problem. First, we construct a mathematical model with two l0-norm constraints that simultaneously control the sparsity and the number of entries equal to a fixed value. Next, we analyze the corresponding first-order necessary and sufficient conditions for this model and propose the corresponding PD method. We then give theoretical analyses similar to those in Chapter 2. A block coordinate descent method is again used to solve the penalty subproblem for each penalty parameter, and a special solution is constructed by converting the part involving the l0-norm into a minimum cost network flow problem.

In Chapter 4, we draw conclusions, summarize the thesis, and point out directions for future research. On the one hand, the models proposed in this thesis can be further investigated on real-world applications involving sparsity; on the other hand, the models may be solved by other strategies such as combinatorial methods and lp (0 < p ≤ 1)-norm regularization methods.
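To illustrate the kind of scheme described above, the following is a minimal numerical sketch of a penalty decomposition (PD) method with block coordinate descent (BCD) for a least-squares model with two l0-norm constraints on different parts of the solution. It is not the thesis' algorithm verbatim: the least-squares loss, the symbols A, b, S1, S2, k1, k2 and rho, and the geometric penalty update are all illustrative assumptions.

```python
# A minimal sketch (illustrative, not the thesis' exact algorithm) of a
# penalty decomposition (PD) scheme with block coordinate descent (BCD) for
#     min ||A x - b||^2   s.t.  ||x_{S1}||_0 <= k1,  ||x_{S2}||_0 <= k2,
# where S1 and S2 are disjoint index sets covering the variables.
import numpy as np


def project_two_blocks(z, S1, S2, k1, k2):
    """Greedy projection: keep the k largest-magnitude entries in each block."""
    y = np.zeros_like(z)
    for S, k in ((S1, k1), (S2, k2)):
        order = np.argsort(np.abs(z[S]))          # local indices, ascending magnitude
        keep = S[order[len(S) - k:]]              # global indices of the k largest
        y[keep] = z[keep]
    return y


def pd_two_sparsity(A, b, S1, S2, k1, k2,
                    rho=1.0, rho_growth=10.0, inner_iters=50, outer_iters=8):
    n = A.shape[1]
    x = np.zeros(n)
    y = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(outer_iters):                  # penalty loop: rho is driven up
        for _ in range(inner_iters):              # BCD on the penalty subproblem
            # x-block: minimize ||A x - b||^2 + rho ||x - y||^2 (a linear system)
            x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * y)
            # y-block: greedy projection onto the two l0-norm constraints
            y = project_two_blocks(x, S1, S2, k1, k2)
        rho *= rho_growth
    return y                                      # y is feasible by construction
```

In this sketch the y-block update plays the role of the greedy step: within each block it keeps only the allowed number of largest-magnitude entries, and the penalty parameter rho is increased geometrically so that x and y agree in the limit.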
Keywords/Search Tags: sparse approximation, sparsity control, l0 norm, penalty decomposition methods