
Numerical Methods for l_p Regularization Problems

Posted on: 2014-12-26
Degree: Doctor
Type: Dissertation
Country: China
Candidate: L Wu
Full Text: PDF
GTID: 1260330401473958
Subject: Computational Mathematics
Abstract/Summary:
The lp regularization problem has wide applications in variable selection, signal processing, compressive sensing, data mining, financial optimization, and other areas, and research on lp regularization problems is a hot topic in the field of optimization. In this thesis, we study the theory of and numerical algorithms for lp regularization problems, together with their applications in compressive sensing. We are particularly interested in the case p ∈ (0,1]. The case p = 1 corresponds to the l1 regularization problem, which is a nonsmooth but Lipschitz continuous optimization problem. In the case p ∈ (0,1), the problem is nonconvex and even non-Lipschitz; it has been proved that the lp problem with p ∈ (0,1) is strongly NP-hard. The study of this problem is therefore challenging.

Our first task is to develop numerical methods for solving the lp regularization problem with p = 1 and p = 1/2, with particular attention to large-scale problems. We first study numerical methods for large-scale l1 regularization problems and develop methods in two different ways. The first is an iterative method that solves the problem directly: we improve the so-called spectral gradient method for smooth optimization and, using the generalized gradient of the objective function, develop a quasi-spectral gradient method for the l1 regularization problem and establish its global convergence. The second is a nonsmooth-equation-based method for the l2-l1 regularization problem. We first derive an equivalent nonsmooth equation reformulation of the problem; this equation enjoys some nice properties: it is monotone, Lipschitz continuous, and semismooth. Based on the reformulation, we propose an iterative method for solving l2-l1 regularization problems. An advantage of the proposed method is that the distance between the iterates and the solution set is decreasing; moreover, the whole sequence converges to a solution of the problem. We apply the proposed methods to practical problems arising from compressive sensing and image restoration and compare their performance with well-known methods such as GPSR and SpaRSA. The results show that the methods proposed in this thesis perform better than these existing methods and are practically efficient.
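For reference, the models discussed above are usually written in the following form; the notation A, b, λ and the least-squares data term below are our own illustrative choices, reflecting common usage rather than a formulation taken from this abstract:

\[
  \min_{x\in\mathbb{R}^n}\ \tfrac12\|Ax-b\|_2^2+\lambda\|x\|_1
  \qquad\text{and}\qquad
  \min_{x\in\mathbb{R}^n}\ \tfrac12\|Ax-b\|_2^2+\lambda\sum_{i=1}^{n}|x_i|^{p},\quad p\in(0,1),
\]

where A is a measurement matrix, b an observation vector, and λ > 0 the regularization parameter. The first problem is the l2-l1 (p = 1) model addressed by the quasi-spectral gradient and nonsmooth-equation methods above; for p ∈ (0,1) the penalty term is nonconvex and non-Lipschitz at zero components, which is the source of the difficulties noted at the beginning of this abstract.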
We pay particular attention to numerical methods for l1/2 regularization problems. A large body of numerical experiments has shown that the l1/2 regularization problem plays a representative role among lp regularization problems with p ∈ (0,1). We first extend the idea of the spectral gradient method to this problem and propose a quasi-spectral gradient method for solving l1/2 regularization problems. It should be pointed out that the quasi-spectral gradient method proposed in this thesis is not a simple generalization of the spectral gradient method for smooth optimization but an improvement of it. The l1/2 regularization problem is not differentiable and not even Lipschitz continuous; consequently, the generalized gradient in the sense of Clarke does not exist. To develop the quasi-spectral gradient method, we construct a function that plays the same role as the gradient does in smooth optimization. Based on this function, we develop a quasi-spectral gradient method and prove its global convergence.

Another important contribution of the thesis is the derivation of an equivalent smooth constrained optimization reformulation of the l1/2 regularization problem. This reformulation minimizes a smooth function subject to simple quadratic inequality constraints and nonnegativity constraints (a sketch of one such formulation is given below). The feasible set has a nonempty interior, and KKT points always exist; moreover, the KKT points coincide with the stationary points of the l1/2 regularization problem. The equivalence between the l1/2 regularization problem and the smooth constrained optimization problem opens a new path for developing numerical methods for l1/2 regularization problems: it makes it possible to apply numerical methods that are efficient for smooth constrained optimization to the l1/2 regularization problem. Based on this reformulation, we propose a feasible direction method for the l1/2 regularization problem. The feasible steepest descent direction in the method has an explicit expression, and the method is easy to implement. Under mild conditions, we establish the global convergence of the proposed method. Our numerical experiments show that the proposed feasible direction method performs very well in computation.

We also study optimality conditions for the l1/2 regularization problem. We derive first-order and second-order necessary conditions and give a second-order sufficient condition for a point to be a local solution of the problem. These conditions are extensions of existing optimality conditions. In addition, we show that any point satisfying the first-order necessary condition cannot be a local maximizer of the l1/2 regularization problem.

This dissertation is typeset with LaTeX2ε.
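To make the smooth constrained reformulation referred to above concrete, the following problem is one formulation consistent with its description (smooth objective, simple quadratic inequality constraints, nonnegativity constraints); the auxiliary variables y_i and the least-squares data term are our own illustrative choices and need not coincide with the exact formulation used in the thesis:

\[
  \min_{x,\,y\in\mathbb{R}^n}\ \tfrac12\|Ax-b\|_2^2+\lambda\sum_{i=1}^{n}y_i
  \quad\text{s.t.}\quad x_i\le y_i^{2},\ \ -x_i\le y_i^{2},\ \ y_i\ge 0,\ \ i=1,\dots,n.
\]

For any fixed x the constraints force y_i ≥ |x_i|^{1/2}, so at a minimizer y_i = |x_i|^{1/2} and the objective value equals that of the l1/2 regularization problem; the objective and constraints are smooth, and the feasible set has a nonempty interior, in line with the properties stated above.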
Keywords/Search Tags: l_p regularization problems, Compressive sensing, Nonmonotone line search, Nonsmooth equation, Spectral gradient projection method, Constrained optimization problems, Feasible steepest descent algorithm, Optimality condition, Global convergence