
Proximal Gradient Method For A Class Of Convex Optimization Problems

Posted on: 2013-12-13  Degree: Master  Type: Thesis
Country: China  Candidate: J Zhou  Full Text: PDF
GTID: 2230330362968413  Subject: Mathematics
Abstract/Summary:
Structured and large-scale convex optimization problems have wide applications in signal processing, image processing, compressed sensing, multi-task learning, and so on. Many problems in signal processing, image recovery, matrix completion, and machine learning can be formulated as convex optimization problems.

In this paper, the objective function of the convex optimization problem is the sum of a smooth function and a non-smooth function. On one hand, the proximal gradient method overcomes the difficulties arising from the non-smoothness of the objective function, which traditional methods cannot handle. On the other hand, treating the smooth and non-smooth parts of the objective function separately yields a strong result. The convex optimization problems studied in this paper apply well to practical problems such as variable selection and the sparse group Lasso.

First, we study the proximal splitting method. Although each step is itself an optimization problem, an analytical solution is derived. We also carry out the convergence analysis of the algorithm, including global convergence and linear convergence.
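To illustrate the idea described above, here is a minimal sketch of a proximal gradient iteration applied to the Lasso, where the smooth part is a least-squares term and the non-smooth part is an l1 penalty whose proximal operator is soft-thresholding. This is a generic textbook instance, not the specific algorithm of the thesis; the function names and the choice of the Lasso objective are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient_lasso(A, b, lam, step=None, iters=2000):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient (ISTA).

    The smooth part supplies the gradient step; the non-smooth part is
    handled separately through its proximal operator, exactly the split
    the abstract describes.
    """
    if step is None:
        # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant
        # of the gradient of the smooth part.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # gradient of smooth part
        x = soft_threshold(x - step * grad, step * lam)   # prox of non-smooth part
    return x
```

Each iteration has a closed-form update, mirroring the abstract's point that every subproblem admits an analytical solution; when the smooth part is strongly convex, the iteration also converges linearly.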
Keywords/Search Tags: convex optimization, group Lasso, proximal splitting method, global convergence, linear convergence