Fast Gradient Algorithms for Structured Sparsity

Posted on: 2015-09-02
Degree: Ph.D
Type: Thesis
University: University of Alberta (Canada)
Candidate: Yu, Yaoliang
Full Text: PDF
GTID: 2470390017995560
Subject: Computer Science
Abstract/Summary:
Many machine learning problems can be formulated under the composite minimization framework, which typically involves a smooth loss function and a nonsmooth regularizer. Numerous algorithms have been proposed for this setting, with the main focus on first-order gradient methods due to their applicability in very large scale domains. A common requirement of many of these popular gradient algorithms is access to the proximal map of the regularizer, which unfortunately may not be easily computable in scenarios such as structured sparsity. In this thesis we first identify conditions under which the proximal map of a sum of functions is simply the composition of the proximal maps of the individual summands, unifying known results and uncovering novel ones. Next, motivated by the observation that many structured sparse regularizers are merely sums of simple functions, we consider a linear approximation of the proximal map, resulting in the so-called proximal average. Surprisingly, combining this approximation with fast gradient schemes yields strictly better convergence rates than the usual smoothing strategy, without incurring any overhead. Finally, we propose a generalization of the conditional gradient algorithm that abandons the proximal map entirely and instead requires only the polar, a significantly cheaper operation in certain matrix applications. We establish its convergence rate and demonstrate its superiority on several matrix problems, including matrix completion, multi-class and multi-task learning, and dictionary learning.
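For reference, the key operations mentioned above can be sketched with their standard definitions from the proximal-methods literature; the smoothing parameter $\mu > 0$ and the weights $\alpha_i$ below are illustrative conventions, not notation taken from the thesis text itself.

Proximal map of a closed convex function $f$:
$$\operatorname{prox}_{\mu f}(x) = \arg\min_{y} \left\{ \tfrac{1}{2\mu}\|y - x\|^2 + f(y) \right\}.$$

Proximal average of $f_1, \dots, f_k$ with weights $\alpha_i \ge 0$, $\sum_i \alpha_i = 1$: the unique function $\mathcal{A}_\mu$ whose proximal map is the weighted average of the individual proximal maps,
$$\operatorname{prox}_{\mu \mathcal{A}_\mu}(x) = \sum_{i=1}^{k} \alpha_i \operatorname{prox}_{\mu f_i}(x),$$
which keeps the approximation cheap whenever each individual $\operatorname{prox}_{\mu f_i}$ is cheap.

Polar of a gauge (regularizer) $\kappa$:
$$\kappa^{\circ}(g) = \sup_{\kappa(x) \le 1} \langle g, x \rangle,$$
so a conditional-gradient step only requires a maximizer of this linear objective rather than a full proximal map.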
Keywords/Search Tags: Gradient, Proximal map, Algorithms, Structured sparsity