
Research On Accelerated Proximal Gradient Algorithm For Nonsmooth Composite Optimization Problems

Posted on: 2023-02-26    Degree: Doctor    Type: Dissertation
Country: China    Candidate: T Wang    Full Text: PDF
GTID: 1520306917980089    Subject: Applied Mathematics
Abstract/Summary:
The nonsmooth composite optimization problem has been a very active research topic in mathematical optimization in recent years. It is widely used in image processing, compressed sensing, machine learning, system identification, collaborative prediction, low-dimensional embedding, data mining, pattern recognition, and other applications. Owing to its simple iterative scheme and its suitability for large-scale problems, the proximal gradient algorithm is an important method for solving nonsmooth composite optimization problems. However, it can be slow in practice. Therefore, this dissertation mainly studies acceleration strategies for the proximal gradient algorithm and analyzes their theoretical properties and numerical performance. First, for nonsmooth convex composite optimization problems, several effective acceleration strategies are proposed for the inertial proximal gradient algorithm to further improve its theoretical results and numerical performance. Second, for nonsmooth nonconvex composite optimization problems, two kinds of accelerated proximal gradient algorithms with better numerical performance are proposed by combining the proximal gradient algorithm with the inertial proximal gradient algorithm, and their theoretical results and numerical performance are analyzed. The main work and innovative contributions are as follows.

In the first part, inertial proximal gradient algorithms for solving nonsmooth convex composite optimization problems are studied. The main contents are as follows:

(1) An adaptive nonmonotone stepsize strategy is proposed, in which the stepsize becomes monotonically increasing after finitely many steps and is convergent. This overcomes the problem that a line-search strategy either overestimates the gradient Lipschitz constant of the smooth part of the objective function or incurs additional computations that slow down the convergence of the algorithm. A FISTA-type algorithm and a FISTA-type restart algorithm with the adaptive nonmonotone stepsize are designed. The sublinear convergence rate of the FISTA-type algorithm with the adaptive nonmonotone stepsize is established, and the linear convergence rate of the FISTA-type restart algorithm is proved under an error bound assumption. Numerical results show that the stepsize strategy significantly improves the numerical performance of both the FISTA-type algorithm and the FISTA-type restart algorithm.

(2) A new class of parameter selection rules is proposed, in which the parameters are no longer limited to Nesterov's rule but have a wider selection range. A new proof technique, called the "comparison method," is introduced to establish abstract theoretical results for the inertial proximal gradient algorithm under an error bound assumption, and the convergence of six kinds of inertial proximal gradient algorithms is analyzed on the basis of these abstract results. In particular, it is proved that the iterates generated by FISTA converge strongly with an O(1/k^3) sublinear rate, and that the generated function value sequence has an o(1/k^6) sublinear convergence rate; in addition, strong convergence of the iterates of some inertial proximal gradient algorithms, together with sublinear convergence rates of arbitrary order for both iterates and function values, is proved. Moreover, based on an adaptive restart condition, a class of inertial proximal gradient algorithms with adaptive correction is proposed.

(3) A class of monotone inertial proximal gradient algorithms is proposed, in which monotonicity overcomes both the slow convergence caused by periodic oscillation of the inertial proximal gradient algorithm in late iterations and the divergence caused by extreme nonmonotonicity in applications. Under a local Hölder error bound assumption, convergence results for four classes of monotone inertial proximal gradient algorithms are established. At the same time, the theoretical convergence results of the inertial proximal gradient algorithm under the error bound condition are extended to the (0,1/2]-local Hölder error bound condition. In addition, a hybrid monotone inertial proximal gradient algorithm is proposed by selecting special parameters, and its effectiveness is verified by numerical experiments.

In the second part, accelerated proximal gradient algorithms for solving nonsmooth nonconvex composite optimization problems are studied. The main contents are as follows:

(1) Based on the idea of the adaptive nonmonotone stepsize strategy, a variable stepsize strategy for nonconvex composite optimization problems is designed. According to the function value information at the iterate, a switching rule between the proximal gradient algorithm and the inertial proximal gradient algorithm is designed, and a variable-stepsize nonmonotone accelerated proximal gradient algorithm is proposed. Under mild assumptions, it is proved that any cluster point of the iterates generated by the proposed algorithm is a critical point, and, under the assumption that the objective function satisfies the Kurdyka-Łojasiewicz (KL) property, linear and sublinear convergence rates of the algorithm are established in terms of the KL exponent. Numerical results show that the variable stepsize strategy effectively accelerates the convergence of the algorithm, and that the proposed algorithm significantly outperforms other existing algorithms on four types of test problems: "convex + convex", "convex + nonconvex", "nonconvex + convex", and "nonconvex + nonconvex".

(2) According to the information at the iterate, a new switching rule between the proximal gradient algorithm and the inertial proximal gradient algorithm is designed, and a modified proximal gradient algorithm is proposed. When the proposed algorithm is applied to nonsmooth convex composite optimization problems, the switching rule no longer depends on function value information, which avoids a large number of function value evaluations. In the theoretical analysis, by constructing a potential function with sufficient descent, it is proved that any cluster point of the iterates generated by the proposed algorithm is a critical point, and, under the assumption that the objective function satisfies the KL property, the local convergence rate of the proposed algorithm is established. Numerical experiments verify the effectiveness of the proposed algorithm.
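The accelerated methods surveyed above all build on two ingredients: the proximal gradient step and Nesterov-style inertial extrapolation, with a restart rule guarding against the late-iteration oscillation mentioned in part one. The following is a minimal illustrative sketch, not the dissertation's actual algorithms: FISTA on an l1-regularized least-squares problem with a simple gradient-based adaptive restart (in the style of O'Donoghue and Candès) standing in for the thesis's more refined restart and adaptive stepsize rules. All function names and parameters are chosen here for illustration.

```python
import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1 (the nonsmooth part)
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def fista_restart(A, b, lam, step, iters=500):
    """FISTA with gradient-based adaptive restart for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1  (illustrative sketch).
    `step` should be at most 1/L, L = largest eigenvalue of A^T A."""
    x = np.zeros(A.shape[1])   # main iterate x_{k-1}
    y = x.copy()               # extrapolated point y_k
    t = 1.0                    # momentum parameter (Nesterov's rule)
    for _ in range(iters):
        grad = A.T @ (A @ y - b)                             # gradient of smooth part at y
        x_new = soft_threshold(y - step * grad, step * lam)  # proximal gradient step
        if np.dot(y - x_new, x_new - x) > 0:
            # Restart: the inertial direction points "uphill", so reset momentum
            t, y = 1.0, x_new
        else:
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # inertial extrapolation
            t = t_new
        x = x_new
    return x
```

With `A` the identity, the subproblem is separable and the method recovers the soft-thresholded data exactly; the restart test is exactly the monotonicity safeguard idea described in part one, item (3): whenever the momentum term would increase the objective, the algorithm falls back to a plain proximal gradient step.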
Keywords/Search Tags: Nonsmooth composite optimization, Proximal gradient algorithm, FISTA, Kurdyka-Łojasiewicz property, Error bound condition, Convergence