
Research On Splitting Algorithms For Convex Optimization Problems

Posted on: 2022-01-26
Degree: Doctor
Type: Dissertation
Country: China
Candidate: F Jiang
Full Text: PDF
GTID: 1480306722473864
Subject: Computational Mathematics
Abstract/Summary:
Convex optimization and variational inequality problems arise widely in mathematics, finance, industry, management, and other research fields. With the advent of the big-data era, practical problems involve ever more information and hence ever larger dimensions, so it is of great practical significance to develop efficient algorithms for them. Researchers have proposed a number of iterative methods for these problems, such as interior point methods, Newton's method, projection methods, the proximal point algorithm, the alternating direction method of multipliers, and first-order primal-dual algorithms. In this thesis, we further study splitting methods for convex optimization problems. We propose an indefinite proximal point algorithm, an indefinite proximal generalized alternating direction method of multipliers, and three inexact first-order primal-dual algorithms. We analyze the convergence of these algorithms and verify their effectiveness through numerical experiments.

The proximal point algorithm (PPA) has been widely used in convex optimization, and many splitting methods fall within the PPA framework. We relax the positive definiteness requirement on the proximal term in PPA and propose indefinite PPAs. When the subproblem does not possess a closed-form solution, we propose two inexact criteria for solving it approximately.

The proximal generalized alternating direction method of multipliers (PGADMM) is a benchmark method for linearly constrained convex optimization. It is known from the literature that the proximal term can be indefinite. We derive the optimal lower bound of the proximal parameter, which plays a vital role in improving numerical performance.

The first-order primal-dual algorithm (PDA) is a popular method for solving a class of convex-concave saddle point problems. In some applications, at least one of the subproblems does not possess a closed-form solution and must be evaluated by an inner iterative algorithm. We propose three inexact PDAs. The first adopts absolute error criteria, based on non-negative summable sequences, and solves both subproblems approximately. Assuming that only one subproblem lacks a closed-form solution, the second and third inexact PDAs adopt a relative error criterion and solve that subproblem approximately. The relative error criterion involves only a single parameter in the range [0, 1), which makes the methods easier to apply. For all the inexact algorithms, we establish global convergence and an O(1/N) convergence rate, where N counts the number of iterations. For the second inexact PDA, we obtain an accelerated O(1/N^2) rate and a linear convergence rate under certain strongly convex assumptions. For the third inexact PDA, we establish a global Q-linear convergence rate for the distance between the iterates and the solution set, and an R-linear rate for the iterates themselves under a mild calmness condition.
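The abstract does not reproduce the iterations themselves. As background, the classical (positive definite) PPA that the thesis generalizes computes x_{k+1} = argmin_x f(x) + (1/(2*lam))*||x - x_k||^2, i.e. x_{k+1} = prox_{lam*f}(x_k). A minimal illustrative sketch, assuming the toy objective f(x) = ||x||_1 (chosen here only because its proximal map, soft-thresholding, is closed-form; it is not an example from the thesis):

```python
import numpy as np

def prox_l1(v, lam):
    # Proximal operator of lam*||.||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def proximal_point(x0, lam=1.0, iters=20):
    # Classical PPA for min ||x||_1: x_{k+1} = prox_{lam*f}(x_k).
    # Each step shrinks every component toward the minimizer x* = 0.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = prox_l1(x, lam)
    return x

x = proximal_point([3.0, -0.5, 10.0])  # converges to the minimizer 0
```

An indefinite PPA in the sense of the thesis replaces the quadratic term (1/(2*lam))*||x - x_k||^2 by (1/2)*(x - x_k)^T M (x - x_k) with a proximal matrix M that need not be positive definite; the sketch above only shows the classical special case M = (1/lam)*I.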
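For the first-order primal-dual algorithm, a minimal sketch of the standard exact iteration (the Chambolle-Pock scheme that the thesis's inexact variants build on) is given below, applied to the saddle point problem min_x max_{||y||_inf<=1} <Kx, y> + (1/2)*||x - b||^2, which is equivalent to min_x ||Kx||_1 + (1/2)*||x - b||^2. The problem instance, step sizes, and function names are illustrative assumptions, not the thesis's experiments:

```python
import numpy as np

def pda(K, b, tau=0.9, sigma=0.9, iters=2000):
    # Exact first-order primal-dual iteration (extrapolation theta = 1)
    # for  min_x max_{||y||_inf <= 1}  <Kx, y> + 0.5*||x - b||^2.
    # Convergence requires tau*sigma*||K||^2 < 1; with K = I and
    # tau = sigma = 0.9 this holds (0.81 < 1).
    m, n = K.shape
    x = np.zeros(n)
    y = np.zeros(m)
    x_bar = x.copy()
    for _ in range(iters):
        # Dual step: prox of the conjugate of ||.||_1 is a clip onto [-1, 1].
        y = np.clip(y + sigma * (K @ x_bar), -1.0, 1.0)
        # Primal step: prox of 0.5*||. - b||^2 has a closed form.
        x_new = (x - tau * (K.T @ y) + tau * b) / (1.0 + tau)
        # Extrapolation of the primal iterate.
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

b = np.array([2.0, 0.3, -1.5])
x = pda(np.eye(3), b)  # solution is soft-thresholding of b at level 1
```

The inexact PDAs of the thesis replace the exact prox evaluations above by approximate ones: the first variant controls both subproblems with summable absolute error tolerances, while the second and third control the single hard subproblem with a relative error criterion governed by one parameter in [0, 1).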
Keywords/Search Tags:convex optimization, proximal point algorithm, alternating direction method of multipliers, first-order primal-dual algorithm, linear convergence, inexact