
Research On Theory And Algorithms Of Several Nonconvex Sparse Optimization Problems

Posted on: 2022-03-21
Degree: Doctor
Type: Dissertation
Country: China
Candidate: W J Li
Full Text: PDF
GTID: 1480306569987379
Subject: Mathematics

Abstract/Summary:
Sparse optimization problems arise in many scientific and engineering applications. They can be understood as problems of finding sparse solutions that satisfy certain conditions, that is, solutions most of whose entries are zero. In view of the wide range of applications of sparse optimization and the open problems in existing research, this thesis takes four classes of nonconvex sparse optimization problems as its research objects. Based on optimization theory and algorithms, combined with dynamical systems theory and other related tools, we carry out research in three directions: theoretical analysis, model improvement and algorithm design. The main results are as follows.

1. A class of nonsmooth, nonconvex and non-Lipschitz-continuous lp (0<p<1) regularized sparse optimization problems is studied. Based on a necessary optimality condition of the problem, we define a kind of stationary point and prove that it possesses a stronger optimality property. Using a smoothing method, we propose a dynamic algorithm modeled by a differential equation to solve the problem. We prove that the solution of the algorithm exists globally and is bounded, and we further analyze its uniqueness. When the feasible region is bounded, all accumulation points of the solution are proved to be stationary points of the problem, and the global convergence of the algorithm is analyzed. In particular, for box constraints, we prove that the solution of the algorithm has the lower-bound property and that the nonzero entries of all accumulation points share common locations. Finally, numerical results demonstrate the good performance of the algorithm.

2. A class of nonconvex and discontinuous l0 regularized sparse optimization problems is studied. By constructing a smoothing function for the l0 norm, we propose a dynamic algorithm modeled by a differential equation for solving the problem, and we design a correction method for the special cases, which are easy to verify. We prove that the
solution of the algorithm exists globally and is unique, bounded and globally Lipschitz continuous. Besides, we prove that all accumulation points of the algorithm have a common support set and a unified lower bound on their nonzero entries. Combining the dynamic algorithm with the correction method, any corrected accumulation point is a local minimizer of the problem. Moreover, we analyze the equivalence of local minimizers between the problem and the case with general box constraints. Finally, some numerical experiments show the efficiency of the algorithm on several sparse optimization problems.

3. A class of nonconvex and discontinuous l0 regularized sparse optimization problems with general convex constraints is studied, whose objective function is the sum of a nonsmooth convex loss function and an l0 regularization term. We construct a relaxation function of the l0 regularization and propose a dynamic algorithm modeled by a differential inclusion to solve the problem. We prove that the solution of the algorithm exists globally, is bounded, reaches the feasible region in finite time and remains there thereafter. Moreover, we prove that the solution of the algorithm is a slow solution and that any of its accumulation points is a Clarke stationary point of a class of nonconvex continuous relaxation problems. In the box-constrained case, we prove that all accumulation points of the solution enjoy a unified lower-bound property and have a common support set. Except for a special case, any accumulation point of the solution is a local minimizer of this nonconvex and discontinuous problem. In particular, the algorithm can be used to solve general locally Lipschitz continuous but nonsmooth nonconvex problems, and it has a simpler structure than existing dynamic algorithms for such problems. Finally, we give some numerical experiments to show the good performance of the algorithm.

4. A class of nonconvex and discontinuous sparse group l0 regularized
optimization problems is studied. Firstly, we give a continuous relaxation model of the problem in DC (difference-of-convex) form and define a kind of stationary point. Then we analyze the relationships between the defined stationary points and some existing notions of stationary points, and we analyze some of their properties. Further, we prove the equivalence of the global minimizers of the primal problem and the relaxation problem. Then we define the strong local minimizers of the primal problem and prove the equivalence between the strong local minimizers of the primal problem and the stationary points of the relaxation problem. Based on the DC structure of the relaxation problem, we design two DC algorithms. We prove that any accumulation point of the iterates they generate is a stationary point of the relaxation problem and also a strong local minimizer of the primal problem. In particular, all accumulation points have a common support set, their nonzero entries have a common lower bound, and their zero entries are identified within finitely many iterations. Moreover, we prove the global convergence of the algorithms and estimate their convergence rates. Finally, some numerical experiments demonstrate the efficiency of the algorithms.
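The smoothing-plus-differential-equation idea behind the first contribution can be sketched numerically: replace each nonsmooth term |x_i|^p with a smooth surrogate and integrate the resulting gradient flow by forward Euler. The Python sketch below is illustrative only; the surrogate (x_i^2 + mu^2)^{p/2}, the least-squares loss and all parameter values are this sketch's own choices, not the thesis's algorithm.

```python
import numpy as np

def smoothed_lp_flow(A, b, lam=0.1, p=0.5, mu=1e-2, dt=1e-3, steps=20000):
    """Forward-Euler discretization of the gradient flow x'(t) = -grad F_mu(x),
    where F_mu(x) = 0.5*||Ax - b||^2 + lam * sum_i (x_i^2 + mu^2)^(p/2)
    is a smooth surrogate of the lp-regularized objective (mu > 0 is the
    smoothing parameter). Illustrative sketch, not the thesis's method."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad_loss = A.T @ (A @ x - b)
        # d/dx_i of (x_i^2 + mu^2)^(p/2) is p * x_i * (x_i^2 + mu^2)^(p/2 - 1)
        grad_reg = lam * p * x * (x**2 + mu**2) ** (p / 2 - 1)
        x -= dt * (grad_loss + grad_reg)
    return x
```

Because the surrogate is smooth for mu > 0, the right-hand side is well defined everywhere, which is what makes an ODE model of the flow possible despite the non-Lipschitz lp term.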
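For the group-sparse part, a standard way to obtain a DC-form continuous relaxation of an l0-type penalty is the capped group norm min(||x_g||/nu, 1) = (1/nu)(||x_g|| - max(||x_g|| - nu, 0)), a difference of two convex terms. A generic DC algorithm then linearizes the concave part at the current iterate and solves a convex group-lasso-type subproblem at each outer step. The sketch below follows this generic scheme with an ISTA inner solver and illustrative parameters; it is not the thesis's specific algorithms.

```python
import numpy as np

def group_soft_threshold(z, groups, tau):
    """Proximal operator of tau * sum_g ||x_g||_2 (group lasso penalty)."""
    x = z.copy()
    for g in groups:
        n = np.linalg.norm(z[g])
        x[g] = 0.0 if n <= tau else (1.0 - tau / n) * z[g]
    return x

def dca_group_capped(A, b, groups, lam=0.5, nu=0.1, outer=20, inner=200):
    """Generic DC algorithm sketch for least squares with a capped group penalty
        lam * sum_g min(||x_g||/nu, 1) = g(x) - h(x),
    where g(x) = (lam/nu) * sum_g ||x_g|| and
          h(x) = (lam/nu) * sum_g max(||x_g|| - nu, 0) are both convex.
    Each outer step linearizes h and solves the convex subproblem by
    proximal gradient (ISTA). Illustrative only, not the thesis's method."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the smooth part
    for _ in range(outer):
        # Subgradient of h at x: (lam/nu) * x_g/||x_g|| if ||x_g|| > nu, else 0
        w = np.zeros_like(x)
        for g in groups:
            ng = np.linalg.norm(x[g])
            if ng > nu:
                w[g] = (lam / nu) * x[g] / ng
        # ISTA on 0.5*||Ax-b||^2 - <w, x> + (lam/nu) * sum_g ||x_g||
        for _ in range(inner):
            grad = A.T @ (A @ x - b) - w
            x = group_soft_threshold(x - step * grad, groups, step * lam / nu)
    return x
```

The linearization of the concave part progressively removes the shrinkage bias on groups whose norm exceeds the cap nu, which is the usual mechanism by which such DC iterations improve on a plain group lasso solution.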
Keywords/Search Tags: sparse optimization, nonconvex optimization, differential equation, differential inclusion, DC algorithm, convergence