Distributed Optimization Under Inexact First-Order Information

Posted on: 2023-01-04
Degree: Master
Type: Thesis
Country: China
Candidate: K Zhu
Full Text: PDF
GTID: 2568306914973189
Subject: Control Science and Engineering

Abstract/Summary:
With the rapid development of sensor networks and big data technologies, a growing number of optimization problems have emerged for large-scale networked systems. How to solve such problems with distributed algorithms that use only local communication and computation has attracted much attention and become a hot topic in information science. Most existing distributed optimization algorithms rely on first-order information of the objective functions and assume that this information is exactly available to each agent. In practice, however, an agent may only obtain approximate first-order information, for example by solving an auxiliary optimization problem. It is therefore natural to ask whether and how distributed optimization can be solved with inexact first-order information. This thesis focuses on the distributed optimization problem with inexact first-order information. Its main contributions can be summarized as follows:

(1) We examine the subgradient boundedness assumption and discuss the convergence of several distributed primal-dual subgradient algorithms. We first establish the convergence and effectiveness of such an algorithm under the typical assumption that all relevant subgradients are uniformly bounded. We then develop a novel distributed normalized step size that removes the subgradient boundedness assumption; under this step size, the transient convergence performance of the algorithm is also greatly improved.

(2) We consider the distributed constrained optimization problem in which only ε-subgradients of the local objective functions are available. To solve it, we propose two new distributed optimization algorithms that combine the primal-dual method with the normalized step size. We first analyze the influence of the subgradient error on the steady-state optimality of the algorithms, and then derive a set of sufficient conditions ensuring exact convergence of the primal iterates to the optimal solution set. The results show that distributed optimization can be solved exactly even with only approximate first-order information, as long as the accumulated subgradient error is not too large.

(3) We consider a class of distributed optimization problems based on an inexact first-order oracle under different communication topologies. Each node can repeatedly call an inexact oracle of its local objective function and receive approximate first-order information. We propose an update rule for the primal variable that uses this inexact first-order information, and we discuss the influence of the oracle's exactness for both fixed and time-varying communication topologies. We also give a group of sufficient conditions ensuring that the iterates converge exactly to the optimal solution set. The proposed algorithms solve the distributed optimization problem effectively even when exact first-order information of the objective functions is unavailable.
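As a rough illustration of contribution (1), the sketch below implements a generic consensus-based subgradient update with a normalized step size. The abstract does not give the thesis's exact primal-dual iteration, so the mixing matrix `W`, the diminishing step `alpha / sqrt(k + 1)`, and the normalization factor `1 / (1 + ||g||)` are all assumptions chosen to show the key idea: normalization keeps the effective step bounded regardless of subgradient magnitude.

```python
import numpy as np

def distributed_normalized_subgradient(W, subgrads, x0, num_iters, alpha=1.0):
    """Consensus-based subgradient method with a normalized step size.

    W         : (n, n) doubly stochastic mixing matrix (assumption)
    subgrads  : list of n callables; subgrads[i](x) returns a subgradient of f_i at x
    x0        : (n, d) array, one initial iterate per agent
    num_iters : number of iterations
    """
    x = np.array(x0, dtype=float)
    n = x.shape[0]
    for k in range(num_iters):
        step = alpha / np.sqrt(k + 1)   # diminishing step size (assumption)
        mixed = W @ x                   # local averaging with neighbors
        for i in range(n):
            g = subgrads[i](x[i])
            # Dividing by 1 + ||g|| keeps the update bounded even when the
            # subgradients themselves are unbounded, which is the role the
            # distributed normalized step size plays in contribution (1).
            x[i] = mixed[i] - step * g / (1.0 + np.linalg.norm(g))
    return x
```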
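For contribution (2), the approximate first-order information the agents query is an ε-subgradient. The following is the standard definition, stated here for reference since the abstract uses the term without defining it: g is an ε-subgradient of a convex function f at x if the usual subgradient inequality holds up to an additive error ε.

```latex
g \in \partial_{\varepsilon} f(x)
\iff
f(y) \ge f(x) + \langle g,\, y - x \rangle - \varepsilon
\quad \text{for all } y .
```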
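For contribution (3), the agents repeatedly call an inexact first-order oracle. As a minimal hypothetical sketch, not the thesis's construction, the wrapper below models one common inexact-oracle setting: each call returns a function value and gradient that are accurate only up to an additive tolerance `delta`. The names and the bounded-error model are illustrative assumptions.

```python
import numpy as np

def make_inexact_oracle(f, grad_f, delta):
    """Wrap an exact first-order oracle (f, grad_f) so each call returns
    values and gradients accurate only up to delta (assumption: additive
    bounded error, one common inexact-oracle model)."""
    rng = np.random.default_rng(0)

    def oracle(x):
        # Perturb the gradient within a delta-ball and the value by at most
        # delta, emulating the approximate first-order information an agent
        # actually receives from the oracle.
        e = rng.uniform(-1.0, 1.0, size=np.shape(x))
        e *= delta / max(np.linalg.norm(e), 1e-12)
        return f(x) + rng.uniform(-delta, delta), grad_f(x) + e

    return oracle
```

In this setting, each agent would feed the oracle's output into a consensus update like the one sketched above; the sufficient conditions described in the abstract then concern how large the oracle error may be while exact convergence is still preserved.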
Keywords/Search Tags: distributed optimization, normalization, ε-subgradient, first-order algorithm, inexact oracle