
Distributed Optimization Algorithms In Unbalanced Directed Networks

Posted on: 2021-07-24  Degree: Doctor  Type: Dissertation
Country: China  Candidate: Y Y Xiong  Full Text: PDF
GTID: 1488306569983479  Subject: Control Science and Engineering
Abstract/Summary:
As a key research direction in autonomous intelligent unmanned systems, distributed cooperative control of multi-agent systems has attracted a surge of attention in recent years, owing to a structure particularly well suited to large-scale networks in which data samples are distributed across the network. In distributed schemes, nodes act independently, using local data and constantly exchanging information with their neighbors to achieve network-wide goals. As a result, distributed algorithms offer many advantages over traditional centralized ones, including lower cost, higher flexibility, stronger robustness, and better extensibility. Many practical problems can be cast as distributed optimization problems, such as large-scale machine learning, distributed economic dispatch in smart grids, formation control, and distributed estimation in sensor networks. These applications require completely distributed optimization algorithms in which each node interacts only with its neighbors to collectively minimize an objective function expressed as a sum of local objective functions. This thesis focuses on a challenging and practical scenario in distributed optimization where the interactions among nodes are modeled by general unbalanced directed graphs. After proposing several distributed algorithms for minimizing a fixed objective function, we further consider distributed optimization from an online perspective, addressing the dynamic aspect of the problem in which the sequence of local cost functions may vary in an uncertain and even adversarial manner. The main contents, results, and contributions of this thesis are summarized as follows.

Chapter 2 centers on the constrained distributed optimization problem over unbalanced networks. To solve this problem, we propose a consensus-based algorithm that incorporates a rescaling subgradient technique into distributed methods with a momentum term to address the unbalancedness of digraphs. The implementation relies on a row-stochastic adjacency matrix, which means that each node can independently assign weights to the information it receives. Importantly, the algorithm adapts well to practical applications because it allows nodes to have possibly nonidentical convex constraint sets; the projection operators onto these different sets make the distributed multi-step subgradient algorithm both interesting and challenging. Under standard assumptions, we rigorously prove that every node converges to a global optimal solution of the optimization problem. Simulation results show that the proposed algorithm converges faster than the related method.

Chapter 3 is concerned with distributed optimization over unbalanced networks with limited communication capacity. To alleviate the communication bottleneck, we design a novel quantized distributed gradient tracking algorithm, termed Q-DGT, to minimize a finite sum of strongly convex and smooth functions. In sharp contrast to subgradient-based quantized decentralized algorithms, a quantization scheme cannot be incorporated directly into the decentralized gradient tracking (DGT) algorithm, as it would accumulate quantization error and linear convergence could no longer be guaranteed. By careful design, Q-DGT is robust to quantization errors while inheriting the exact linear convergence rate of the DGT family. Importantly, Q-DGT eliminates the initial local gradient computations that DGT requires. A dynamic adaptive update rule for the quantization levels is also explicitly characterized, which avoids saturating the quantizers. Furthermore, we provide a fixed quantization level that preserves exact linear convergence. Finally, we apply the proposed algorithm to the ridge regression problem in sensor networks to validate the theoretical results.

Because the dynamic aspect of distributed optimization has not been fully addressed in the literature, we extend distributed optimization algorithms to the online setting, in which the local cost functions vary with time and are revealed to each node only after it has made a decision at each iteration. Chapter 4 investigates a distributed online constrained optimization problem with differential privacy, where the network is modeled by an unbalanced digraph with a row-stochastic adjacency matrix. To address this problem, a distributed differentially private algorithm that requires no trusted third party is proposed to preserve the privacy of the participating nodes. Under mild conditions, we show that the proposed algorithm attains an O(log T) expected static regret bound for strongly convex local cost functions, where T is the time horizon. Moreover, we remove the need to know the time horizon T in advance by adopting the doubling trick, and derive an O(T^{1/2}) expected static regret bound for general convex local cost functions. Our results reveal a tradeoff between the privacy level and the optimization accuracy, and match the best theoretical regret bounds achieved by state-of-the-art algorithms. Finally, we apply the proposed algorithm to localization problems in sensor networks to validate the theoretical results.

Chapter 5 studies a distributed online constrained optimization problem over time-varying unbalanced digraphs without explicit subgradients. In sharp contrast to existing algorithms, which rely heavily on doubly stochastic information-mixing matrices and explicit subgradients, we design a novel consensus-based distributed online algorithm with a local randomized zeroth-order oracle, and then rescale the oracle by constructing row-stochastic matrices, which addresses the unbalancedness of time-varying digraphs and eliminates the need for subgradient information. Under mild conditions, the dynamic regret over a time horizon is shown to converge asymptotically to a bounded region at a sublinear rate for convex cost functions, provided that the accumulated variation grows sublinearly with a specific order. Importantly, the size of the region is proportional to a design parameter of the oracle and can therefore be controlled. Moreover, a counterpart of the proposed algorithm in which subgradients are available is also provided, along with its dynamic regret bound. These results show that the convergence of the algorithm is essentially unaffected by the zeroth-order oracle. Simulations on a distributed target-tracking problem in sensor networks demonstrate the effectiveness of the proposed algorithm.
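The row-stochastic rescaling idea behind Chapter 2 can be illustrated with a minimal sketch. This is not the thesis's algorithm (no momentum term, no projection onto constraint sets): the weight matrix `A`, the quadratic local objectives, and the step-size schedule are all invented for the example. Each node mixes with row-stochastic weights, tracks its own entry of the matrix powers A^k (which converges to the left-Perron weight of the digraph), and divides its subgradient by that entry to undo the imbalance.

```python
import numpy as np

# Illustrative row-stochastic weight matrix for a 3-node unbalanced digraph:
# each row sums to 1, so every node can choose its own incoming weights,
# but the column sums differ (the digraph is unbalanced).
A = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.4, 0.3],
              [0.5, 0.0, 0.5]])

def subgradient(i, x):
    # Hypothetical local objectives f_i(x) = (x - t_i)^2 with targets t_i,
    # so the global minimizer of sum_i f_i is the mean of the targets (2.0).
    targets = np.array([1.0, 2.0, 3.0])
    return 2.0 * (x - targets[i])

n = 3
x = np.zeros(n)      # local decision variables
Y = np.eye(n)        # Y[i, i] tracks node i's Perron-weight estimate
alpha0 = 0.5

for k in range(2000):
    alpha = alpha0 / (k + 1)                    # diminishing step size
    x_mix = A @ x                               # consensus with row-stochastic weights
    Y = A @ Y                                   # Y -> A^k, so diag(Y) -> Perron vector
    g = np.array([subgradient(i, x_mix[i]) for i in range(n)])
    x = x_mix - alpha * g / np.diag(Y)          # rescaled subgradient step
```

With the rescaling, all nodes should approach the global minimizer (the mean of the targets) despite the unbalanced weights; dropping the division by `np.diag(Y)` biases the limit toward the Perron-weighted minimizer instead.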
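For Chapter 3, the key practical ingredient is a finite-level quantizer whose range shrinks as the iterates converge, so it never saturates. The sketch below shows only that generic building block, not Q-DGT itself; the bit width and the geometric decay of the level are illustrative assumptions.

```python
import numpy as np

def quantize(v, level, num_bits=4):
    """Uniform quantizer: clip v to [-level, level] and round onto
    2**num_bits evenly spaced points. The per-entry error is at most
    half a quantization step as long as v stays inside the range."""
    step = 2.0 * level / (2 ** num_bits - 1)
    clipped = np.clip(v, -level, level)
    return np.round((clipped + level) / step) * step - level

# One (assumed) way to schedule the levels: decay them geometrically,
# matching the linear convergence of the underlying iterates so the
# transmitted differences never leave the quantizer's range.
levels = [1.0 * 0.9 ** k for k in range(5)]
```

The adaptive rule in the thesis is characterized explicitly; here the geometric schedule merely conveys the idea that the quantization resolution must improve at the same linear rate as the algorithm converges.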
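The privacy mechanism of Chapter 4 can be conveyed by a standard Laplace-noise sketch: before sharing its state, a node perturbs it with noise whose scale is set by the query sensitivity and the privacy budget epsilon. This is a generic differential-privacy building block, not the thesis's full algorithm; the sensitivity and epsilon values are illustrative.

```python
import numpy as np

def privatize(x, epsilon, sensitivity, rng):
    # Laplace mechanism: noise with scale sensitivity/epsilon yields
    # epsilon-differential privacy for a release with that sensitivity.
    # Smaller epsilon => stronger privacy but noisier shared states,
    # which is the privacy/accuracy tradeoff noted in the abstract.
    return x + rng.laplace(scale=sensitivity / epsilon, size=np.shape(x))

rng = np.random.default_rng(1)
noisy = privatize(np.zeros(100000), epsilon=0.5, sensitivity=1.0, rng=rng)
```

In a distributed online algorithm each node applies such a mechanism locally before every transmission, which is why no trusted third party is needed.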
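Finally, the zeroth-order oracle of Chapter 5 replaces explicit subgradients with gradient estimates built from function values only. Below is a common two-point randomized estimator as a sketch of the idea; the thesis's oracle, its rescaling by row-stochastic matrices, and its analysis are more involved, and the smoothing radius here is an assumed parameter.

```python
import numpy as np

def zeroth_order_grad(f, x, delta, rng):
    """Two-point randomized zeroth-order gradient estimate at x:
    sample a uniform direction u on the unit sphere and difference two
    function values. The bias scales with the smoothing radius delta,
    which matches the abstract's note that the residual regret region
    is proportional to a design parameter of the oracle."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    d = x.size
    return d * (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta) * u

# Illustrative check: averaging many estimates for f(x) = ||x||^2
# recovers the true gradient 2x in expectation.
f = lambda v: float(v @ v)
x0 = np.array([1.0, 2.0])
rng = np.random.default_rng(0)
est = np.mean([zeroth_order_grad(f, x0, 0.1, rng) for _ in range(20000)], axis=0)
```

Plugging such an estimator into the consensus step in place of a subgradient is what makes the algorithm subgradient-free, at the cost of a controllable bounded error region.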
Keywords/Search Tags:Multi-agent systems, distributed optimization algorithms, unbalanced directed networks, online optimization, differential privacy, dynamic tracking