
Distributed Optimization Algorithms in Large-Scale Directed Networks

Posted on: 2018-11-14
Degree: Ph.D
Type: Thesis
University: Tufts University
Candidate: Xi, Chenguang
Full Text: PDF
GTID: 2478390017990104
Subject: Mathematics
Abstract/Summary:
In the interconnected world of today, distributed computation and optimization over large-scale multi-agent networks are ubiquitous, with applications ranging from machine learning and signal processing to computational finance. The growing interest in distributed optimization is further motivated by the emergence of big-data applications: on one hand, large datasets push centralized computation to its limits, so the need for distributed algorithms arises naturally; on the other hand, transmitting data collected in a distributed manner to a central location is often too costly or violates privacy.

In this thesis, we aim to solve the distributed optimization problem over multi-agent networks, where each agent holds private local information and a local objective function. The goal is for the agents to collaborate to optimize the sum of these local objective functions. Existing algorithms mostly address this problem under the assumption that the underlying multi-agent network is strongly connected and undirected, i.e., if agent i sends information to agent j, then agent j also sends information to agent i. The contribution of this work lies in relaxing such assumptions on the network topology: in particular, we assume that communication between the agents is described by a directed graph.

We propose four algorithms: Directed-Distributed Subgradient Descent (D-DSD), Directed-Distributed Projection Subgradient (D-DPS), DEXTRA, and ADD-OPT. D-DSD and D-DPS address the case where the objective functions are convex but not necessarily differentiable; both achieve a convergence rate of O(ln k/√k), where k is the number of iterations. D-DPS generalizes D-DSD from the unconstrained case to the constrained case. When the objective functions are additionally assumed to be smooth, i.e., convex and differentiable, we propose DEXTRA and ADD-OPT, which accelerate convergence to a linear rate O(τ^k) for some 0 < τ < 1. Moreover, ADD-OPT supports a wider and more realistic range of step-sizes than DEXTRA. Under the appropriate assumptions, all four algorithms achieve the best known convergence rates for this class of problems.
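For reference, the optimization problem described above can be written down explicitly. The following is a minimal LaTeX sketch based only on the abstract; the symbols n, p, f_i, x, and the constraint set X are notational assumptions for illustration, not necessarily the thesis's own notation.

% Distributed optimization over a directed network of n agents.
% Each agent i privately holds a convex objective f_i : \mathbb{R}^p \to \mathbb{R}
% and exchanges information only along the edges of a directed graph.
\[
  \min_{x \in \mathbb{R}^p} \quad F(x) = \sum_{i=1}^{n} f_i(x).
\]
% D-DSD targets this unconstrained problem with possibly nondifferentiable f_i;
% D-DPS handles the constrained variant \min_{x \in \mathcal{X}} \sum_{i} f_i(x)
% over a constraint set \mathcal{X} (an assumed notation; see the thesis for details);
% DEXTRA and ADD-OPT additionally assume each f_i is smooth, i.e., differentiable.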
Keywords/Search Tags: Distributed, Algorithms, Optimization, DEXTRA, Agent, Objective functions