
Optimization over networks: Efficient algorithms and analysis

Posted on: 2014-12-23
Degree: Ph.D
Type: Thesis
University: University of Illinois at Urbana-Champaign
Candidate: Lee, Soomin
Full Text: PDF
GTID: 2458390008454582
Subject: Engineering
Abstract/Summary:
A number of important problems arising in various application domains can be formulated as distributed convex constrained minimization over a multi-agent network. The problem is usually defined as the minimization of a sum of convex objective functions over an intersection of convex constraint sets. The first part of this thesis focuses on the development and analysis of efficient distributed algorithms for such a problem over a multi-agent network in which each agent has its own objective function and constraint set. We propose gradient descent algorithms with random projections that use various communication protocols.

First, we present a distributed random projection (DRP) algorithm in which each agent exchanges local information only with its immediate neighbors at each iteration. Under reasonable assumptions, we prove that the iterates of all agents converge to the same point in the optimal set with probability 1. We also consider a variant of the method that uses a mini-batch of consecutive random projections and establish its convergence. Experiments on distributed support vector machines demonstrate fast convergence of the DRP algorithm: the number of iterations required for convergence is much smaller than the number needed to scan over all training samples just once.

Second, we propose an asynchronous gossip-based random projection (GRP) algorithm that solves the distributed problem using gossip-type communications and local computations. We analyze the convergence properties of the algorithm for an uncoordinated diminishing stepsize and for a constant stepsize. For a diminishing stepsize, we prove that the iterates of all agents converge to the same optimal point with probability 1. For a constant stepsize, we establish a bound on the expected distance from the iterates of the algorithm to the optimal point. We again consider a mini-batch variant that uses consecutive random projections and establish its convergence. Furthermore, we provide simulation results on a distributed robust model predictive control problem.

In the second part of the thesis, we discuss an efficient epoch gradient descent algorithm for obtaining fast and exact solutions of linear support vector machines (SVMs). SVMs with the popular hinge loss are strongly convex but do not have a Lipschitz continuous gradient. Using a smooth approximation technique, we obtain SVM formulations that are both strongly convex and have Lipschitz continuous gradients.
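To make the first part concrete, here is a minimal sketch of a DRP-style iteration: each agent averages its neighbors' iterates with doubly stochastic mixing weights, takes a gradient step on its local objective, and projects onto a single randomly sampled component of its local constraint set. Everything in the setup below (the 3-agent complete graph, quadratic objectives, and halfspace constraints) is an illustrative assumption, not the thesis's exact problem or algorithm; a mini-batch variant would replace the single sampled projection with several consecutive ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem (assumed, not from the thesis): 3 agents, each with
# a quadratic objective f_i(x) = 0.5*||x - c_i||^2 and a constraint set
# given by 4 halfspaces {x : a^T x <= b}, so projection onto one sampled
# halfspace has a closed form.
n_agents, dim, n_constraints = 3, 2, 4
centers = rng.normal(size=(n_agents, dim))
A = rng.normal(size=(n_agents, n_constraints, dim))
b = np.abs(rng.normal(size=(n_agents, n_constraints))) + 1.0
W = np.full((n_agents, n_agents), 1.0 / n_agents)  # doubly stochastic mixing

def grad(i, x):
    # Gradient of the local objective f_i at x.
    return x - centers[i]

def project_halfspace(x, a, beta):
    # Euclidean projection of x onto the halfspace {y : a^T y <= beta}.
    viol = a @ x - beta
    return x if viol <= 0.0 else x - (viol / (a @ a)) * a

x = rng.normal(size=(n_agents, dim))
for k in range(1, 2001):
    step = 1.0 / k            # diminishing stepsize
    mixed = W @ x             # consensus: mix neighbors' iterates
    for i in range(n_agents):
        v = mixed[i] - step * grad(i, mixed[i])        # local gradient step
        j = rng.integers(n_constraints)                # sample one constraint
        x[i] = project_halfspace(v, A[i, j], b[i, j])  # random projection

print(np.round(x, 4))  # all rows should be nearly identical
```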
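The gossip-based GRP variant can be sketched in the same setting (reusing `grad`, `project_halfspace`, `A`, `b`, and `n_constraints` from the block above): at each tick one random pair of agents wakes up, averages their two iterates, and each of the pair takes a local gradient step followed by a random projection, with no global synchronization. A single global counter drives the stepsize here for brevity; the thesis analyzes uncoordinated stepsizes, so treat this as a simplification.

```python
# Gossip-based variant (GRP), reusing the setup from the previous sketch.
x = rng.normal(size=(n_agents, dim))
for k in range(1, 5001):
    step = 1.0 / k  # simplification: the thesis allows uncoordinated stepsizes
    i, j = rng.choice(n_agents, size=2, replace=False)  # random gossip pair
    avg = 0.5 * (x[i] + x[j])                           # pairwise averaging
    for a_idx in (i, j):
        v = avg - step * grad(a_idx, avg)               # local gradient step
        c = rng.integers(n_constraints)                 # sample one constraint
        x[a_idx] = project_halfspace(v, A[a_idx, c], b[a_idx, c])
```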
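For the second part, the key ingredient is a smoothed hinge loss that keeps the L2-regularized SVM strongly convex while making its gradient Lipschitz continuous, which is the setting epoch-style gradient descent methods require. The Huberized hinge below is one standard smoothing and serves as an illustrative stand-in; it is not necessarily the exact approximation used in the thesis.

```python
import numpy as np

def huber_hinge(z, delta=0.5):
    # Smoothed hinge loss: 0 for z >= 1, quadratic on [1-delta, 1),
    # linear for z < 1-delta. Its derivative is Lipschitz with constant
    # 1/delta, unlike the plain hinge max(0, 1 - z).
    out = np.zeros_like(z, dtype=float)
    quad = (z >= 1 - delta) & (z < 1)
    lin = z < 1 - delta
    out[quad] = (1 - z[quad]) ** 2 / (2 * delta)
    out[lin] = 1 - z[lin] - delta / 2
    return out

def svm_objective(w, X, y, lam=1e-2, delta=0.5):
    # lam-strongly convex (via the L2 term) with Lipschitz gradient
    # (via the smoothing).
    margins = y * (X @ w)
    return 0.5 * lam * (w @ w) + huber_hinge(margins, delta).mean()

def svm_gradient(w, X, y, lam=1e-2, delta=0.5):
    # Gradient of the smoothed objective above.
    margins = y * (X @ w)
    d = np.zeros_like(margins)
    quad = (margins >= 1 - delta) & (margins < 1)
    lin = margins < 1 - delta
    d[quad] = -(1 - margins[quad]) / delta
    d[lin] = -1.0
    return lam * w + X.T @ (d * y) / len(y)
```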