
Distributed computation and optimization over networks

Posted on: 2012-03-25
Degree: Ph.D
Type: Dissertation
University: The University of Oklahoma
Candidate: Lu, Jie
Full Text: PDF
GTID: 1458390011957080
Subject: Engineering
Abstract/Summary:
This dissertation is devoted to the development of efficient, robust, and scalable distributed algorithms that enable agents in a large-scale, multi-hop network to cooperatively compute a global quantity, or solve an optimization problem, using only local interactions and without any centralized coordination. Algorithms of this nature are attracting growing interest from a number of scientific communities due to their broad applications, for example, autonomous agent coordination and control in mobile ad hoc networks, distributed signal processing and data fusion in wireless sensor networks, and studies of opinion dynamics in social networks.

In this dissertation, we address three fundamental problems in the area: averaging, solving positive definite linear equations, and unconstrained separable convex optimization. Based on a blend of tools and ideas from systems, optimization, and graph theories, we construct a novel set of distributed algorithms, including continuous- and discrete-time, gossip, and asynchronous variants, which solve these problems over undirected networks with arbitrary (and, in some cases, time-varying) topologies and agent memberships. We also analyze the properties of these algorithms, including their convergence rates and complexity characteristics, and compare them with existing schemes, showing analytically and numerically that our algorithms possess several appealing features.

The major contributions of this dissertation are as follows. First, we show that Lyapunov stability theory may be used to shape the behavior of asynchronous distributed algorithms. This finding allows us to introduce the notion of greedy, decentralized, feedback iteration control, leading to a class of Controlled Hopwise algorithms, which are highly bandwidth- and energy-efficient in wireless networks. The finding also creates a new paradigm in the design of asynchronous distributed algorithms, in which iterations are opportunistically controlled rather than randomized.

Second, we show that the Bregman divergence of the Lagrangian of a separable convex optimization problem may be used to form a common Lyapunov function. This result enables us to derive a family of Zero-Gradient-Sum algorithms, which yield nonlinear networked dynamical systems on an invariant manifold, and which differ fundamentally from, and have pros and cons relative to, the existing subgradient algorithms. The derivation also shows that a gossip variant within the family generalizes the classic Pairwise Averaging, and that the family itself is a natural generalization of several well-known distributed consensus algorithms to distributed convex optimization.

Finally, we provide a series of analyses of the properties of our algorithms (e.g., boundedness, asymptotic and exponential convergence, lower and upper bounds on convergence rates, and scalability) on various networks (e.g., path, cycle, regular, complete, and general graphs), describing explicitly how these properties depend on network topologies, problem characteristics, and algorithm parameters, including the algebraic connectivity, the Laplacian spectral radius, and function curvatures.
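To illustrate the classic Pairwise Averaging scheme that the Zero-Gradient-Sum gossip variant generalizes, the following is a minimal sketch of gossip-style averaging on a static undirected graph. The graph, initial values, and edge-selection rule here are illustrative placeholders, not the dissertation's specific algorithms.

```python
import random

def pairwise_averaging(values, edges, num_iterations=10000, seed=0):
    """Gossip-style Pairwise Averaging: at each tick, one randomly chosen
    edge (i, j) activates, and both endpoints replace their values with
    the pairwise mean. The network sum (hence the average) is preserved
    at every step, and on a connected graph all values converge to the
    global average."""
    rng = random.Random(seed)
    values = list(values)
    for _ in range(num_iterations):
        i, j = rng.choice(edges)
        mean = (values[i] + values[j]) / 2.0
        values[i] = mean
        values[j] = mean
    return values

# Example: a 4-node path graph 0-1-2-3; the global average is 6.0.
vals = pairwise_averaging([0.0, 4.0, 8.0, 12.0], [(0, 1), (1, 2), (2, 3)])
```

Because each pairwise exchange conserves the sum of the two updated values, the global average is an invariant of the dynamics, which is the conservation property the Zero-Gradient-Sum family lifts from averaging to convex optimization.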
Keywords/Search Tags: Distributed, Algorithms, Optimization, Networks