
Distributed algorithms for optimization and variational inequality problems: Addressing competition, uncertainty, and misspecification

Posted on: 2016-01-10
Degree: Ph.D.
Type: Dissertation
University: The Pennsylvania State University
Candidate: Kannan, Aswin
Full Text: PDF
GTID: 1470390017485826
Subject: Industrial Engineering
Abstract/Summary:
The first part of this work considers the computation of equilibria associated with Nash games that lead to monotone variational inequalities, wherein each player solves a convex program. Distributed extensions of standard approaches for solving such variational problems face two challenges: (1) unless suitable assumptions are imposed on the mapping arising in the specification of the variational inequality, iterative methods often require solving a sequence of regularized problems, a naturally two-timescale process that is harder to implement in practice; and (2) algorithm parameters for all players (such as steplengths and regularization parameters) have to be chosen centrally and communicated to all players; importantly, these parameters cannot be chosen independently by a player. Motivated by these shortcomings, we present two practically implementable distributed regularization schemes that operate on a single timescale; specifically, each scheme requires precisely one gradient or projection step at every iteration. Both schemes are characterized by the property that the regularization/centering parameter is updated after every iteration. To suit distributed settings requiring limited coordination across players, the schemes allow players to select their parameters independently and do not insist on central prescription of such parameters. We conclude with an application of these schemes to a networked Cournot game with nonlinear prices.

In the second portion of our work, we consider stochastic variational inequalities under pseudomonotone settings. Referred to as pseudomonotone stochastic variational inequality problems (PSVIs), such problems emerge from product pricing, fractional optimization problems, and subclasses of economic equilibrium problems arising in uncertain regimes.
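The single-timescale regularization idea from the first part can be sketched in a few lines. The problem data below (a skew-symmetric affine map on a box) and the decay exponents for the steplength and regularization parameter are invented for illustration; they are not the maps or parameter conditions analyzed in the thesis.

```python
import numpy as np

# Invented monotone affine map F(x) = A x + b on the box X = [0, 10]^2.
# A is skew-symmetric, so F is monotone but NOT strongly monotone; plain
# projection methods need not converge for such maps, which is what the
# iterative regularization addresses.  VI(X, F) is solved at x* = (0.5, 1).
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
b = np.array([-1.0, 0.5])
F = lambda x: A @ x + b

def project_box(x, lo=0.0, hi=10.0):
    return np.clip(x, lo, hi)

# Single timescale: exactly one projection step per iteration, with the
# steplength gamma_k and the regularization/centering parameter eps_k both
# updated at every iteration (no inner loop of regularized subproblems).
x = np.array([5.0, 5.0])
for k in range(100000):
    gamma_k = 1.0 / (k + 1) ** 0.6
    eps_k = 1.0 / (k + 1) ** 0.3
    x = project_box(x - gamma_k * (F(x) + eps_k * x))

print(x)  # drifts toward x* = (0.5, 1) as eps_k -> 0
```

The residual distance to x* at termination is governed by the current regularization level eps_k, which is why both sequences must be driven to zero at compatible rates.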
In the first part of the paper, we observe that a direct application of standard existence/uniqueness theory requires a tractable expression for the integrals arising from the expectation, a relative rarity under general distributions. Instead, we develop integration-free sufficiency conditions for the existence and uniqueness of solutions to PSVIs. In the second part of the paper, we consider the solution of PSVIs via stochastic approximation (SA) schemes, motivated by the observation that almost all prior SA schemes can accommodate only monotone SVIs. Under various forms of pseudomonotonicity, we prove that the solution iterates produced by extragradient SA schemes converge to the solution set in an almost sure sense. This result is further extended to mirror-prox regimes, and an analogous statement is provided for monotone regimes under a weak-sharpness requirement, where prior results had shown convergence only in terms of the gap function through the use of averaging. Under strong pseudomonotonicity, we derive the optimal initial steplength and show that the mean-squared error in the solution iterates produced by the extragradient SA scheme converges at the optimal rate of O(1/K). Similar rates are derived for mirror-prox generalizations and for monotone SVIs under a weak-sharpness requirement. Finally, both the asymptotics and the empirical rates of the schemes are studied on a set of pseudomonotone and non-monotone variational problems.

The third part studies networked settings where agents attempt to solve a common problem whose information is both misspecified and only partly known to every agent. We consider a convex optimization problem that requires minimizing a sum of misspecified agent-specific expectation-valued convex functions over the intersection of a collection of agent-specific convex sets, denoted by X1, ..., Xm.
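Before the networked setting of the third part, the extragradient SA update studied in the second part can be sketched on a toy problem. The scalar map, constraint set, noise model, and steplength sequence below are invented for illustration; the map is pseudomonotone but not monotone, which is the regime the thesis targets.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented scalar map F(x) = exp(-x) * (x - 1) on X = [0, 3]: the positive
# factor exp(-x) preserves pseudomonotonicity, yet F'(x) < 0 for x > 2, so
# F is NOT monotone.  VI(X, F) has the unique solution x* = 1.  Samples of
# F are corrupted by additive Gaussian noise (an illustrative noise model).
F = lambda x: np.exp(-x) * (x - 1.0)
proj = lambda x: np.clip(x, 0.0, 3.0)
sigma = 0.1

x = 2.0
for k in range(20000):
    gamma = 1.0 / (k + 1) ** 0.7  # diminishing, non-summable, square-summable
    # Extragradient SA: a forward step to y using one sample, then the
    # actual update of x using a fresh sample of F evaluated at y.
    y = proj(x - gamma * (F(x) + sigma * rng.standard_normal()))
    x = proj(x - gamma * (F(y) + sigma * rng.standard_normal()))

print(x)  # settles near the solution x* = 1
```

The second, "extra" evaluation at the intermediate point y is what allows convergence without monotonicity of F; a single projected-gradient SA step would require stronger assumptions on the map.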
The agent objectives are misspecified in a parametric sense, and this misspecification may be resolved by solving a distinct stochastic convex learning problem. We consider the simultaneous resolution of both problems through a joint set of schemes in which agents update their decisions and their beliefs regarding the misspecified parameter at every step. The decision update combines an agent-specific averaging step with a projected stochastic gradient step, while parameter updates are carried out through a projected stochastic gradient step. Given such a set of coupled schemes, we provide both almost sure convergence statements and convergence rate statements when either Xi = X for every i or X = X1 ∩ ... ∩ Xm. Notably, we prove that when X = X1 ∩ ... ∩ Xm and agent objectives are strongly convex, we recover the optimal stochastic approximation rate of O(1/√K) in the solution iterates despite the presence of averaging (arising from the consensus step) and learning (arising from misspecification). When the strong convexity assumption is weakened to mere convexity but Xi = X for every i, we show that the sequence averaged over the entire K iterations displays a modest degradation from the optimal convergence rate in terms of function value. When the averaging window is reduced to K/2, we recover the optimal rate of convergence in function values. Preliminary numerics are provided for an economic dispatch problem with misspecified cost functions. (Abstract shortened by UMI.)
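A minimal sketch of such a coupled scheme follows, under illustrative assumptions: quadratic agent objectives on a scalar decision, a common box constraint X, a fixed doubly stochastic mixing matrix, and a scalar misspecified parameter estimated from noisy observations. All problem data here are invented and are not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative problem: m = 3 agents jointly minimize sum_i f_i(x; theta*)
# with f_i(x; theta) = 0.5 * (x - theta * a[i])**2 over X = [-10, 10].
# The misspecified parameter theta* = 2 is seen only through noisy samples;
# the true minimizer of the sum is x* = theta* * mean(a) = 4.
a = np.array([1.0, 2.0, 3.0])
theta_star, sigma = 2.0, 0.1
W = np.array([[0.50, 0.25, 0.25],        # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
proj = lambda z: np.clip(z, -10.0, 10.0)

x = np.zeros(3)       # agents' decisions
theta = np.zeros(3)   # agents' beliefs about theta*
for k in range(5000):
    gamma = 1.0 / (k + 1)
    v = W @ x                                   # agent-specific averaging step
    grad = v - theta * a + sigma * rng.standard_normal(3)  # sampled gradients
    x = proj(v - gamma * grad)                  # projected stochastic gradient
    obs = theta_star + sigma * rng.standard_normal(3)      # noisy observations
    theta = proj(theta - gamma * (theta - obs)) # parameter (learning) update

print(x, theta)  # decisions approach x* = 4, beliefs approach theta* = 2
```

Both recursions share the steplength here for simplicity; the key feature mirrored from the abstract is that decisions and parameter beliefs are updated simultaneously at every step rather than learning theta first and optimizing afterward.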
Keywords/Search Tags: Variational, Problem, Distributed, Misspecified, Schemes, Optimal rate, Optimization, Monotone