
Research On Distributed Optimization Theory Over Networked Multi-agent Systems

Posted on: 2019-06-26
Degree: Doctor
Type: Dissertation
Country: China
Candidate: J L Zhu
Full Text: PDF
GTID: 1318330545458214
Subject: Computer Science and Technology
Abstract/Summary:
With the rapid development of science and technology, networked multi-agent systems have received significant interest in both academic and industrial communities. The objective of such systems is to solve large-scale and complicated problems. A networked multi-agent system consists of numerous agents and is formed through interaction among them; such systems exhibit excellent properties of coordination, distributivity, and autonomy. Numerous decision, learning, and control problems in networked multi-agent systems can be posed as optimization problems: for instance, resource allocation in communication networks, estimation and detection in sensor networks, distributed learning and regression in machine learning and control, distributed tracking and multi-agent coordination, and optimal control in smart grids. Such problems require the design of distributed optimization algorithms. In this thesis, we consider distributed optimization problems over networked multi-agent systems, where a set of agents collectively solves a global optimization problem whose objective function is the sum of locally known objective functions, and each agent uses only its local information and exchanges information with its neighbors over the network. This thesis develops distributed optimization theory to solve such problems. The main contributions are as follows.

First, we consider a distributed constrained optimization problem over networked multi-agent systems, where the cost function of each agent is private and the networks are time-varying and directed. To address this problem, we propose a fully decentralized stochastic subgradient projection algorithm over time-varying directed graphs. Since the graphs are directed, the weight matrix may not be doubly stochastic; we overcome this difficulty with a weight-balancing technique. By choosing appropriate step-sizes, we show that the iterates of all agents asymptotically converge to an optimal solution. Furthermore, our analysis shows that the convergence rate of the proposed algorithm is O(log T/T) under local strong convexity, where T is the number of iterations, and O(log T/√T) under local convexity.

Second, we consider a distributed constrained optimization problem over time-varying networks, where each agent knows only its own cost function and its own constraint set. In some applications the local constraint set may not be known in advance, or may consist of a huge number of components. To handle such cases, we propose a distributed stochastic subgradient algorithm over time-varying networks, in which each agent projects its estimate onto its constraint set via a random projection technique, and information exchange between agents is implemented through an asynchronous broadcast communication protocol. We show that the proposed algorithm converges with probability 1 under suitably chosen learning rates. For a constant learning rate, we obtain an asymptotic bound on the error, defined as the expected distance between the agents' estimates and the optimal solution, and we establish another asymptotic upper bound on the difference between the global objective value at the average of the estimates and the optimal value.

Third, we study the performance of the diffusion LMS (Least-Mean-Square) algorithm for distributed parameter estimation over sensor networks, a specific kind of networked multi-agent system, with quantized data and random topology: data are quantized before transmission and links fail at random times. To achieve unbiased estimation of the unknown parameter, we add dither (small noise) to the sensor states before quantization. We first propose a diffusion LMS algorithm with quantized data and random link failures. We then analyze the stability and convergence of the proposed algorithm and derive closed-form expressions for the MSD (Mean-Square Deviation) and the EMSE (Excess Mean-Square Error), which characterize its steady-state performance. We show that the convergence of the proposed algorithm is independent of the quantized data and the random topology. Moreover, the analytical results reveal which factors influence the performance of the algorithm, and we show that quantization is the main factor in its performance degradation. Finally, we provide computer simulation results that illustrate the performance of the proposed algorithm and verify the theoretical analysis.

Fourth, we consider a distributed online optimization problem over networked multi-agent systems, where the local cost function allocated to each agent possibly changes over time and each agent learns its allocated function only in hindsight. The network is assumed to be time-varying and directed. To solve this problem, we propose a distributed stochastic subgradient online optimization algorithm over time-varying directed graphs. Since a directed graph can generate an asymmetric weight matrix that is not doubly stochastic, we again employ the weight-balancing technique. By choosing appropriate learning rates, we prove that the proposed algorithm achieves logarithmic regret under strong convexity and square-root regret under convexity, with probability 1. Experimental results demonstrate the good performance of the proposed algorithm.

Fifth, we consider a private distributed online optimization problem (PDOOP), where a set of agents aims to minimize the sum of locally convex cost functions while each agent desires that its local cost function be kept differentially private. To solve this problem, we propose a differentially private distributed stochastic subgradient online optimization algorithm over time-varying directed networks, using differential privacy to protect the participating agents. We show that our algorithm preserves differential privacy and achieves logarithmic expected regret under local strong convexity and square-root expected regret under local convexity. Furthermore, we reveal the trade-off between the privacy level and the performance of the algorithm.

In summary, this thesis focuses on distributed optimization theory over networked multi-agent systems. The research in this thesis provides a reference for the future study and development of optimization theory.
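As an illustration of the consensus-plus-subgradient structure underlying the first contribution, the following sketch runs a distributed projected subgradient method on a toy problem. For simplicity it uses a fixed doubly stochastic mixing matrix (the thesis instead balances time-varying directed graphs); the quadratic costs, the matrix, and the constraint interval [0, 10] are all hypothetical.

```python
import numpy as np

# Toy setup (hypothetical): 3 agents, each with a private quadratic cost
#   f_i(x) = (x - b_i)^2, so the minimizer of sum_i f_i is mean(b) = 4.0.
b = np.array([1.0, 4.0, 7.0])           # private data of each agent
W = np.array([[0.50, 0.50, 0.00],       # doubly stochastic mixing matrix
              [0.50, 0.25, 0.25],       # (a stand-in for the thesis's
              [0.00, 0.25, 0.75]])      #  weight-balanced directed graph)
X = np.zeros(3)                         # each agent's current estimate

for t in range(1, 2001):
    step = 1.0 / t                      # diminishing step-size
    mixed = W @ X                       # consensus: average with neighbors
    grad = 2 * (mixed - b)              # local (sub)gradient of f_i
    X = np.clip(mixed - step * grad, 0.0, 10.0)   # project onto [0, 10]

print(np.round(X, 2))                   # all agents end up near mean(b) = 4.0
```

The two-phase structure (mix, then step and project) is the template that the thesis's stochastic subgradient projection algorithm refines for directed, time-varying graphs.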
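The adapt-then-combine structure of diffusion LMS with dithered quantization, as in the third contribution, can be sketched as follows. The network size, combination weights, step-size, quantizer resolution, and noise levels are illustrative assumptions, and random link failures are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.0])           # unknown parameter (hypothetical)
N = 4                                    # number of sensors
C = np.full((N, N), 1.0 / N)             # combination weights (fully connected)
W = np.zeros((N, 2))                     # local estimates, one row per sensor
mu, Delta = 0.05, 0.05                   # LMS step-size and quantizer step

def quantize(x):
    # Dithered quantization: adding uniform noise before rounding makes the
    # quantization error zero-mean, which keeps the estimates unbiased.
    dither = rng.uniform(-Delta / 2, Delta / 2, size=x.shape)
    return Delta * np.round((x + dither) / Delta)

for t in range(3000):
    # Adapt: each sensor runs one LMS step on its own noisy measurement.
    U = rng.standard_normal((N, 2))                  # regressors
    d = U @ w_true + 0.1 * rng.standard_normal(N)    # observations
    psi = W + mu * (d - np.sum(U * W, axis=1))[:, None] * U
    # Combine: sensors fuse their neighbors' quantized intermediate estimates.
    W = C @ quantize(psi)

print(np.round(W.mean(axis=0), 2))       # close to w_true = [0.5, -1.0]
```

The extra steady-state error introduced by `quantize` is what the thesis's closed-form MSD and EMSE expressions quantify.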
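The privacy protection in the fifth contribution can be illustrated with the standard Laplace mechanism: before an agent shares its estimate, it adds Laplace noise whose scale is set by the privacy budget. The budget `eps` and the sensitivity bound below are hypothetical, and the thesis's actual mechanism and sensitivity analysis may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.5            # privacy budget per release (hypothetical)
sensitivity = 1.0    # assumed bound on how much one agent's data can
                     # shift the estimate it broadcasts

def privatize(x, eps, sensitivity):
    # Laplace mechanism: noise with scale sensitivity/eps yields
    # eps-differential privacy for a single release of x.
    return x + rng.laplace(scale=sensitivity / eps, size=x.shape)

estimate = np.array([2.0, -1.5])   # an agent's current estimate (toy values)
noisy = privatize(estimate, eps, sensitivity)
```

Shrinking `eps` strengthens privacy but inflates the noise scale, which is exactly the privacy-versus-regret trade-off the thesis characterizes.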
Keywords/Search Tags:Networked multi-agent systems, Distributed optimization, Distributed online optimization, Differential privacy, Diffusion LMS