
Research on Fast Distributed Optimization Algorithms for Multi-Agent Systems

Posted on: 2021-05-27
Degree: Master
Type: Thesis
Country: China
Candidate: L F Zheng
Full Text: PDF
GTID: 2428330611964013
Subject: Signal and Information Processing
Abstract/Summary:
With the rapid development of communication and computer technology, emerging services such as cloud computing, the Internet of Things, and social networks have increased the variety and scale of data in human society at an unprecedented rate, and the world has entered the era of networked big data. Faced with ever-growing data scales, traditional centralized computing methods are often powerless, or consume enormous computing resources and time, because of the limits of single-machine computing capacity. In response, many scholars have turned their attention to multi-agent systems, seeking to solve complex large-scale problems through the cooperation of multiple agents. Moreover, problems arising in machine learning, wireless sensor networks, statistical learning, and other fields of real life and production are usually modeled as optimization problems, and traditional centralized optimization algorithms often consume substantial computing resources and time to reach the optimal solution of large-scale instances. Distributed optimization over multi-agent systems has therefore become a hot topic attracting a large number of researchers.

Unlike traditional centralized optimization, distributed optimization theory for multi-agent systems posits a network of multiple agents; the network may be a fixed undirected network, a fixed directed network, a time-varying undirected network, a time-varying directed network, and so on. A large-scale complex problem is decomposed into multiple relatively simple local objective problems that are distributed to the agents in the network. Each agent only processes its local objective and interacts with its neighboring agents, and the global optimal solution is obtained after a number of iterations. Although multi-agent distributed optimization reduces the computing power required of each unit and the time needed to reach the optimization goal, the performance of such algorithms is still worth improving. Among the existing acceleration techniques, the main ones are the Nesterov method, the heavy-ball method, and the stochastic gradient method. This paper therefore studies large-scale complex optimization problems and designs distributed optimization algorithms that converge quickly to the global optimal solution. The research work is divided into the following three points:

(1) For the optimization problem over time-varying unbalanced directed networks, a fast distributed optimization algorithm based on the heavy-ball method is proposed. In practical communication environments, especially in applications such as sensor networks, the communication between agents may be directed and may change over time, so eliminating network imbalance has become an important challenge in distributed optimization over multi-agent systems. By using both row-stochastic and column-stochastic matrices, the algorithm removes the conservativeness caused by doubly-stochastic matrices and broadens its scope of application. Unlike existing work, the algorithm no longer needs to estimate the Perron eigenvector of the stochastic matrix. Under the assumptions that the global objective function is strongly convex and each local objective function has a Lipschitz continuous gradient, this paper proves that the proposed algorithm converges to the global optimal solution at a linear rate when the momentum parameters and step-sizes are selected from proper ranges. The momentum term improves the convergence rate, and the uncoordinated step-sizes allow each agent to autonomously choose its step-size within a suitable range, which improves the flexibility of the system. Simulation examples such as logistic regression and support vector machines demonstrate the practicability of the algorithm and the correctness of the theoretical results; a minimal sketch of this style of update follows.
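The sketch below illustrates a heavy-ball accelerated gradient-tracking update over a directed graph, in the spirit of point (1). It is not the thesis's exact algorithm: it assumes a fixed unbalanced directed graph rather than a time-varying one, uses illustrative quadratic local objectives, and the weight matrices, step-sizes, and momentum parameter are hypothetical choices. The row-stochastic matrix A and column-stochastic matrix B together play the role that a single doubly-stochastic matrix plays in undirected methods.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3                                   # agents, problem dimension

# A fixed directed ring plus one extra edge stands in for the time-varying
# unbalanced graphs treated in the thesis (assumption for illustration).
adj = np.eye(n) + np.roll(np.eye(n), 1, axis=1)
adj[0, 2] = 1                                 # extra edge makes the graph unbalanced
A = adj / adj.sum(axis=1, keepdims=True)      # row-stochastic (pull weights)
B = adj / adj.sum(axis=0, keepdims=True)      # column-stochastic (push weights)

# Illustrative local objectives f_i(x) = 0.5*||Q_i x - b_i||^2 (not from the thesis).
Q = [rng.standard_normal((5, d)) for _ in range(n)]
b = [rng.standard_normal(5) for _ in range(n)]
grad = lambda i, x: Q[i].T @ (Q[i] @ x - b[i])

alpha = np.full(n, 0.01)                      # uncoordinated step-sizes (assumed values)
beta = 0.3                                    # heavy-ball momentum parameter (assumed)

x = np.zeros((n, d))
x_prev = x.copy()
y = np.array([grad(i, x[i]) for i in range(n)])   # gradient trackers

for _ in range(1000):
    # Consensus pull, descent along the tracked gradient, plus a momentum term.
    x_new = A @ x - alpha[:, None] * y + beta * (x - x_prev)
    # Push-style tracker update: column-stochastic B preserves the sum of the y's.
    y = B @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x_prev, x = x, x_new

print(x)  # rows (approximately) agree: consensus on the global minimizer
```

Because B is column-stochastic, the trackers y always sum to the current sum of local gradients, which is what lets each agent follow the global descent direction without any agent knowing the whole objective.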
(2) For large-scale complex optimization problems, a gradient tracking algorithm based on the stochastic average gradient is proposed. In machine learning scenarios the data involved in the optimization problem are large, and each complex local objective function can be further decomposed into a sum of multiple instantaneous functions. Gradient-tracking-based distributed algorithms in their original form often spend considerable computing resources and time evaluating the full gradient of the local objective, so quickly solving large-scale problems has become an urgent issue. Building on the classic distributed gradient tracking algorithm and the stochastic average gradient mechanism, a distributed stochastic gradient tracking algorithm is proposed. At each iteration, every agent randomly selects one instantaneous function for gradient evaluation; because each instantaneous function is selected with equal probability, the expected value of each agent's gradient estimate equals the gradient of its local objective, i.e., the estimate is unbiased. By recasting the algorithm in a primal-dual form and exploiting the unbiasedness of the gradient estimate, this paper proves that the proposed algorithm converges to the global optimal solution at a linear rate when the step-size does not exceed a given upper bound. Simulation experiments demonstrate the effectiveness of the algorithm and the correctness of the theoretical analysis; compared with other algorithms, the proposed algorithm greatly reduces resource consumption and time cost in reaching a solution of the same precision. A sketch of this kind of estimator appears below, after point (3).

(3) For optimization problems with a composite objective containing smooth and non-smooth terms, a distributed successive convex approximation algorithm based on the Nesterov method is proposed. In many optimization problems part of the objective is non-smooth; for example, in machine learning an l1-norm is often added to the objective to ensure sparsity of the solution. To solve this kind of optimization problem and achieve fast convergence, this paper studies the Nesterov method, the gradient tracking algorithm, and the successive convex approximation algorithm, and combines them into a distributed optimization algorithm. Using the small-gain theorem, this paper proves that the proposed algorithm converges to the global optimal solution at a linear rate when the objective function is strongly convex; when the objective function is non-convex, the algorithm still converges to a stationary point at a sub-linear rate. Simulation experiments demonstrate the practicability of the algorithm and the correctness of the theoretical analysis; a sketch of a composite update of this flavor also follows below.
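As a companion to point (2), here is a minimal sketch of a SAGA-style stochastic average gradient estimator combined with gradient tracking over an undirected ring. The graph, weights, objectives, and parameters are illustrative assumptions, not the thesis's: each agent keeps a table of the most recent gradient of every instantaneous function, and replacing the stale entry with a fresh one keeps the estimate unbiased while shrinking its variance over time.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 4, 3, 10                             # agents, dimension, samples per agent

# Doubly-stochastic weights on an undirected ring (assumed for the sketch).
W = (np.eye(n) * 0.5 + np.roll(np.eye(n), 1, axis=1) * 0.25
     + np.roll(np.eye(n), -1, axis=1) * 0.25)

# Instantaneous least-squares terms f_{i,s}(x) = 0.5*(q_{i,s}^T x - b_{i,s})^2.
q = rng.standard_normal((n, m, d))
b = rng.standard_normal((n, m))
g_inst = lambda i, s, x: q[i, s] * (q[i, s] @ x - b[i, s])

alpha = 0.02                                   # step-size (assumed, below the bound)
x = np.zeros((n, d))
table = np.array([[g_inst(i, s, x[i]) for s in range(m)] for i in range(n)])
v = table.mean(axis=1)                         # SAGA estimate of each local gradient
y = v.copy()                                   # gradient trackers

for _ in range(3000):
    x_new = W @ x - alpha * y                  # consensus + tracked-gradient descent
    v_new = np.empty_like(v)
    for i in range(n):
        s = rng.integers(m)                    # uniform pick -> unbiased estimate
        fresh = g_inst(i, s, x_new[i])
        v_new[i] = fresh - table[i, s] + table[i].mean(axis=0)
        table[i, s] = fresh                    # refresh the stored gradient
    y = W @ y + v_new - v                      # track the network-average estimate
    v, x = v_new, x_new
```

Only one instantaneous gradient is evaluated per agent per iteration, which is the source of the savings reported for point (2): the cost per step no longer grows with the number of samples m.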
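Finally, for point (3), a minimal sketch of a composite update: Nesterov-style extrapolation, a consensus-plus-tracked-gradient step, and the proximal operator of the l1-norm (soft-thresholding) to handle the non-smooth term. This illustrates the ingredients the thesis combines rather than reproducing its successive convex approximation algorithm; the graph, objectives, and all parameters are assumptions.

```python
import numpy as np

# Proximal operator of t*||.||_1 is elementwise soft-thresholding.
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(2)
n, d = 4, 3
W = (np.eye(n) * 0.5 + np.roll(np.eye(n), 1, axis=1) * 0.25
     + np.roll(np.eye(n), -1, axis=1) * 0.25)   # doubly-stochastic ring (assumed)

# Smooth parts f_i(x) = 0.5*||Q_i x - b_i||^2; non-smooth part lam*||x||_1.
Q = [rng.standard_normal((5, d)) for _ in range(n)]
b = [rng.standard_normal(5) for _ in range(n)]
grad = lambda i, x: Q[i].T @ (Q[i] @ x - b[i])

alpha, beta, lam = 0.02, 0.3, 0.1              # step, extrapolation, l1 weight (assumed)
x = np.zeros((n, d))
x_prev = x.copy()
y = np.array([grad(i, x[i]) for i in range(n)])  # trackers for the smooth gradients

for _ in range(2000):
    s = x + beta * (x - x_prev)                # Nesterov-style extrapolation
    z = W @ s - alpha * y                      # consensus + tracked-gradient step
    x_new = soft(z, alpha * lam)               # prox handles the non-smooth l1 term
    y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x_prev, x = x, x_new
```

The soft-thresholding step is what produces exactly-sparse iterates, which is why the l1 regularizer mentioned in point (3) must be handled through a proximal map rather than through its (non-existent) gradient.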
Keywords/Search Tags: Multi-agent system, distributed algorithm, convex optimization, stochastic gradient, momentum method