
Distributed Online Learning Algorithms Based On Conditional Gradient Method

Posted on: 2020-04-28  Degree: Master  Type: Thesis
Country: China  Candidate: Q Dong  Full Text: PDF
GTID: 2370330575471937  Subject: Applied Mathematics
Abstract/Summary:
In recent years, with the rise of the Internet, emerging fields such as cloud computing and big data have produced explosive growth in the amount of data, and processing massive data has become a pressing issue in engineering applications and other industries. Many optimization algorithms have been proposed that use multi-agent networks to extract useful information from large-scale data sets. Such algorithms fall into two classes: centralized and distributed. A distributed algorithm assumes that no node in the multi-agent network takes a primary role over the others; nodes communicate with one another to share computing resources, which effectively reduces the risk, present in centralized algorithms, that the network's central node fails. Existing distributed algorithms usually assume that the node data in the multi-agent network are static and wait until the data of all nodes in the network have been collected before processing them. This offline learning mode incurs high communication costs, so it is of great significance to study distributed algorithms in the online learning mode.

For the setting in which multi-agent network node data arrive as dynamic streams, this thesis proposes a distributed online conditional gradient algorithm (DOCG) and a fast distributed online conditional gradient algorithm (Fa-DOCG). First, to overcome the difficulty that high-dimensional constraints in existing distributed online optimization algorithms are expensive to handle, the DOCG algorithm is proposed: it exploits the projection-free nature of the conditional gradient method, replacing the projection step with a linear optimization step and thereby removing the computational bottleneck caused by projecting onto a high-dimensional constraint set. Second, to address the slow convergence of DOCG, the Fa-DOCG algorithm is proposed. Finally, by defining corresponding regret bounds to characterize the performance of online estimation, this thesis proves that the DOCG and Fa-DOCG algorithms achieve regret of O(T^(3/4)) and O(log T), respectively.

Both algorithms combine a regularization function with the local information of each node to construct a new instantaneous cost function at each time step, thereby enabling real-time processing of node data in multi-agent networks and effectively addressing the traditional conditional gradient method's insensitivity to gradient information. The effectiveness of the two algorithms is proved theoretically. The analysis shows that the projection operation does not affect whether the algorithm converges, but it does affect the convergence speed; replacing the projection operation with linear approximate optimization can, to some extent, improve the convergence speed of the algorithm.

Figures: 20; Tables: 0; References: 44.
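To make the core mechanism concrete, the following is a minimal single-node sketch of a projection-free online conditional gradient (Frank-Wolfe) update in Python. It is not the thesis's DOCG or Fa-DOCG algorithm: the distributed consensus step over the multi-agent network is omitted, and the L1-ball constraint set, quadratic regularizer, and step-size schedule are illustrative assumptions chosen only to show how a linear optimization step replaces the projection step.

```python
# Minimal single-node sketch of a projection-free online conditional gradient
# (Frank-Wolfe) update. The distributed consensus/averaging step of DOCG is
# omitted; the constraint set, regularizer, and step sizes are assumptions.
import numpy as np


def linear_oracle_l1(grad, radius=1.0):
    """Solve min_{||s||_1 <= radius} <grad, s>: a signed vertex of the L1 ball."""
    s = np.zeros_like(grad)
    i = int(np.argmax(np.abs(grad)))
    s[i] = -radius * np.sign(grad[i])
    return s


def online_conditional_gradient(loss_grad, dim, T, radius=1.0):
    """Run T rounds of online Frank-Wolfe.

    loss_grad(t, x) returns the gradient of the round-t loss at x.
    Instead of projecting onto the constraint set, each round calls the
    linear oracle and moves by a convex combination, so the iterate
    stays feasible without any projection.
    """
    x = np.zeros(dim)        # feasible starting point (inside the L1 ball)
    g_acc = np.zeros(dim)    # accumulated gradient information
    iterates = []
    for t in range(1, T + 1):
        g_acc += loss_grad(t, x)
        # Surrogate gradient = accumulated loss gradients + gradient of an
        # assumed quadratic regularizer (1/2)||x||^2.
        direction = linear_oracle_l1(g_acc + x, radius)
        gamma = 1.0 / np.sqrt(t)   # diminishing step size (assumption)
        x = (1.0 - gamma) * x + gamma * direction
        iterates.append(x.copy())
    return iterates


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    targets = rng.normal(size=(50, 5))
    # Round-t loss: 0.5 * ||x - targets[t-1]||^2 (purely illustrative).
    xs = online_conditional_gradient(lambda t, x: x - targets[t - 1], dim=5, T=50)
    print("final iterate:", xs[-1])
```

The linear oracle over the L1 ball has a closed-form solution (a signed vertex), which is exactly the kind of cheap step that motivates replacing projection when the constraint set is high-dimensional.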
Keywords/Search Tags: distributed optimization, conditional gradient, projection-free, online learning, regret bound