
Fast Distributed Online Learning Algorithms In Networks

Posted on: 2022-06-18    Degree: Master    Type: Thesis
Country: China    Candidate: X Y Shen    Full Text: PDF
GTID: 2518306341956119    Subject: Applied Mathematics
Abstract/Summary:
With the rapid development of Internet information technology, and especially the advent of the 5G era, smart devices have grown rapidly, forming large-scale networks of smart appliances, vehicles, equipment, and tools. These devices collect large amounts of real-time data, execute complex computing tasks, and provide important services that significantly improve people's lives and enrich collective productivity.

An urgent problem at present is how to process massive, dynamic streaming data. Conventional batch learning methods cannot keep up with streaming data, and many existing algorithms are designed for centralized networks, where the central node easily becomes a communication bottleneck and the efficiency of data processing drops. A distributed algorithm, by contrast, has no central node: all nodes in the network communicate and cooperate with each other to handle the global task, which effectively avoids communication overload at a central node and enhances the robustness of the network. It is therefore of great practical significance to investigate distributed online methods for the underlying optimization problems.

This dissertation proposes a distributed adaptive online gradient algorithm with weight decay (WDDAOG) and a distributed online gradient algorithm with a dynamic learning rate (DADABOUND) for distributed online learning problems. The concrete work is described as follows:

The first part improves the distributed gradient descent algorithm so that it converges faster. First, we extend the original distributed gradient descent algorithm to the online setting. Then we use moment estimates of the gradient as the update direction and impose weight decay on the nodes' updates; an illustrative sketch of such a node update is given after this abstract. On this basis we state the WDDAOG algorithm, prove its dynamic regret, and improve the regret bound from O(n√T) to O(√T). Finally, we conduct experiments to evaluate the performance. The results show that WDDAOG outperforms existing distributed algorithms and also verify the theoretical regret bounds on different networks.

The second part is mainly inspired by gradient clipping. We apply clipping to the learning rate instead, so that the algorithm avoids extreme learning rates and achieves better performance; a sketch of the clipping step also follows the abstract. Firstly, we use moment estimates of the gradient to replace the raw gradient information. Secondly, we state the DADABOUND algorithm using the learning-rate clipping technique and prove that it has a dynamic regret bound of O(√T). Finally, we conduct experiments to test the effectiveness of the proposed algorithm.

Figures: 20; Tables: 3; References: 82.
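The abstract does not reproduce WDDAOG's update rule, but the description above (consensus over a network, moment estimates of the gradient as the update direction, weight decay on each node's update) suggests a step of roughly the following shape. This is a minimal sketch under those assumptions; the function name, the mixing-matrix row `mix_weights`, and all hyperparameters are illustrative, not the thesis's notation.

```python
import numpy as np

def wddaog_node_update(x_neighbors, mix_weights, grad, m, v, t,
                       lr=0.01, beta1=0.9, beta2=0.999, wd=1e-4, eps=1e-8):
    """Hypothetical sketch of one node's adaptive update with weight decay.

    x_neighbors: iterates of this node's neighbors (including itself),
    mix_weights: the matching row of a doubly stochastic mixing matrix,
    grad: this node's online (sub)gradient at round t (t starts at 1).
    """
    # Consensus step: mix the local iterate with the neighbors' iterates.
    x_avg = sum(w * x for w, x in zip(mix_weights, x_neighbors))
    # Adam-style first and second moment estimates of the gradient.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)  # bias correction
    v_hat = v / (1 - beta2 ** t)
    # Adaptive step plus decoupled weight decay on the mixed iterate.
    x_new = x_avg - lr * (m_hat / (np.sqrt(v_hat) + eps) + wd * x_avg)
    return x_new, m, v
```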
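DADABOUND's learning-rate clipping is described only at a high level, so the sketch below shows an AdaBound-style clipping step: the raw per-coordinate adaptive rate is confined to a band whose lower and upper envelopes both converge to a final rate, which rules out extreme learning rates early on. The bound schedules `eta_l`/`eta_u` and all parameters are assumptions for illustration, not the thesis's exact schedules.

```python
import numpy as np

def clipped_step(m_hat, v_hat, t, base_lr=0.01, final_lr=0.1,
                 gamma=1e-3, eps=1e-8):
    """Illustrative AdaBound-style learning-rate clipping (assumed schedules).

    Returns the update to subtract from the (consensus) iterate. The raw
    adaptive rate base_lr / sqrt(v_hat) is clipped into [eta_l(t), eta_u(t)].
    """
    # Lower/upper envelopes that converge to final_lr as t grows.
    eta_l = final_lr * (1 - 1 / (gamma * t + 1))
    eta_u = final_lr * (1 + 1 / (gamma * t))
    # Clip each coordinate's adaptive rate into the shrinking band.
    rate = np.clip(base_lr / (np.sqrt(v_hat) + eps), eta_l, eta_u)
    return rate * m_hat
```

Because the band tightens around final_lr, the method behaves adaptively in early rounds and like plain (distributed) gradient descent asymptotically, which is the usual motivation for this clipping technique.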
Keywords/Search Tags:distributed optimization, online learning, adaptive gradient, momentum estimate, dynamic regret