
Research On Convergence Of Decentralized Machine Learning Algorithms For Unstable Network Environments

Posted on: 2024-09-09
Degree: Master
Type: Thesis
Country: China
Candidate: L X Zhang
Full Text: PDF
GTID: 2568306920950919
Subject: Computer Science and Technology
Abstract/Summary:
Decentralized machine learning offers a major advantage in breaking the communication bottleneck, and with deep learning training clusters now reaching hundreds of nodes, decentralized architectures have become indispensable for accelerating training. Decentralized learning ideas have been applied in many fields, including large-scale deep learning, federated learning, and edge computing. However, most decentralized learning algorithms focus on reducing communication overhead and do not account for the possibility of network instability. An unstable network environment causes data loss during communication on the one hand, and makes the data transmitted during communication unreliable on the other. Existing analyses of unstable networks have various limitations, such as considering only a single scenario, considering only centralized settings, or relying on unrealistically strong assumptions. Therefore, this thesis integrates multiple instability factors to explore how unstable network connections and noise affect the convergence of decentralized machine learning algorithms. We further extend the problem to non-convex optimization and study the convergence of decentralized stochastic gradient descent both in theory and in practical applications. Specifically, the main contributions of this thesis are as follows:

1. We consider non-convex decentralized optimization over unstable network connections and propose a robust decentralized stochastic gradient descent algorithm for this setting. Beyond standard assumptions, we assume only a realistic and necessary bound on network instability to analyze the convergence of the proposed algorithm. With a suitable choice of learning rate, the algorithm achieves a convergence rate of O(1/√(nK)), where n denotes the number of workers and K the total number of iterations. This matches the convergence rate of decentralized stochastic gradient descent under reliable network connectivity, and the algorithm achieves linear speedup, i.e., the convergence rate improves linearly with the number of workers. The theoretical results also apply to the general case where data are not independently and identically distributed.

2. We apply the above decentralized algorithm to noisy scenarios. Specifically, we propose a generic noise model covering different noise classes, such as channel noise, compression noise, and differential privacy noise. Without restricting the type of noise, and assuming only that the noise-perturbed parameters are bounded, we show that the proposed algorithm retains its original convergence performance in the noisy environment. We also present numerical and experimental results on how noise affects the convergence of the proposed algorithm.

In summary, this thesis considers two types of instability factors and characterizes their impact on the convergence of decentralized algorithms. The proposed algorithm is applicable to various decentralized optimization scenarios and can be applied to deep learning, federated learning, and IoT under unstable networks. An illustrative sketch of this setting is given below.
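The following is a minimal, hypothetical sketch (not the thesis's actual algorithm or code) of the kind of setting described above: decentralized SGD where neighbors exchange parameters over unreliable links that may drop messages, and any message that does arrive is perturbed by bounded noise. The ring topology, drop probability p_drop, noise scale, quadratic local objectives, and all numeric values are assumptions chosen only for illustration.

# Illustrative sketch, assuming a ring topology and simple quadratic local objectives.
# Each gossip message may be dropped (unstable link) or perturbed by bounded noise
# (standing in for channel/compression/differential-privacy noise).
import numpy as np

rng = np.random.default_rng(0)

n_workers = 8        # n in the abstract
dim = 10             # model dimension
n_iters = 500        # K in the abstract
lr = 0.05            # learning rate
p_drop = 0.3         # probability that a neighbor's message is lost (assumed value)
noise_scale = 0.01   # scale of the perturbation on received parameters (assumed value)

# Worker i holds a local objective f_i(x) = ||x - t_i||^2 / 2 with its own target t_i,
# mimicking non-IID local data.
targets = rng.normal(size=(n_workers, dim))
x = rng.normal(size=(n_workers, dim))  # local model copies

def local_grad(i, xi):
    """Stochastic gradient of worker i's local objective (gradient plus sampling noise)."""
    return (xi - targets[i]) + 0.1 * rng.normal(size=dim)

for k in range(n_iters):
    # 1) Local SGD step on every worker.
    x = x - lr * np.stack([local_grad(i, x[i]) for i in range(n_workers)])

    # 2) Gossip averaging over the ring; each incoming message may be dropped or noisy.
    x_new = np.empty_like(x)
    for i in range(n_workers):
        received = [x[i]]  # a worker always keeps its own parameters
        for j in ((i - 1) % n_workers, (i + 1) % n_workers):
            if rng.random() > p_drop:  # the link worked this round
                received.append(x[j] + noise_scale * rng.normal(size=dim))
        # Average whatever actually arrived, so missing neighbors are tolerated.
        x_new[i] = np.mean(received, axis=0)
    x = x_new

# Diagnostics: how far the workers are from consensus and from the global optimum.
global_opt = targets.mean(axis=0)
print("consensus error :", np.linalg.norm(x - x.mean(axis=0)))
print("distance to opt :", np.linalg.norm(x.mean(axis=0) - global_opt))

Running the sketch shows both the consensus error and the distance to the global optimum shrinking despite dropped and noisy messages, which is the qualitative behavior the thesis analyzes formally; the actual algorithm, assumptions, and rates are those stated in the abstract above.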
Keywords/Search Tags: Decentralized Machine Learning, Stochastic Gradient Descent, Unstable Networks