
Research On Distributed Optimization Method Of Neural Network In Edge Computing

Posted on: 2022-07-23
Degree: Master
Type: Thesis
Country: China
Candidate: B J Guo
GTID: 2518306575483084
Subject: Computer technology
Figures: 26; Tables: 4; References: 63
Abstract/Summary:
With the rise of Mobile Edge Computing (MEC) technology, it has become increasingly common to deploy edge computing nodes close to users in order to relieve service congestion at central cloud nodes and improve the user experience. However, with the growth of Internet of Things technology, the amount of data that edge nodes must process keeps increasing. How to improve the efficiency of model training while exchanging data with communication equipment in a timely manner has become an urgent problem in edge computing scenarios. In recent years, optimizing the communication cost of distributed training across edge nodes has become one of the effective means of addressing this problem. In distributed training, communication cost can be reduced and training efficiency improved by setting a threshold and screening gradients. However, selecting the threshold is difficult: finding an appropriate threshold purely through user experience and repeated training runs is time-consuming and laborious.

To solve this problem, an adaptive compression strategy based on gradient partitioning is proposed, building on distributed training and exploiting the characteristics of the parameter distribution during training, in order to alleviate the high communication cost between computing nodes and improve the efficiency of distributed training. First, the gradient parameters produced during neural network training are statistically analyzed and their distribution characteristics are predicted. Based on these characteristics, the gradient is partitioned into a key region and a sparse region. Then, combining this partition with the information entropy of the gradient distribution, a reasonable threshold is selected to screen the gradient values within each partition; only gradient values larger than the threshold are transmitted for update, which reduces communication cost and improves training efficiency. This strategy sparsifies the gradient without relying on user experience to find the threshold, thereby improving training efficiency; a simplified sketch of this screening step is given after the abstract.

Finally, two computing nodes are used and different training models are built on the TensorFlow framework to evaluate the proposed adaptive compression strategy based on gradient partitioning. The proposed algorithm is evaluated experimentally in the same network environment across different network models. A method without any compression scheme serves as the baseline group, and a compression method based on gradient variance serves as the comparison group. The experimental results are analyzed in terms of convergence, compression ratio, and training throughput. The results show that the adaptive compression strategy based on gradient partitioning improves training throughput to a certain extent and achieves good convergence while maintaining training accuracy. It also increases the gradient compression ratio and improves the efficiency of distributed training.
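To make the screening step concrete, the following is a minimal NumPy sketch of entropy-guided threshold selection and gradient screening. The quantile used to split the key and sparse regions, the entropy scaling rule, and all function names are illustrative assumptions, not the thesis's exact algorithm.

    import numpy as np

    def entropy_guided_threshold(grad, num_bins=64, key_quantile=0.9):
        """Sketch: pick a screening threshold from the gradient distribution.

        Magnitudes are split into a 'sparse' region (small values) and a
        'key' region (large values) at an assumed quantile, and the
        information entropy of the magnitude histogram scales the final
        threshold inside the key region.
        """
        mag = np.abs(grad).ravel()

        # Histogram of gradient magnitudes and its information entropy.
        hist, _ = np.histogram(mag, bins=num_bins)
        p = hist / hist.sum()
        p = p[p > 0]
        entropy = -np.sum(p * np.log2(p))       # in bits
        max_entropy = np.log2(num_bins)

        # Assumed split between the sparse and key regions.
        split = np.quantile(mag, key_quantile)

        # Higher entropy -> magnitudes are spread out -> keep the threshold
        # near the split; lower entropy -> values are concentrated -> raise
        # it toward the maximum so fewer values are transmitted.
        alpha = entropy / max_entropy           # in (0, 1]
        threshold = split + (1.0 - alpha) * (mag.max() - split)
        return threshold

    def screen_gradient(grad, threshold):
        """Keep only entries whose magnitude exceeds the threshold."""
        mask = np.abs(grad) > threshold
        indices = np.nonzero(mask.ravel())[0]
        values = grad.ravel()[indices]
        return indices, values                  # sparse update sent to peers

    # Example: compress a simulated gradient tensor.
    rng = np.random.default_rng(0)
    grad = rng.normal(scale=0.01, size=(256, 128)).astype(np.float32)
    thr = entropy_guided_threshold(grad)
    idx, vals = screen_gradient(grad, thr)
    print(f"kept {idx.size}/{grad.size} values "
          f"(compression ratio {grad.size / max(idx.size, 1):.1f}x)")

Transmitting only the (index, value) pairs above the threshold is what yields the compression ratio reported in the experiments; the adaptive part of the strategy lies in deriving the threshold from the observed gradient distribution rather than from user-tuned constants.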
Keywords/Search Tags: MEC, neural network, distributed training, gradient compression, training efficiency