
Fast Shrinkage Algorithm In Distributed Optimization

Posted on: 2022-06-21   Degree: Master   Type: Thesis
Country: China   Candidate: S Y Liu   Full Text: PDF
GTID: 2518306479493084   Subject: Statistics
Abstract/Summary:
Big data has brought new opportunities to the field of statistics, but also new challenges, because samples are both numerous and high-dimensional. Statistical inference on big data is limited by the storage capacity and computation time of a single computer. When the data are stored in a distributed framework, transmitting all of them to a single processor for statistical inference is ruled out by high communication cost, limited computer memory, and privacy protection. A naive distributed algorithm that aggregates local estimates by simple averaging yields an accurate estimator only for linear objective functions. In the now-common distributed storage setting, it is therefore necessary to design algorithms suited to the distributed framework.

In this paper, we propose two communication-efficient gradient-enhanced fast shrinkage algorithms (CGEAS and CAGEAS). In each iteration, every computer node computes in parallel and transmits its local estimate to the central processor, which aggregates the local estimates and broadcasts the updated one back to all computers. The gradient-enhanced loss is an approximation of the global loss; it is well suited to parallel computation and relieves the large bias incurred by using only the local loss function. In addition, Nesterov's accelerated gradient (NAG) is used to speed up the convergence of the gradient-enhanced loss algorithm. Compared with the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), the proposed algorithms allow rough initial parameters while retaining the convergence rate of FISTA. We further propose stabilized versions of CGEAS and CAGEAS, which correct the late-stage oscillation of the estimates caused by the NAG acceleration in the original algorithms. The revised algorithms also relax the homogeneity assumption on the distributed data by incorporating proximal point steps.

We prove the convergence rates of the proposed algorithms, and simulation experiments verify that these communication-efficient gradient-enhanced fast shrinkage algorithms achieve the theoretical rates and perform well compared with other distributed algorithms.
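To make the gradient-enhanced loss concrete, the following is one standard construction; the abstract does not spell out the exact form used in the thesis, so this assumes the common surrogate-loss design in which a single communication round supplies the global gradient. With m machines, local losses L_k, and current broadcast estimate \bar{\beta}, the first machine minimizes

\tilde{L}(\beta) = L_1(\beta) - \left\langle \nabla L_1(\bar{\beta}) - \nabla L(\bar{\beta}),\, \beta \right\rangle,
\qquad L(\beta) = \frac{1}{m}\sum_{k=1}^{m} L_k(\beta).

Since \nabla\tilde{L}(\bar{\beta}) = \nabla L(\bar{\beta}), the surrogate matches the global gradient at the broadcast point while each round transmits only d-dimensional gradients rather than raw data, which is the sense in which the loss "relieves the bias" of the purely local loss.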
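The resulting iteration can be summarized in a short sketch. The code below is a minimal illustration under the assumptions above, not the thesis implementation: the function names (gradient_enhanced_fista, soft_threshold, local_gradient), the least-squares loss, the l1 penalty, the step size, and the simulated shards are all assumptions, and the CGEAS/CAGEAS specifics (including the proximal stabilization) are not reproduced.

import numpy as np

def soft_threshold(x, tau):
    # Shrinkage (proximal) operator of the l1 penalty.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def local_gradient(X, y, beta):
    # Least-squares gradient on one machine's data shard.
    return X.T @ (X @ beta - y) / X.shape[0]

def gradient_enhanced_fista(shards, lam, step, rounds=10, inner=50):
    # Each communication round: every node sends its local gradient at the
    # current broadcast point; the first node runs FISTA on the
    # gradient-enhanced (surrogate) loss and broadcasts the updated estimate.
    X1, y1 = shards[0]
    beta_bar = np.zeros(X1.shape[1])      # a rough initialization is allowed
    for _ in range(rounds):
        # One round of communication: aggregate local gradients at beta_bar.
        g_global = np.mean(
            [local_gradient(X, y, beta_bar) for X, y in shards], axis=0)
        g1_bar = local_gradient(X1, y1, beta_bar)
        # FISTA on  L1(b) - <g1_bar - g_global, b> + lam*||b||_1,
        # whose smooth part has gradient  grad L1(b) - g1_bar + g_global.
        beta, z, t = beta_bar.copy(), beta_bar.copy(), 1.0
        for _ in range(inner):
            g = local_gradient(X1, y1, z) - g1_bar + g_global
            beta_new = soft_threshold(z - step * g, step * lam)
            # Nesterov momentum update, as in FISTA.
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
            z = beta_new + ((t - 1.0) / t_new) * (beta_new - beta)
            beta, t = beta_new, t_new
        beta_bar = beta                   # broadcast the updated estimate
    return beta_bar

A quick way to exercise the sketch on synthetic data (all numbers illustrative):

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 50))
beta_true = np.zeros(50); beta_true[:5] = 2.0
y = X @ beta_true + 0.1 * rng.standard_normal(2000)
shards = [(X[k::5], y[k::5]) for k in range(5)]   # five simulated machines
beta_hat = gradient_enhanced_fista(shards, lam=0.05, step=0.4)

Only gradients and estimates cross machine boundaries here, which is what makes the scheme communication-efficient relative to shipping the raw shards to one processor.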
Keywords/Search Tags: Optimized gradient algorithm, Gradient-enhanced loss, Global rate of convergence, Communication-efficient, Likelihood estimation