
Research and Application of the Primal-Dual Algorithm for Large-Scale Machine Learning

Posted on: 2022-10-14
Degree: Master
Type: Thesis
Country: China
Candidate: C L Zhang
GTID: 2518306605468704
Subject: Master of Engineering
Abstract/Summary:
Many image processing and machine learning tasks, such as total-variation denoising, supervised learning, and unsupervised learning, can be transformed into convex-concave saddle point problems. The primal-dual algorithm is a popular and efficient optimization tool for solving such saddle point problems. For example, by introducing a dual variable, some linearly constrained convex minimization problems and graph-guided regularized minimization problems can be reformulated as saddle point problems, and the empirical risk minimization problem can likewise be transformed into a saddle point problem through its dual function. Indeed, image denoising, image compressive sensing, and other image processing problems can be cast as saddle point problems to obtain better performance. This thesis carries out research on the following three aspects.

1. For the convex-concave bilinear saddle point problems that are common in machine learning, the convergence rate of the traditional stochastic primal-dual hybrid gradient (SPDHG) method is sub-optimal. To improve it, this thesis designs a fully implicit iterative scheme, introduces variance reduction and momentum acceleration techniques, and proposes a variance-reduced stochastic primal-dual hybrid gradient algorithm (VR-SPDHG) together with an accelerated variant (AVR-SPDHG). A rigorous theoretical analysis is also given: both VR-SPDHG and AVR-SPDHG converge linearly for strongly convex problems, while for non-strongly convex problems they attain convergence rates of O(1/K) and O(1/K^2), respectively, where K is the number of outer loops. Finally, extensive numerical experiments show that AVR-SPDHG clearly outperforms existing state-of-the-art algorithms.
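To make the base iteration concrete, the following is a minimal sketch of the deterministic primal-dual hybrid gradient (PDHG, or Chambolle-Pock) iteration that SPDHG and the proposed variants build on, applied to 1-D total-variation denoising written as the bilinear saddle point problem min_x max_{|y_i| <= lam} <Dx, y> + (1/2)||x - b||^2. The step sizes, regularization weight, and test signal are illustrative choices, not values from the thesis.

```python
import numpy as np

def pdhg_tv_denoise(b, lam=1.0, n_iter=500):
    """Chambolle-Pock (PDHG) iteration for 1-D total-variation denoising."""
    n = b.size
    # Forward-difference operator D, shape (n-1, n); ||D||^2 <= 4 in 1-D.
    D = np.diff(np.eye(n), axis=0)
    tau = sigma = 0.45          # chosen so that tau * sigma * ||D||^2 < 1
    theta = 1.0                 # extrapolation parameter
    x = b.copy()
    x_bar = x.copy()
    y = np.zeros(n - 1)
    for _ in range(n_iter):
        # Dual ascent step, then projection onto the l_inf ball of radius lam
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)
        # Primal descent step; the prox of 0.5*||x - b||^2 is a weighted average
        x_new = (x - tau * (D.T @ y) + tau * b) / (1.0 + tau)
        x_bar = x_new + theta * (x_new - x)   # extrapolation
        x = x_new
    return x

# Usage: denoise a noisy piecewise-constant signal.
rng = np.random.default_rng(0)
signal = np.repeat([0.0, 2.0, 1.0], 50)
noisy = signal + 0.3 * rng.standard_normal(signal.size)
denoised = pdhg_tv_denoise(noisy, lam=0.8)
```

Roughly speaking, the stochastic variants replace the full operator applications with sampled blocks of data or coordinates, with the variance-reduction and momentum terms of the thesis added on top of this basic scheme.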
2. At present, there is little research on saddle point optimization with privacy protection. Building on the general stochastic primal-dual hybrid gradient algorithm, this thesis applies differential privacy based on gradient perturbation and proposes a stochastic primal-dual hybrid gradient algorithm with differential privacy protection (DP-SPDHG), filling a gap left by differentially private algorithms in the field of saddle point problems. The privacy guarantee and the convergence of the proposed algorithm are analyzed theoretically, and its advantages are verified experimentally. In particular, when training neural networks under the same privacy budget, DP-SPDHG achieves lower training error and higher classification accuracy than the differentially private SGD algorithm (DP-SGD).
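The gradient-perturbation mechanism behind such algorithms follows a standard pattern: clip each per-sample gradient to bound its sensitivity, then add calibrated Gaussian noise to the average before it enters the update. The sketch below shows only this generic Gaussian mechanism, not the thesis's exact DP-SPDHG step; clip_norm and noise_multiplier are hypothetical parameter names.

```python
import numpy as np

def privatize_gradient(per_sample_grads, clip_norm, noise_multiplier, rng):
    """Clip-and-noise step used in gradient-perturbation differential privacy."""
    clipped = []
    for g in per_sample_grads:
        # Rescale so each per-sample gradient has l2-norm at most clip_norm
        scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped.append(g * scale)
    avg = np.mean(clipped, axis=0)
    # Gaussian mechanism: noise scale tied to the clipped per-batch sensitivity
    std = noise_multiplier * clip_norm / len(per_sample_grads)
    return avg + rng.normal(0.0, std, size=avg.shape)

# Usage: produce a noisy gradient that could drive the primal step
# of a differentially private primal-dual update.
rng = np.random.default_rng(0)
grads = [rng.standard_normal(5) for _ in range(32)]
noisy_grad = privatize_gradient(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```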
3. Compressive sensing (CS), an important theory for sparse signal recovery, relies on optimization methods that usually perform well when the signal is sparse in some transform domain. In recent years, many methods combining deep learning with traditional iterative optimization algorithms have been proposed and have achieved exciting performance in sparse signal recovery. The primal-dual algorithm is theoretically guaranteed, simple, and effective, and plays an important role in image processing. This thesis first transforms the CS model into a saddle point problem and then proposes a novel Learned Primal-Dual Network, called LPD-Net. The proposed LPD-Net exploits the special structure of the l1-norm regularization term and converts the original problem into two easy-to-solve sub-problems. Moreover, a transformed Sigmoid function is presented to improve the performance of the network. Experimental results demonstrate that LPD-Net outperforms existing state-of-the-art methods on image CS reconstruction tasks.
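For the l1 sub-problem, the standard closed-form solution is elementwise soft-thresholding, the proximal operator of the l1 norm; in an unrolled network the threshold can be treated as a learned, layer-wise parameter. The snippet below shows only this standard shrinkage step; the exact sub-problem splitting and the transformed Sigmoid function of LPD-Net are not detailed in the abstract, so they are not reproduced here.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrink each entry toward zero by t
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Usage: one shrinkage step on an intermediate reconstruction.
v = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
print(soft_threshold(v, t=0.5))   # [-1.  -0.   0.   0.   1.5]
```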
Keywords/Search Tags: primal-dual algorithm, variance reduction, momentum acceleration, differential privacy, compressive sensing, learned optimization algorithm