
PID-based Optimization Method With Applications

Posted on: 2022-01-20
Degree: Master
Type: Thesis
Country: China
Candidate: L L Zhang
Full Text: PDF
GTID: 2518306341456594
Subject: Operational Research and Cybernetics
Abstract/Summary:
Machine learning and deep learning have achieved remarkable success in many practical fields, such as image classification, object detection, and face recognition, but training deep neural networks on large-scale datasets remains time-consuming. First-order optimization algorithms, such as momentum gradient methods, are currently the most widely used approach to solving large-scale optimization problems; the two most common momentum methods are the Heavy Ball method and the Nesterov Accelerated Gradient method. When the momentum direction is favorable, a momentum gradient algorithm usually speeds up optimization, but when the momentum term accumulates too much historical gradient information during the iterations, it easily overshoots, causing oscillations that slow the algorithm's convergence.

The Proportional-Integral-Derivative (PID) controller is a feedback control method that adjusts a system according to its tracking error. It effectively suppresses overshoot, is robust, and has been widely applied in autonomous driving, intelligent robotics, and other areas. In practical applications, the derivative term of a PID controller is usually realized by a first-order difference approximation, which prevents second-order information from being exploited effectively in the optimization process. Moreover, although PID control is widely used, research on convergence analysis of PID-type optimization algorithms remains scarce. Motivated by these observations, this thesis studies fast optimization algorithms based on the PID controller and their applications. The detailed studies are as follows.

Chapter 1 briefly introduces optimization algorithms commonly used in machine learning and deep learning, together with the research background, current status, and significance of large-scale optimization, and describes in detail the connections between control theory and optimization algorithms.

Chapter 2 proposes a convergence analysis method for PID algorithms based on regularity conditions. Exploiting the similarity between optimization algorithms and control algorithms, the integral quadratic constraint (IQC) framework is used to recast the algorithm as a dynamical system with feedback; combined with common regularity conditions and the KYP lemma, a parameter region in which the PID algorithm converges linearly is derived.

Chapter 3 addresses the overshoot phenomenon of momentum gradient methods. Combining the PID controller with the quasi-Newton method, a preconditioned momentum gradient algorithm is proposed. This method not only overcomes the overshoot of the original momentum gradient algorithm but also improves efficiency through the choice of preconditioning factors. Numerical experiments verify that the proposed algorithm is effective.

Chapter 4 considers optimization over different network structures in deep learning and, by incorporating an adaptive restart strategy, proposes an adaptive preconditioned momentum gradient algorithm. Numerical experiments verify the effectiveness of the proposed algorithm.

The final chapter summarizes the dissertation and discusses future work.
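The IQC-based analysis described for Chapter 2 follows a now-standard recipe for first-order methods: the iteration is written as a linear system in feedback with the gradient, and linear convergence is certified by a small semidefinite program. A generic sketch (the matrices A, B, C and the multiplier M are algorithm- and constraint-specific, and the notation here is assumed, not taken from the thesis) is:

```latex
% Iterative algorithm as a feedback interconnection
\begin{aligned}
\xi_{k+1} &= A\,\xi_k + B\,u_k, \\
y_k &= C\,\xi_k, \qquad u_k = \nabla f(y_k).
\end{aligned}
% If \nabla f satisfies a quadratic constraint encoded by M (e.g. from
% m-strong convexity and L-smoothness), convergence at rate \rho follows
% from feasibility of the LMI
\begin{bmatrix}
A^{\top} P A - \rho^{2} P & A^{\top} P B \\
B^{\top} P A & B^{\top} P B
\end{bmatrix} + \lambda M \preceq 0,
\qquad P \succ 0,\ \lambda \ge 0 .
```

For a PID method, the state \(\xi_k\) would collect the iterate, the accumulated (integral) gradient, and the previous gradient, and the admissible region of gains \((K_P, K_I, K_D)\) is the set for which the LMI is feasible with some \(\rho < 1\); the KYP lemma connects this LMI to an equivalent frequency-domain condition.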
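To make the connection between momentum methods and PID control concrete, the following sketch compares a Heavy Ball update with a PID-style update on a simple two-dimensional quadratic. This is an illustrative toy, not the thesis's algorithm: the test function, the gains `kp`, `ki`, `kd`, and the leaky-integral factor are all assumed values chosen for the demonstration. The proportional term is the current gradient, the integral term accumulates gradient history (playing the role of momentum), and the derivative term uses the first-order difference approximation `g - g_prev` mentioned above, which damps the oscillations that the integral term can cause.

```python
import numpy as np

def grad(x):
    # Gradient of the ill-conditioned quadratic f(x) = 0.5 * x^T diag(1, 50) x
    return np.array([1.0, 50.0]) * x

def heavy_ball(x0, lr=0.01, beta=0.9, steps=1000):
    # Heavy Ball: x_{k+1} = x_k - lr * grad(x_k) + beta * (x_k - x_{k-1})
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(steps):
        x_next = x - lr * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

def pid_update(x0, kp=0.01, ki=0.009, kd=0.005, steps=1000):
    # PID-style update: P = current gradient, I = (leaky) accumulated
    # gradient history, D = difference of successive gradients, which
    # counteracts the overshoot driven by the integral (momentum) term.
    x = x0.copy()
    integral = np.zeros_like(x)
    g_prev = grad(x)
    for _ in range(steps):
        g = grad(x)
        integral = 0.9 * integral + g   # leaky accumulation of history
        x = x - kp * g - ki * integral - kd * (g - g_prev)
        g_prev = g
    return x

x0 = np.array([5.0, 5.0])
x_hb = heavy_ball(x0)    # both iterates approach the minimizer at the origin
x_pid = pid_update(x0)
```

With these gains both methods drive the iterate toward the minimizer; the derivative gain `kd` is what distinguishes the PID update from a plain (leaky) momentum method.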
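The mechanics behind Chapters 3 and 4 can be illustrated with a minimal sketch: a momentum method whose gradient is scaled by a diagonal preconditioner (here, for simplicity, the exact inverse Hessian diagonal of a toy quadratic, standing in for a quasi-Newton curvature estimate) and whose momentum buffer is reset whenever it ceases to be a descent direction, a common gradient-based adaptive-restart test. All names and parameter values are assumptions for illustration; this is not the thesis's actual algorithm.

```python
import numpy as np

H = np.array([1.0, 50.0])            # Hessian diagonal of the toy quadratic
grad = lambda x: H * x               # gradient of f(x) = 0.5 * sum(H * x**2)

def precond_momentum(x0, lr=0.5, beta=0.9, steps=300, restart=True):
    # Momentum descent with a diagonal preconditioner and adaptive restart.
    P = 1.0 / H                      # inverse-curvature scaling (preconditioner)
    x = x0.copy()
    v = np.zeros_like(x)             # momentum buffer
    for _ in range(steps):
        g = grad(x)
        v = beta * v - lr * P * g    # preconditioned momentum update
        if restart and np.dot(g, v) > 0:
            # Momentum points uphill: discard the accumulated history
            # and take a plain preconditioned gradient step instead.
            v = -lr * P * g
        x = x + v
    return x

x0 = np.array([5.0, 5.0])
x_restart = precond_momentum(x0)                  # with adaptive restart
x_plain = precond_momentum(x0, restart=False)     # momentum only
```

The restart test `np.dot(g, v) > 0` fires exactly when the accumulated momentum opposes the negative gradient, which is when overshoot-driven oscillation would otherwise set in.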
Keywords/Search Tags:Machine learning, Deep learning, Momentum gradient algorithm, PID controller, Convergence analysis