
Research On Stochastic Optimization Algorithm And Rational Approximation

Posted on: 2019-07-23
Degree: Master
Type: Thesis
Country: China
Candidate: Y Y Cheng
Full Text: PDF
GTID: 2370330551959979
Subject: Applied Mathematics
Abstract/Summary:
Stochastic optimization problems are optimization problems involving random factors; they are among the main mathematical forms whose study requires tools such as probability and statistics, stochastic processes, and stochastic analysis. At present, driven by the huge demand for optimization algorithms in large-scale learning and big data, stochastic optimization algorithms have become one of the most active areas in machine learning, and the convergence rate of these algorithms is the core of the study. In this thesis we carry out an in-depth and systematic study of convergence rates for stochastic optimization problems. The specific contents are as follows.

Firstly, two classical supervised learning problems, least squares and logistic regression, are considered, and two accelerated stochastic gradient algorithms are proposed. On the one hand, based on the non-strong convexity of the loss function, we develop the convergence theory of the learning algorithms and obtain the optimal convergence rate $O(1/n^2)$, where $n$ is the number of samples. On the other hand, we verify the theoretical results and demonstrate faster convergence and better generalization through numerical experiments on synthetic data and several standard data sets.

Secondly, the least-squares regression problem whose objective function contains an $L_1$ regularization term is considered, and an effective accelerated stochastic approximation algorithm is proposed. Based on a non-strong convexity condition, and using a smooth function to approximate the $L_1$ regularization term, we analyze the convergence of the learning algorithm and obtain the convergence rate $O(\ln n / n)$.

Lastly, we consider the Newman-type rational interpolation approximation of $|x|^\alpha$ and discuss the convergence rate of the Newman-$\alpha$ operator at the adjusted tangent nodes $X=\left\{\tan^{2}\frac{k\pi}{4n}\right\}_{k=1}^{n}$, finally obtaining the exact approximation order $O(1/n^{2\alpha})$. This result not only contains the approximation result for the case $\alpha=1$, but is also better than the conclusions obtained when the nodes are chosen as Chebyshev nodes of the first or second kind, equidistant nodes, etc.
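To make the first contribution concrete, here is a minimal sketch of an accelerated, averaged stochastic gradient method for least-squares regression under non-strong convexity. The momentum schedule $k/(k+3)$, the constant step size, and the synthetic data model are illustrative assumptions, not the thesis's exact algorithm; the sketch only shows the kind of single-pass iteration for which an $O(1/n^2)$ rate in the sample size $n$ is studied.

```python
import numpy as np

# Sketch (assumptions): accelerated averaged SGD for least squares,
# one pass over n samples, Nesterov-style extrapolation plus averaging.
rng = np.random.default_rng(0)
d, n = 10, 20000
theta_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ theta_star + 0.1 * rng.normal(size=n)

theta = np.zeros(d)       # current iterate
theta_prev = np.zeros(d)  # previous iterate, for the momentum term
avg = np.zeros(d)         # Polyak-Ruppert average, the returned estimator
gamma = 1.0 / (4 * d)     # step size ~ 1/(4 E||x||^2)  (assumption)

for k in range(n):
    z = theta + (k / (k + 3)) * (theta - theta_prev)  # extrapolation point
    x_k, y_k = X[k], y[k]
    grad = (x_k @ z - y_k) * x_k    # stochastic gradient at z from one sample
    theta_prev, theta = theta, z - gamma * grad
    avg += (theta - avg) / (k + 1)  # running average of the iterates

excess = 0.5 * np.mean((X @ avg - y) ** 2) - 0.5 * np.mean((X @ theta_star - y) ** 2)
print(f"excess empirical risk after n = {n} samples: {excess:.2e}")
```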
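For the second contribution, the abstract says the $L_1$ term is replaced by a smooth approximation before running stochastic approximation. The sketch below uses the surrogate $\sqrt{t^2+\mu^2}-\mu$ for $|t|$; this particular surrogate, the values of $\lambda$ and $\mu$, and the $1/\sqrt{k}$ step sizes are assumptions standing in for the thesis's unspecified smoothing scheme.

```python
import numpy as np

# Sketch (assumptions): SGD on least squares with a smoothed L1 penalty,
# penalty(theta) = lam * sum_j (sqrt(theta_j^2 + mu^2) - mu).
rng = np.random.default_rng(1)
d, n, lam, mu = 10, 20000, 0.1, 1e-3
theta_star = np.zeros(d)
theta_star[:3] = [1.0, -0.5, 0.8]             # sparse ground truth
X = rng.normal(size=(n, d))
y = X @ theta_star + 0.1 * rng.normal(size=n)

theta = np.zeros(d)
avg = np.zeros(d)                             # averaged iterate (output)
for k in range(n):
    x_k, y_k = X[k], y[k]
    pen_grad = theta / np.sqrt(theta ** 2 + mu ** 2)  # grad of the surrogate
    grad = (x_k @ theta - y_k) * x_k + lam * pen_grad
    theta -= 0.05 / np.sqrt(k + 1) * grad     # decaying steps (assumption)
    avg += (theta - avg) / (k + 1)

print("averaged iterate:", np.round(avg, 3))
print("estimated support:", np.flatnonzero(np.abs(avg) > 0.05))
```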
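For the third contribution, the sketch below builds Newman's classical rational interpolant to $|x|$, i.e. only the $\alpha=1$ case, on the adjusted tangent nodes $\tan^{2}\frac{k\pi}{4n}$ and checks the error decay numerically. Using the plain Newman construction $r(x)=x\,\frac{p(x)-p(-x)}{p(x)+p(-x)}$ with $p(x)=\prod_{k=1}^{n}(x+\xi_k)$ is an assumption about the Newman-$\alpha$ operator, which the abstract does not define; if the $O(1/n^2)$ rate holds for this construction, the product $n^2 \cdot \text{error}$ should stay roughly constant as $n$ grows.

```python
import numpy as np

def newman_abs(x, n):
    """Newman rational approximant to |x| on the adjusted tangent nodes
    xi_k = tan^2(k*pi/(4n)), k = 1..n  (assumed node set)."""
    nodes = np.tan(np.arange(1, n + 1) * np.pi / (4 * n)) ** 2
    p_pos = np.prod(x[:, None] + nodes[None, :], axis=1)   # p(x)
    p_neg = np.prod(-x[:, None] + nodes[None, :], axis=1)  # p(-x)
    # p(x) + p(-x) has only nonnegative even-degree coefficients and a
    # positive constant term, so the denominator never vanishes on [-1, 1].
    return x * (p_pos - p_neg) / (p_pos + p_neg)

xs = np.linspace(-1, 1, 4001)
for n in (4, 8, 16, 32):
    err = np.max(np.abs(newman_abs(xs, n) - np.abs(xs)))
    print(f"n = {n:2d}  max error = {err:.2e}  n^2 * error = {n * n * err:.3f}")
```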
Keywords/Search Tags: Stochastic optimization, Least squares regression, Logistic regression, Convergence rate, Rational interpolation approximation