
Convergence Of Gradient Method With Momentum

Posted on: 2021-02-19    Degree: Master    Type: Thesis
Country: China    Candidate: X L Peng    Full Text: PDF
GTID: 2428330605453632    Subject: Applied Mathematics

Abstract/Summary:
From the proposal of the neuron model in 1943 to the rise of deep learning, neural networks have developed for more than 70 years and have come to have an increasingly important impact on human life. At present, neural network technology is applied in many fields, including computer vision, natural language processing, speech recognition, biomedicine, and robot control, and has achieved notable success in each of them. However, theoretical analyses of neural networks remain relatively scarce, which is exactly why this article theoretically analyzes the convergence of the back-propagation algorithm with a momentum term for neural networks. In this paper, a three-layer feed-forward neural network model is considered, in which the learning rate is set as a constant and the momentum coefficient is set as an adaptive variable to accelerate and stabilize the training of the network parameters. The paper states the corresponding convergence results and proves the conclusions in detail. In addition, two experiments are performed, and their results further verify the correctness of the theoretical conclusions. Compared with some prior results, our conclusion is more general, because the network output layer may have any number of neurons and the bias term is also taken into account.
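To make the training scheme discussed above concrete, the following is a minimal sketch of gradient descent with a momentum term. It is not the thesis's algorithm: the objective here is a simple quadratic rather than a three-layer network, and the momentum coefficient `tau` is held constant, whereas the thesis uses an adaptive coefficient; the function names and parameter values are illustrative assumptions.

```python
import numpy as np

def momentum_gd(grad, w0, eta=0.1, tau=0.9, steps=200):
    """Iterate w_{k+1} = w_k + d_k, where the update direction
    d_k = tau * d_{k-1} - eta * grad(w_k) mixes the previous step
    (momentum) with the current negative gradient.

    eta : constant learning rate, as in the thesis's setting.
    tau : momentum coefficient (constant here; adaptive in the thesis).
    """
    w = np.asarray(w0, dtype=float)
    d = np.zeros_like(w)                # previous update direction
    for _ in range(steps):
        d = tau * d - eta * grad(w)     # momentum term plus gradient step
        w = w + d
    return w

# Toy example: minimize f(w) = 0.5 * ||w||^2, whose gradient is w itself.
w_star = momentum_gd(lambda w: w, w0=[5.0, -3.0])
```

With these (assumed) values of `eta` and `tau`, the iterates oscillate but contract toward the minimizer at the origin, illustrating why convergence analysis of the momentum method requires care: the momentum term can accelerate training, yet a poorly chosen coefficient can destabilize it.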
Keywords/Search Tags:feed-forward neural networks, back-propagation algorithm, momentum, convergence