
Research and Application on Self-Organizing RBF Neural Network Based on Fast Gradient

Posted on: 2023-07-22
Degree: Doctor
Type: Dissertation
Country: China
Candidate: M L Ma
Full Text: PDF
GTID: 1528307100475884
Subject: Control Science and Engineering
Abstract/Summary:
Artificial intelligence is a discipline that uses computers to simulate human thought processes and intelligent behavior. It has developed rapidly and has become a frontier science that intersects many disciplines. The artificial neural network is an important branch of artificial intelligence and has been widely applied in fields such as industry, agriculture, transportation, medicine, and the military. The Radial Basis Function (RBF) neural network is a common shallow feedforward network; its simple structure, strong approximation ability, and straightforward training have led to its use in many applications. The key to applying an RBF neural network widely is obtaining an appropriate network state (structure and parameters). To this end, many training algorithms have been proposed in recent years. Among them, gradient-based algorithms have become the most commonly used because they are simple to compute, fast to train, and easy to implement. However, these algorithms are susceptible to the vanishing gradient problem, which leads to the phenomenon of "premature learning". How to continuously obtain enough effective gradient information to keep gradient-based training progressing, and thereby improve its learning performance, has remained an open research topic. Based on an in-depth study of RBF neural networks and gradient-based algorithms, this thesis designs strategies for obtaining effective gradient information that help the RBF neural network converge quickly to an ideal network state. The main research work and innovations are as follows:

1) Design of an RBF neural network based on an accelerated gradient algorithm. During training, parameters become difficult to update when samples fall in the saturation region of the neuron activation function, because the corresponding parameter gradients are equal or close to zero. To solve this problem, an accelerated gradient algorithm with an adaptive stretching mechanism is designed to keep samples away from the saturation region. First, an indirect detection mechanism, based on the instantaneous convergence rate of the cost function and the instantaneous decay rate of the gradient, is developed to detect the slow learning caused by the vanishing gradient problem. Second, an amplification gradient strategy, based on the gradient decay rate and the learning state of the network, is designed to adaptively stretch the action width of the neuron activation function. The strategy increases parameter gradient values by helping saturated samples escape the saturation region, so that the learning process converges quickly to a better solution. Finally, the proposed algorithm is analyzed theoretically, explaining both its accelerated convergence and its practicality.

2) Design of an RBF neural network based on a hybrid gradient learning algorithm. During training, the learning process is prone to falling into local optima, saddle points, or flat regions of the cost-function surface, which is one of the common problems limiting learning performance. To cope with this, a hybrid gradient learning algorithm is proposed. First, a hyperplane gradient, based on a hyperspace constructed from a population of candidate solutions, is introduced; it can cross the cost-function surface, allowing the search process to escape local optima, saddle points, and flat regions. Second, an adaptive learning rate is designed to confine the search to this hyperspace by balancing the hyperplane gradient against the traditional gradient. During learning, the hyperspace is continually contracted so that it approximates the global optimum. Finally, the convergence and complexity of the proposed algorithm are analyzed in detail to establish its feasibility theoretically.

3) Design of a structurally self-organizing RBF neural network based on an adaptive gradient space. The structure of an RBF neural network has a great influence on its gradient space. Based on a study of the interaction between network structure and gradient space, a self-organizing RBF neural network is designed to quickly obtain enough effective gradient information to accelerate convergence. First, an adaptive expansion-and-pruning mechanism for the gradient space, based on the integrity and orthogonality of the hidden neurons, is designed: effective gradient information is constantly added to the gradient space while redundant gradient information is eliminated from it. Second, driven by this mechanism, neurons are generated or pruned accordingly, yielding a self-organizing RBF neural network with reduced structural complexity and improved generalization ability; structure and parameters are thus optimized simultaneously during learning. Finally, the stability of the structural adjustment and the effectiveness of the second-order algorithm in obtaining effective gradients are analyzed theoretically, which guarantees the implementability of the algorithm.

4) Design of a soft-sensor model for effluent parameters of the wastewater treatment process based on the self-organizing RBF neural network. Effluent parameters are difficult to measure directly in urban wastewater treatment. To solve this problem, and based on an in-depth analysis of the urban wastewater treatment process, a dynamic soft-sensor model of effluent parameters based on the self-organizing RBF neural network is proposed. The model addresses the serious lag, high cost, serious pollution, and low precision of existing measurement methods. First, the model dynamically selects the key related variables for each effluent quality parameter from historical data, which reduces data redundancy and ensures a fast response to abrupt changes in process variables. Second, the selected key variables are preprocessed with nonlinear normalization before entering the network, which balances the influence of each variable while reducing the effect of abnormal data and improving the accuracy and stability of the model. Finally, the model further improves prediction accuracy by dynamically constructing the RBF neural network.
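As a hedged illustration of point 1, the sketch below implements a minimal Gaussian RBF network and a crude version of the width-stretching idea: when a sample lies far from every center (the saturation region, where all activations, and hence all weight gradients, are near zero), the kernel widths are enlarged until the sample again produces a usable activation. The threshold `eps` and the stretch `factor` are illustrative assumptions, not the thesis's actual detection or amplification rules.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Gaussian RBF network: y = sum_j w_j * exp(-||x - c_j||^2 / (2*s_j^2))."""
    phi = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * widths ** 2))
    return phi @ weights, phi

def stretched_widths(x, centers, widths, eps=1e-6, factor=1.5):
    """Illustrative 'stretching': if every activation is below eps (sample in
    the saturation region, so the weight gradients vanish), widen all kernels
    by `factor` until the sample regains a usable activation."""
    s = widths.copy()
    for _ in range(100):                      # safety cap on iterations
        phi = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * s ** 2))
        if phi.max() >= eps:
            break
        s = s * factor
    return s

# A sample far from both centers: activations (and weight gradients) vanish.
centers = np.array([[0.0], [1.0]])
widths = np.array([0.1, 0.1])
weights = np.array([1.0, -1.0])
x = np.array([5.0])

_, phi = rbf_forward(x, centers, widths, weights)
print(phi.max() < 1e-6)          # True: saturated, vanishing gradient

s = stretched_widths(x, centers, widths)
_, phi2 = rbf_forward(x, centers, s, weights)
print(phi2.max() >= 1e-6)        # True: gradient information restored
```

The same stretching loop would sit inside a training iteration in practice, triggered only for samples flagged by the (here omitted) slow-learning detector.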
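Point 2 mixes a conventional gradient with a population-derived direction. The toy sketch below makes two simplifying assumptions that are not from the thesis: the "hyperplane gradient" is approximated by the direction from the current point toward the population's best member, and the switch happens whenever the true gradient is nearly zero (a saddle or flat point). On the classic saddle f(x, y) = x² − y², plain gradient descent from (1, 0) stalls at the origin, while the hybrid step escapes it.

```python
import numpy as np

def f(p):                      # saddle at the origin: x^2 - y^2
    return p[0] ** 2 - p[1] ** 2

def grad(p):
    return np.array([2 * p[0], -2 * p[1]])

def hybrid_descent(p, pop, lr=0.1, steps=60, g_small=1e-3):
    """Toy hybrid step: ordinary gradient descent, but when the gradient is
    nearly zero (saddle/flat point), follow the direction toward the best
    population member instead. The switch rule is an illustrative assumption."""
    for _ in range(steps):
        g = grad(p)
        if np.linalg.norm(g) < g_small:
            best = min(pop, key=f)           # population's best solution
            g = -(best - p)                  # step toward it instead
        p = p - lr * g
    return p

start = np.array([1.0, 0.0])                 # pure GD converges to the saddle
pop = [np.array([0.5, 0.5]), np.array([0.2, 0.8])]
p = hybrid_descent(start, pop)
print(f(p) < 0.0)                            # True: escaped below the saddle
```

The thesis additionally contracts the population's hyperspace and balances the two directions with an adaptive learning rate; the fixed `lr` and hard switch here are the simplest stand-ins.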
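Point 3's growing/pruning can be caricatured as follows: here a hidden neuron is treated as redundant when its activation vector over the training set is nearly collinear with another neuron's (low mutual orthogonality). The cosine threshold is an illustrative stand-in for the thesis's integrity/orthogonality criteria, and the growth side is omitted for brevity.

```python
import numpy as np

def activations(X, centers, width):
    """Hidden-layer output matrix Phi: one column per Gaussian neuron."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def prune_redundant(centers, X, width, cos_thresh=0.99):
    """Drop any neuron whose activation column is nearly collinear with an
    already-kept neuron's column (redundant gradient information)."""
    phi = activations(X, centers, width)
    unit = phi / (np.linalg.norm(phi, axis=0, keepdims=True) + 1e-12)
    keep = []
    for j in range(phi.shape[1]):
        redundant = any(abs(unit[:, j] @ unit[:, k]) > cos_thresh for k in keep)
        if not redundant:
            keep.append(j)
    return centers[keep]

X = np.linspace(-1, 1, 50)[:, None]
# Two nearly identical centers plus one distinct one.
centers = np.array([[0.0], [0.01], [0.8]])
pruned = prune_redundant(centers, X, width=0.3)
print(len(pruned))   # 2: the near-duplicate neuron is removed
```

In a full self-organizing loop this check would run periodically during training, paired with a growth rule that inserts a neuron where the residual error (missing gradient information) is largest.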
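The nonlinear normalization in point 4 is not specified in the abstract; the sketch below uses sigmoid squashing around the median and interquartile range as one plausible choice. It maps each variable into (0, 1] while compressing outliers, which matches the stated goals of balancing the key variables and damping abnormal data. The robust location/scale choice is an assumption, not the thesis's formula.

```python
import numpy as np

def nonlinear_normalize(x):
    """Sigmoid squashing around a robust location/scale: bounds the output
    and compresses outliers instead of letting them dominate the range."""
    med = np.median(x)
    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    scale = iqr if iqr > 0 else 1.0
    return 1.0 / (1.0 + np.exp(-(x - med) / scale))

# Effluent-style variable with one abnormal spike.
x = np.array([4.8, 5.1, 5.0, 4.9, 5.2, 50.0])
z = nonlinear_normalize(x)
linear = (x - x.min()) / (x.max() - x.min())   # plain min-max, for contrast

# With min-max scaling the spike crushes the normal readings into a tiny
# interval; the nonlinear map preserves their spread.
print(z[:5].max() - z[:5].min() > linear[:5].max() - linear[:5].min())  # True
```

The contrast with min-max scaling shows why a nonlinear map suits noisy process data: a single faulty sensor reading no longer rescales every other value.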
Keywords/Search Tags: Radial basis function neural network, self-organizing network structure, gradient-based learning algorithm, vanishing gradient, dynamic soft sensor model