
Study On The Receding Horizon Optimization Methods In Neural Network Predictive Control

Posted on: 2016-01-18
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Z F Fan
Full Text: PDF
GTID: 1108330503952855
Subject: Control theory and control engineering

Abstract/Summary:
To solve the neural network predictive control problem for unknown nonlinear systems using local optimization, BP and RBF neural networks are selected to form one-step and multi-step predictive models, and the Newton-Raphson and Levenberg-Marquardt algorithms perform the receding horizon optimization for two-step BP and three-step RBF neural network predictive control, respectively. Nevertheless, the simulation results show that the performance is strongly affected by the initial value problem of these algorithms; that is, the algorithms often converge to local minima and lead to unexpected results. The strategy proposed in some of the literature, which uses the current value of the manipulated variable as the initial value, does not solve this problem. Based on a determinate region in which the global minimum exists, the manipulated variable with optimal performance is instead proposed as the initial value; this improves the control performance because the objective function value of the optimization result is no greater than that at the optimal-performance point. Through dynamic correction of the weight factor, at least one local minimum lies between the current manipulated variable and the optimal-performance one, and the algorithm subsequently converges to this minimum. An inverse neural network is used to compute the manipulated variable with optimal performance. Both theoretical analysis and simulation results show that the initial value problem can be solved in this way.

To solve the neural network predictive control problem for unknown nonlinear systems using global optimization, the one-step predictive model is a feedforward network with a single hidden layer and sigmoid activation functions. A global optimization algorithm is designed within the branch-and-bound framework, with interval analysis used to bound the neural network's output. Since the natural interval extension as well as the Taylor series extension often produce wide interval bounds, a linear parallel expansion method is proposed. It is proved that the linear parallel expansion is a valid expansion method and that the resulting interval function is an inclusion function. Simulation results show that the linear parallel expansion obtains sharper intervals and significantly reduces the optimization time.
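For the local-optimization part, the following is a minimal single-input, single-output sketch of one receding-horizon step using Newton-Raphson, with the initial value taken from an inverse network as described above. The callables `predict` and `inverse_model` are hypothetical stand-ins for the trained forward and inverse networks, the one-step quadratic cost and the weight factor `lam` are illustrative simplifications of the multi-step objective, and the derivatives are taken numerically for brevity.

```python
def receding_horizon_step(predict, inverse_model, y_ref, u_prev,
                          lam=0.1, iters=20, eps=1e-8):
    """One receding-horizon optimization step (illustrative sketch).

    predict(u)       -- trained one-step neural predictor, returns y_hat
    inverse_model(y) -- trained inverse network: manipulated variable for y
    y_ref            -- reference output for the next step
    u_prev           -- current manipulated variable (control-effort term)
    lam              -- weight factor on the control effort
    """
    # Cost J(u) = (y_ref - y_hat(u))^2 + lam * (u - u_prev)^2
    def J(u):
        return (y_ref - predict(u)) ** 2 + lam * (u - u_prev) ** 2

    # Initial value: the optimal-performance manipulated variable from the
    # inverse network, rather than u_prev (the strategy studied here).
    u = inverse_model(y_ref)

    h = 1e-5
    for _ in range(iters):
        # Numerical first and second derivatives of J for the Newton step.
        g = (J(u + h) - J(u - h)) / (2 * h)
        H = (J(u + h) - 2 * J(u) + J(u - h)) / h ** 2
        if H <= eps:           # stop if curvature is flat or negative
            break
        step = g / H
        u -= step
        if abs(step) < eps:    # converged
            break
    return u
```

Starting from the inverse network's output rather than from `u_prev` places the initial guess inside the region containing the global minimum, which is the point of the strategy above.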
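For the global-optimization part, below is a minimal sketch of an interval branch-and-bound minimizer of a one-hidden-layer sigmoid network's output over an input interval. For brevity it uses the natural interval extension as the bounding rule; the linear parallel expansion proposed in the thesis, which yields tighter inclusion functions, would replace `net_bounds`. The scalar input, random weights, and tolerance are illustrative assumptions.

```python
import heapq
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def net(u, W1, b1, W2, b2):
    """One-hidden-layer sigmoid network with scalar input and output."""
    return float(W2 @ sigmoid(W1 * u + b1) + b2)

def net_bounds(lo, hi, W1, b1, W2, b2):
    """Natural interval extension of the network output over [lo, hi]."""
    # Sigmoid is monotone, so the hidden-layer interval comes from endpoints.
    a_lo = sigmoid(np.minimum(W1 * lo, W1 * hi) + b1)
    a_hi = sigmoid(np.maximum(W1 * lo, W1 * hi) + b1)
    # Output layer: choose the endpoint per the sign of each weight.
    out_lo = float(np.where(W2 > 0, W2 * a_lo, W2 * a_hi).sum() + b2)
    out_hi = float(np.where(W2 > 0, W2 * a_hi, W2 * a_lo).sum() + b2)
    return out_lo, out_hi

def branch_and_bound(lo, hi, W1, b1, W2, b2, tol=1e-4):
    """Global minimum of the network output over [lo, hi]."""
    best = net((lo + hi) / 2, W1, b1, W2, b2)         # incumbent upper bound
    heap = [(net_bounds(lo, hi, W1, b1, W2, b2)[0], lo, hi)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best - tol or b - a < 1e-8:
            continue                                   # prune this box
        m = (a + b) / 2
        best = min(best, net(m, W1, b1, W2, b2))       # update incumbent
        for a2, b2_ in ((a, m), (m, b)):               # bisect the box
            child_lb = net_bounds(a2, b2_, W1, b1, W2, b2)[0]
            if child_lb < best - tol:
                heapq.heappush(heap, (child_lb, a2, b2_))
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W1, b1, W2 = rng.normal(size=8), rng.normal(size=8), rng.normal(size=8)
    print(branch_and_bound(-2.0, 2.0, W1, b1, W2, 0.0))
```

The queue is ordered by lower bound so the most promising boxes are refined first; a tighter expansion such as the linear parallel one discards more boxes earlier, which is where the reported reduction in optimization time comes from.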
Keywords/Search Tags: predictive control, neural networks, receding horizon optimization, nonlinear system, initial value problem, interval expansion