
Event-Triggered Control Schemes Of Nonlinear Systems Through Adaptive Critic Designs

Posted on: 2022-03-19    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Y W Zhang    Full Text: PDF
GTID: 1488306317494234    Subject: Control Science and Engineering
Abstract/Summary:
Adaptive critic design (ACD), which combines ideas from reinforcement learning, dynamic programming, and neural networks, has been widely investigated for solving complex control problems. It is an effective technique for solving the nonlinear Hamilton-Jacobi-Bellman equation while avoiding the "curse of dimensionality". In contrast to traditional control methods, ACD is a self-learning approach: the agent interacts with the environment, and a critic network issues rewards or punishments that drive the optimization of the control strategy. Over the past decade, an extensive ADP-based literature has addressed a variety of control problems for discrete-time and continuous-time systems, including optimal control, trajectory tracking, and robust stabilization. These methods employ iterative schemes to obtain the optimal performance index function, from which the optimal controller is derived. In recent years, event-triggered control (ETC), which reduces communication and computational resource usage, has attracted extensive attention in the control community. Building on this prior research, this thesis develops ACD-based ETC approaches for several control problems. The main contributions of this thesis are as follows:

1. To solve the optimal control problem of discrete-time nonlinear systems, a policy-gradient-based event-triggered control approach is developed. Unlike traditional methods, this approach updates the control policy using the gradient of the Q-function with respect to the control law, so knowledge of the system dynamics is no longer required. An actor-critic structure is established to obtain the approximate event-triggered optimal control law. A triggering condition guaranteeing asymptotic stability of the closed-loop system is designed, and the control law is updated according to the gradient of the Q-function whenever the triggering condition is violated.

2. To address zero-sum game problems for discrete-time nonlinear systems, a novel deterministic policy-gradient-based ETC scheme is proposed. It is a data-based, off-policy learning approach. An actor-critic-disturbance structure is established to obtain the approximate event-triggered optimal control law and the worst-case disturbance law. A triggering condition is derived to guarantee input-to-state stability of the closed-loop system; the control law and the disturbance law are tuned aperiodically, at triggering instants only, to reduce the computational and communication burden. To make more efficient use of system data, the experience replay technique is employed to design a novel weight-update law.

3. An ACD-based ETC method is developed to solve the zero-sum game problem of continuous-time multi-player systems. First, a model neural network is employed to reconstruct the unknown multi-player nonlinear system from measured input and output data. A critic-only framework is established to obtain the approximate event-triggered optimal control law and the worst-case disturbance law. To reduce the computational and communication burden, an event-triggering condition suitable for multiple controllers is designed, and the controllers are updated only when the triggering condition is violated. Moreover, Lyapunov stability analysis shows that the closed-loop system is stable.

4. An ACD-based robust ETC approach is proposed to address the nonzero-sum game problem of unknown multi-player nonlinear systems with constrained inputs and model uncertainties. To relax the requirement for known system dynamics, a neural-network-based identifier is constructed from the system's input-output data. By designing a nonquadratic value function that reflects the bounding functions, the system states, and the control inputs of all players, the event-triggered robust stabilization problem is converted into an event-triggered constrained optimal control problem. The approximate event-triggered optimal control law is obtained through a critic-only framework. Furthermore, a novel triggering condition is designed using the Lyapunov stability theorem, and the developed robust controller is updated at triggering instants only.

5. An ACD-based event-triggered tracking control scheme is presented to deal with the optimal tracking control problem of unknown multi-player nonlinear systems. By constructing a neural-network-based observer with input-output data, the dynamics of the unknown multi-player nonlinear system are obtained. Subsequently, the optimal tracking control problem is converted into an optimal regulation problem by establishing a tracking error system. The optimal tracking control policy for each player is then derived by solving the coupled event-triggered Hamilton-Jacobi equations via a critic network. Meanwhile, a novel weight-update rule is designed by adopting the concurrent learning method to relax the persistence-of-excitation condition. Moreover, an event-triggering condition is designed by Lyapunov's direct method, which guarantees uniform ultimate boundedness of the closed-loop multi-player system.

Finally, conclusions and perspectives on future work are provided.
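The event-triggered policy-gradient idea in contribution 1 can be illustrated with a minimal sketch. Everything below is a toy assumption, not the thesis's formulation: the dynamics `f`, `g`, the quadratic stand-in for the Q-function gradient `q_gradient`, and the relative gap threshold are all hypothetical; the sketch only shows the pattern of updating the control law via the Q-gradient when a triggering condition is violated.

```python
import numpy as np

# Hypothetical sketch: event-triggered policy-gradient control for a
# discrete-time system x_{k+1} = f(x_k) + g(x_k) u_k. All models, gains,
# and thresholds here are illustrative assumptions.

def f(x):
    # toy nonlinear drift
    return 0.9 * x + 0.1 * np.sin(x)

def g(x):
    return np.ones_like(x)

def q_gradient(x, u, w):
    # stand-in for dQ/du of a critic with weight w (toy quadratic Q)
    return 2.0 * w * u + x

def simulate(steps=50, lr=0.1, threshold=0.05):
    x = np.array([1.0])
    u = np.array([0.0])
    x_event = x.copy()          # state held since the last triggering instant
    w = 1.0
    events = 0
    for _ in range(steps):
        # triggering condition: update only when the gap between the current
        # state and the last-transmitted state exceeds a relative threshold
        if np.linalg.norm(x - x_event) > threshold * np.linalg.norm(x):
            u = u - lr * q_gradient(x, u, w)   # policy-gradient step on u
            x_event = x.copy()
            events += 1
        x = f(x) + g(x) * u
    return events, steps

events, steps = simulate()
print(events, steps)
```

Because the control law is only recomputed at triggering instants, `events` stays below `steps`, which is the resource saving the abstract refers to.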
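The experience-replay weight update mentioned in contribution 2 can likewise be sketched. The linear critic, features, and targets below are illustrative assumptions; the point is only the mechanism of replaying stored transitions alongside the current sample when tuning critic weights, which raises data-usage efficiency.

```python
import random
import numpy as np

# Hypothetical sketch of an experience-replay weight update: stored
# (state, action, target) transitions are reused together with the current
# sample. The linear critic and polynomial features are illustrative.

random.seed(0)
rng = np.random.default_rng(0)

def features(x, u):
    # simple polynomial features for a scalar state/action critic
    return np.array([x * x, u * u, x * u])

def replay_update(w, buffer, x, u, target, lr=0.05, batch=8):
    buffer.append((x, u, target))
    samples = random.sample(buffer, min(batch, len(buffer)))
    for xs, us, ts in samples + [(x, u, target)]:
        phi = features(xs, us)
        w = w + lr * (ts - w @ phi) * phi   # gradient step on the squared error
    return w

w = np.zeros(3)
buf = []
true_w = np.array([1.0, 0.5, -0.2])      # "ideal" critic weights (assumed)
for _ in range(500):
    x, u = rng.uniform(-1, 1, 2)
    target = true_w @ features(x, u)     # noise-free target for the sketch
    w = replay_update(w, buf, x, u, target)
print(np.round(w, 2))
```

With noise-free targets the replayed updates drive `w` to the ideal weights; replay simply lets each measured transition contribute to many updates instead of one.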
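Contribution 5's conversion of tracking into regulation admits a short worked sketch. The scalar dynamics, reference generator, and feedback gain below are toy assumptions: defining the tracking error e_k = x_k - r_k turns "make x follow r" into "drive e to zero", an optimal regulation problem.

```python
# Hypothetical sketch: recasting tracking as regulation via a tracking
# error system. The dynamics f, reference generator h, and the gain are
# illustrative, not the thesis's models.

def h(r):
    # reference generator r_{k+1} = h(r_k)
    return 0.99 * r

def f(x, u):
    # toy controlled plant x_{k+1} = f(x_k, u_k)
    return 0.8 * x + u

x, r = 2.0, 1.0
for _ in range(100):
    e = x - r                          # tracking error state
    # stabilizing feedback on the error plus a feedforward term that
    # cancels the reference drift (toy gain 0.6)
    u = -0.6 * e + (h(r) - 0.8 * r)
    x, r = f(x, u), h(r)
print(abs(x - r) < 1e-2)
```

Substituting the control above gives error dynamics e_{k+1} = 0.2 e_k in this toy, so the error contracts geometrically and the plant tracks the reference.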
Keywords/Search Tags:Adaptive critic designs, adaptive dynamic programming, reinforcement learning, optimal control, neural networks, event-triggered control