
Learning Techniques in Receding Horizon Control and Cooperative Control

Posted on: 2011-11-02
Degree: Ph.D.
Type: Dissertation
University: The Chinese University of Hong Kong (Hong Kong)
Candidate: Zhang, Hongwei
Full Text: PDF
GTID: 1448390002460297
Subject: Engineering
Abstract/Summary:
Two topics in modern control are investigated in this dissertation: receding horizon control (RHC) and cooperative control of networked systems. Learning techniques are applied to both. Specifically, we incorporate the reinforcement learning concept into standard receding horizon control, yielding a new RHC algorithm that relaxes the stability constraints required by standard RHC. For the second topic, we apply neural adaptive control to the synchronization of networked nonlinear systems and propose distributed robust adaptive controllers under which all nodes synchronize to a leader node.

Receding horizon control (RHC), also called model predictive control (MPC), is a suboptimal control scheme for an infinite-horizon problem, obtained by repeatedly solving a finite-horizon open-loop optimal control problem. It has widespread applications in industry. Reinforcement learning (RL) is a computational intelligence method in which an optimal control policy is learned over time by evaluating the performance of suboptimal control policies. In this dissertation it is shown that reinforcement learning techniques can significantly improve the behavior of RHC. Specifically, RL methods are used to add a learning feature to RHC. It is shown that retaining the value learned at the previous iteration and using it as the new terminal cost for RHC overcomes the traditionally strong requirements for RHC stability, such as the terminal cost being a control Lyapunov function or the horizon length exceeding some bound. We propose improved RHC algorithms, called updated terminal cost receding horizon control (UTC-RHC), first in the framework of discrete-time linear systems and then in that of continuous-time linear systems. For both cases, we show that uniform exponential stability of the closed-loop system can be guaranteed under very mild conditions.
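The updated-terminal-cost idea admits a compact illustration for discrete-time linear systems. The sketch below is not taken from the dissertation: the system matrices, horizon length, and iteration count are invented for the example. Each RHC iteration runs a short finite-horizon Riccati sweep, and the resulting value matrix is fed back as the next iteration's terminal cost, so no control-Lyapunov terminal cost or long horizon is needed up front:

```python
import numpy as np

# Illustrative discrete-time double integrator; A, B, Q, R are invented
# for this sketch, not taken from the dissertation.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

def rhc_sweep(P_terminal, N=3):
    """Finite-horizon Riccati recursion with terminal cost P_terminal.
    Returns the first-step feedback gain K and the horizon-start value P."""
    P = P_terminal
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return K, P

# Updated-terminal-cost idea: the value learned at the previous iteration
# becomes the new terminal cost for the next short-horizon problem.
P = np.zeros((2, 2))                       # mild initial terminal cost
for _ in range(300):
    K, P = rhc_sweep(P)

# The iteration reaches a fixed point of the Riccati map (the
# infinite-horizon value), and the closed loop A - B K is stable.
P_next = rhc_sweep(P)[1]
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print(np.allclose(P, P_next), rho < 1.0)   # → True True
```

With a zero initial terminal cost this reduces to a value-iteration flavour of RHC: the terminal cost sequence is monotone and converges to the solution of the discrete-time algebraic Riccati equation, consistent with the gain approaching the infinite-horizon optimal policy.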
Moreover, unlike standard RHC, the UTC-RHC control gain approaches the optimal policy associated with the infinite-horizon optimal control problem. To establish these properties, non-standard Lyapunov functions are introduced for both the discrete-time and continuous-time cases.

Cooperative control of networked systems (or multi-agent systems) has attracted much attention in recent years, but most existing results focus on first-order and second-order leaderless consensus problems with linear dynamics. The second part of this dissertation solves a higher-order synchronization problem for cooperative nonlinear systems with an active leader. The communication network considered is a weighted directed graph with fixed topology. Each agent is modeled as a higher-order nonlinear system with unknown nonlinear dynamics and is perturbed by unknown external disturbances. The leader is modeled as a higher-order non-autonomous nonlinear system; it acts as a command generator and can give commands to only a small portion of the networked group. A robust adaptive neural network controller is designed for each agent, and neural network learning algorithms are given such that all nodes ultimately synchronize to the leader node with a small residual error. Moreover, these controllers are fully distributed in the sense that each controller requires only its own information and its neighbors' information.
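The distributed, neighbors-only structure of such controllers can be caricatured with first-order scalar agents (the dissertation itself treats higher-order dynamics). In the sketch below, everything concrete is invented for illustration: the graph, gains, leader trajectory, RBF centers, and unknown drift. Each agent forms a local neighborhood synchronization error from its neighbors and (if pinned) the leader, uses a Gaussian radial-basis-function network to estimate its unknown drift, and adapts the network weights from that local error alone:

```python
import numpy as np

# Minimal first-order caricature of distributed neuro-adaptive
# synchronization; graph, gains, leader signal, and drift are invented.
A_adj = np.array([[0., 0., 0.],     # a_ij: weight of edge follower j -> follower i
                  [1., 0., 0.],
                  [0., 1., 0.]])
b_pin = np.array([1.0, 0.0, 0.0])   # only agent 0 hears the leader (pinning)

centers = np.linspace(-2.0, 2.0, 9) # RBF centres (an assumed design choice)

def phi(x):
    """Gaussian RBF regressor, one row per scalar agent state."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2)

def f_unknown(x):
    """Unknown agent drift; used only to simulate the plant."""
    return -x + 0.5 * np.sin(x)

dt, c, gamma = 1e-3, 10.0, 20.0     # step size, coupling gain, adaptation gain
x = np.array([0.8, -0.6, 1.2])      # follower states
W = np.zeros((3, centers.size))     # NN weight estimates, one row per agent
err0 = None
for k in range(20000):
    t = k * dt
    x0 = np.sin(t)                  # active leader acting as command generator
    # local neighbourhood synchronization error: neighbours + pinning only
    e = A_adj @ x - A_adj.sum(1) * x + b_pin * (x0 - x)
    if err0 is None:
        err0 = np.max(np.abs(e))
    Phi = phi(x)
    f_hat = (W * Phi).sum(1)        # NN estimate of the unknown drift
    u = c * e - f_hat               # distributed controller
    # adaptive NN update driven by the local error, with a small
    # leakage term for robustness (Euler discretization)
    W += dt * (-gamma * Phi * e[:, None] - 0.01 * gamma * W)
    x += dt * (f_unknown(x) + u)

# all followers end near the leader, with a small residual error
print(np.max(np.abs(x - np.sin(20000 * dt))) < err0)
```

Note that no agent ever reads the full network state: `e` is computed from the agent's own state, its in-neighbors, and the pinning signal, which is what makes the controller distributed in the sense described above. As the abstract indicates, the time-varying leader leaves a small residual synchronization error rather than exact tracking.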
Keywords/Search Tags:Receding horizon control, RHC, Cooperative, Learning techniques, Networked