
A Deep Reinforcement Learning Approach To The Supervisory Control Of Discrete Event Systems With Linear Temporal Logic Constraints

Posted on: 2024-05-19 | Degree: Doctor | Type: Dissertation
Country: China | Candidate: J J Yang | Full Text: PDF
GTID: 1528307340973799 | Subject: Mechanical and electrical engineering
Abstract/Summary:
Controllers for industrial systems must satisfy multiple requirements simultaneously, such as ensuring product quality, production efficiency, and the safety of both workers and machines. These requirements increase the complexity of controller design. Moreover, as modern industrial systems continue to grow in scale and complexity, traditional controller design methods become increasingly inadequate, necessitating the exploration of new solutions. Reinforcement learning is an efficient controller design approach that aims to control complex systems by learning optimal policies. Compared to traditional methods, reinforcement learning offers greater adaptability and flexibility, automatically adjusting control strategies to cope with varying environmental conditions. Furthermore, its autonomous learning process eliminates the need for manual intervention, significantly reducing the cost and complexity of controller design. These characteristics have led to the widespread application of reinforcement learning in industrial control.

Discrete event systems, which are driven by events, find extensive applications in manufacturing, healthcare services, communication systems, transportation, and more. The design of supervisory controllers for discrete event systems has long been a critical research problem. Under supervisory control, a discrete event system operates safely while meeting its control requirements: the controller eliminates behaviors that violate the requirements while preserving the system's maximally permissive behavior. Grounded in discrete event systems theory and using finite-state automata as the modeling tool, this study combines supervisory control theory with reinforcement learning to propose a novel approach to supervisory controller design for discrete event systems. The main contributions of this paper are as follows.

1. Introducing deep reinforcement learning, the study first employs modular supervisory control theory to derive a series of module controllers for a large-scale discrete event system. These controllers, together with their corresponding subsystems, serve as inputs to a neural network. After training, a coordinator for the module controllers is obtained that ensures the system does not deadlock under their joint action. The paper introduces a model-based deep reinforcement learning algorithm that integrates supervisory control theory for discrete event systems; compared with the standard deep Q-learning algorithm, it converges faster. Moreover, unlike traditional supervisory control methods, the system's behavior is approximated by a neural network, eliminating the need to store all visited states during training and thus significantly reducing computational complexity.

2. By disabling controllable events, a coordinator capable of handling multiple module controllers is obtained, ensuring that the system's operation complies with the control requirements and safety, while the system's behavior approximates the maximally permissive behavior. The improved algorithm significantly narrows the action selection range in deep reinforcement learning, thereby improving training efficiency. In addition, the neural network's input and output data are normalized to further improve training efficiency, and the relationship between the scale of the discrete event system and the scale of the neural network is investigated to guide the selection of appropriately sized networks, enhancing the method's practicality.

3. Linear temporal logic expressions are employed to model complex control requirements for discrete event systems. To reduce computational complexity, the invariant part of these expressions (invariants expressed in linear temporal logic that can be replaced by corresponding finite-state automata) remains modeled by automata. Using supervisory control theory, a supervisory controller satisfying the automata-based requirements is first obtained; it encompasses all behaviors of the original system that satisfy the invariants. Then, based on this controller and the remaining logic expressions, a system controller satisfying the entire linear temporal logic expression is derived using reinforcement learning. Because discrete event systems may involve uncontrollable events, the final step incorporates these uncontrollable events into the obtained controller to achieve a supervisory controller that satisfies the control requirements. A novel algorithm combining supervisory control theory and reinforcement learning models the occurrence of uncontrollable events with probabilities. Supervisory control theory eliminates clearly infeasible events, reducing the search space of the reinforcement learning algorithm and improving learning efficiency; this approach is equally applicable within a deep reinforcement learning framework.

Together, these contributions advance supervisory controller design for discrete event systems, providing new methods and tools for addressing control challenges in large-scale discrete event systems.
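The central idea of combining the two theories — the supervisor masks the learner's action set down to feasible controllable events, while uncontrollable events fire with fixed probabilities — can be illustrated with a minimal tabular Q-learning sketch. This is not the dissertation's deep reinforcement learning algorithm; the machine model, event names, probabilities, and rewards below are all invented for illustration.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy single-machine discrete event system (all names invented).
TRANSITIONS = {
    ("idle", "start"): "busy",
    ("busy", "finish"): "done",   # uncontrollable: job completes on its own
    ("busy", "fault"): "down",    # uncontrollable: machine breaks down
    ("down", "repair"): "idle",
    ("done", "unload"): "idle",
}
CONTROLLABLE = {"start", "repair", "unload"}
UNCTRL_PROB = {"finish": 0.8, "fault": 0.2}   # uncontrollable events as probabilities
REWARD = {"finish": 1.0, "fault": -1.0}

def enabled(state):
    return [e for (s, e) in TRANSITIONS if s == state]

def controllable_actions(state):
    # Supervisory masking: the learner only ever chooses among feasible
    # controllable events, which shrinks its action-selection range.
    return [e for e in enabled(state) if e in CONTROLLABLE]

def environment_step(state, action):
    """Fire the chosen controllable event, then let enabled uncontrollable
    events fire probabilistically until the supervisor again has a
    controllable event to offer; accumulate reward along the way."""
    reward = 0.0
    state = TRANSITIONS[(state, action)]
    while not controllable_actions(state):
        unctrl = [e for e in enabled(state) if e not in CONTROLLABLE]
        e = random.choices(unctrl, weights=[UNCTRL_PROB[u] for u in unctrl])[0]
        reward += REWARD.get(e, 0.0)
        state = TRANSITIONS[(state, e)]
    return state, reward

def q_learning(episodes=3000, alpha=0.5, gamma=0.9, eps=0.2):
    Q = defaultdict(float)
    for _ in range(episodes):
        state = "idle"
        for _ in range(30):
            acts = controllable_actions(state)
            a = (random.choice(acts) if random.random() < eps
                 else max(acts, key=lambda e: Q[(state, e)]))
            nxt, r = environment_step(state, a)
            # Standard one-step Q-learning update over the masked action set.
            best = max(Q[(nxt, e)] for e in controllable_actions(nxt))
            Q[(state, a)] += alpha * (r + gamma * best - Q[(state, a)])
            state = nxt
    return Q

Q = q_learning()
```

Because the Q-table is only ever indexed by feasible controllable events, the supervisor prunes the search space exactly as described above; a deep variant would replace the table with a neural network over normalized state encodings.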
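The step of replacing an LTL invariant by a finite-state automaton and composing it with the plant can likewise be sketched. Below, a hypothetical invariant "a product is never shipped before it is tested" is encoded as a two-state monitor automaton and composed with a toy plant via a synchronous product that prunes violating transitions; the plant, events, and state names are invented, and this is only a sketch of the automata-based part of the construction, not the dissertation's algorithm.

```python
# Toy plant and invariant monitor (all names invented).
PLANT = {
    ("idle", "make"): "made",
    ("made", "test"): "tested",
    ("made", "ship"): "idle",     # unsafe: shipping an untested product
    ("tested", "ship"): "idle",
}
MONITOR = {
    ("u", "test"): "t",    # u = untested, t = tested
    ("u", "ship"): "bad",  # shipping while untested violates the invariant
    ("t", "ship"): "u",    # shipping resets the monitor for the next item
}

def sync_product(plant, monitor, init, bad="bad"):
    """Synchronous product of plant and monitor automata. Transitions that
    drive the monitor into its violating state are pruned, so the result
    contains exactly the plant behavior satisfying the invariant."""
    frontier, seen, trans = [init], {init}, {}
    while frontier:
        p, m = frontier.pop()
        for (s, e), p2 in plant.items():
            if s != p:
                continue
            m2 = monitor.get((m, e), m)  # monitor self-loops on other events
            if m2 == bad:
                continue                 # prune: invariant would be violated
            trans[((p, m), e)] = (p2, m2)
            if (p2, m2) not in seen:
                seen.add((p2, m2))
                frontier.append((p2, m2))
    return trans

SUPERVISED = sync_product(PLANT, MONITOR, ("idle", "u"))
```

The pruned product contains only the make → test → ship cycle; the unsafe ship transition from the untested state is removed, which is the automata-based supervisor over which the remaining (non-invariant) logic expressions would then be handled by reinforcement learning.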
Keywords/Search Tags:Discrete event system, Deterministic finite automata, Supervisory control theory, Reinforcement learning, Neural network, Linear temporal logic