With the rapid development of electronic and communication technology, distributed control based on multi-agent systems has become key to solving large-scale complex system problems, and the design and implementation of distributed controllers is a central issue in multi-agent control. The cooperative output regulation problem is an important topic in multi-agent cooperative control: the goal is to find a distributed control strategy under which the system asymptotically tracks a reference signal and finally reaches a steady state. Most existing research is based on continuous-time systems, but in practical applications control is mostly implemented on digital computers, which operate in discrete time. A discrete-time system contains no derivative terms, which simplifies the design process. It is therefore both practical and innovative to study the cooperative output regulation problem for discrete-time multi-agent systems. In this paper, a reinforcement learning algorithm is used for consensus control of a multi-agent system, achieving output regulation of a discrete-time multi-agent system while optimizing the control. The main contents are as follows:

(1) Under a fixed directed topology, the output regulation problem of discrete-time multi-agent systems is discussed. It is proved that the maximum consensus region of a general discrete-time system coincides with the maximum gain region of the linear quadratic regulator, and necessary and sufficient conditions for such systems to achieve cooperative output regulation are verified. Then, based on the Q-learning algorithm, a branch of reinforcement learning, a distributed output regulation control protocol is designed. Compared with traditional control protocols, no system dynamics model is required: the real-time state data of each agent serve as the iterative data, so the optimal solution of the HJB equation in Bellman's optimality principle can be obtained conveniently. Optimal output regulation control is thus realized for discrete-time multi-agent systems with unknown models.

(2) Finite-time control theory is applied to the optimal control of discrete-time multi-agent output regulation under a fixed directed topology. A finite-time state feedback controller is designed, and the local-error formulation of the distributed control protocol constructed by the Q-learning algorithm is optimized. Under the premises that the system satisfies a Nash equilibrium and the control protocol is admissible, the convergence and the stability of the system are proved respectively. This preserves model-free optimal control while reducing the convergence time of the local error by 50%.

(3) For multi-agent systems with switching directed topologies, the optimal output control problem under uncertainty is considered. In a Q-learning setting, a model-free iterative strategy for discrete-time multi-agent systems is proposed, and a globally optimal control protocol under finite-time conditions is given. Using Gerschgorin's disk theorem and an averaging analysis method, a necessary and sufficient condition is proved for a discrete-time multi-agent system with topology switching to achieve stable optimal control. The correctness of the proposed conditions is verified by numerical simulation.
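To illustrate the model-free Q-learning idea underlying contribution (1), the following is a minimal single-agent sketch for a discrete-time linear-quadratic problem. The dynamics `A`, `B`, the cost matrices, and all numerical values are illustrative assumptions, not the multi-agent systems studied in this thesis; the learner sees only sampled state/input data, never the model, matching the model-free setting described above.

```python
import numpy as np

# Model-free Q-learning (policy iteration) sketch for discrete-time LQ
# regulation. The matrices A, B below only generate data; the learner
# estimates the quadratic Q-function Q(x,u) = [x;u]^T H [x;u] from samples.
np.random.seed(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # assumed (stable) agent dynamics
B = np.array([[0.0], [0.1]])
Qc = np.eye(2)                            # state cost
Rc = np.eye(1)                            # input cost
n, m = 2, 1

def quad_basis(z):
    """Features for z^T H z with symmetric H (upper-triangular parameters)."""
    i, j = np.triu_indices(len(z))
    scale = np.where(i == j, 1.0, 2.0)    # off-diagonal terms appear twice
    return scale * np.outer(z, z)[i, j]

K = np.zeros((m, n))                      # K = 0 is stabilizing since A is stable
for _ in range(20):                       # policy iteration loop
    Phi, y = [], []
    x = np.random.randn(n)
    for k in range(200):                  # collect data with exploratory inputs
        u = -K @ x + 0.5 * np.random.randn(m)
        x_next = A @ x + B @ u
        u_next = -K @ x_next              # current policy's action at next state
        z = np.concatenate([x, u])
        z_next = np.concatenate([x_next, u_next])
        # Bellman equation: z^T H z - z'^T H z' = x^T Qc x + u^T Rc u
        Phi.append(quad_basis(z) - quad_basis(z_next))
        y.append(x @ Qc @ x + u @ Rc @ u)
        x = x_next
    h, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
    H = np.zeros((n + m, n + m))
    H[np.triu_indices(n + m)] = h
    H = H + np.triu(H, 1).T               # mirror to recover symmetric H
    # Policy improvement: u = -H_uu^{-1} H_ux x
    K = np.linalg.solve(H[n:, n:], H[n:, :n])

print("learned feedback gain K:", K)
```

Under persistent excitation (here supplied by the exploration noise on `u`), the learned gain converges to the optimal LQR gain without ever using `A` or `B` in the update, which is the sense in which the HJB/Bellman solution is obtained for an unknown model.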