In recent years, the multi-agent consensus problem has become an active research topic in artificial intelligence. It serves as an important foundation for the cooperative control of multi-agent systems and has attracted sustained attention from researchers across disciplines. Multi-agent consensus means that autonomous agents, following predefined local communication, computation, and control rules, achieve a global and possibly complex control objective; this has strong engineering value and practical significance, with wide applications in industry, the military, aerospace, and other fields. In traditional multi-agent consensus protocols, an agent's next state is generally computed from its own current state combined with the current states of neighbors within its communication range, independent of the agent's history. However, because an agent's communication range is limited, historical state information still carries useful reference value. Moreover, as the computing performance of embedded computers increases and the cost of semiconductor chips falls, agents are able to process more state information. By backtracking historical states, agents can strengthen their own learning and reasoning capabilities and become more intelligent. Through state backtracking, an agent can infer the movement trends of itself and its neighbors from historical state computations, and thus choose a next state that better matches the target expectation, effectively improving the convergence speed and robustness of the multi-agent consensus protocol. In this thesis, multi-agent rendezvous is taken as the convergence objective of multi-agent consensus, and discrete-time multi-agent consensus protocols based on state backtracking are studied. The main research contents
and results are as follows: (1) A multi-agent consensus protocol with prediction based on state backtracking is designed. A virtual neighbor is added to each agent to generate a state gain: the agent backtracks its own historical states, predicts its next position state, and selects that prediction as the virtual neighbor. Further, by backtracking the states of neighbor agents within the communication range, the movement trends between the agent and its neighbors are judged, and a more suitable virtual neighbor is selected as the state gain according to the differences in movement trend, accelerating the convergence rate of the multi-agent consensus protocol. In addition, the connectivity of the multi-agent topology is enhanced through additional control inputs. (2) A multi-agent consensus protocol based on state backtracking with dynamic constraints is designed. The connectivity of the multi-agent network topology is an important precondition for the convergence of multi-agent states to consensus. By backtracking the states of the agent itself and of the neighbor agents within the communication range, the agent's movement trend is compared with its neighbors' historical states in order to assess the agent's connectivity quality. For agents with poor connectivity quality, dynamic constraints are applied to reduce the sampling step size, maintaining the connectivity of the multi-agent network topology and improving the robustness of the multi-agent consensus protocol. (3) A multi-agent consensus protocol with neighborhood adjustment based on state backtracking is designed. Experiments show that an excessive number of agent communication connections does not necessarily promote the convergence of multi-agent consensus protocols. Therefore, the same method of assessing connection quality through historical state backtracking is adopted, and an auction algorithm is used to reduce, for agents with poor connectivity quality, their number
of communication connections, thereby improving the agent's topological connectivity quality so that the agent states can gradually converge to consensus.
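For reference, the traditional memoryless protocol described above, in which each agent's next state depends only on its own and its neighbors' current states, can be sketched as follows. The one-dimensional positions, communication radius `r`, and step size `eps` are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch of a traditional (memoryless) discrete-time consensus
# protocol: x_i(k+1) = x_i(k) + eps * sum_{j in N_i} (x_j(k) - x_i(k)),
# where N_i is the set of neighbors within communication radius r.
# All numeric values are illustrative assumptions.

def neighbors(states, i, r):
    """Indices of agents within communication radius r of agent i."""
    return [j for j, x in enumerate(states)
            if j != i and abs(x - states[i]) <= r]

def consensus_step(states, r=2.0, eps=0.1):
    """One synchronous update; depends only on the current states."""
    return [x + eps * sum(states[j] - x for j in neighbors(states, i, r))
            for i, x in enumerate(states)]

states = [0.0, 1.0, 2.5, 3.0]
for _ in range(200):
    states = consensus_step(states)
# the agent states are now (approximately) equal: rendezvous is reached
```

Because the update uses only current states, any information about movement trends must be re-derived at every step, which is the limitation the state-backtracking protocols address.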
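A minimal sketch of the idea in contribution (1): each agent backtracks its own stored history, extrapolates a predicted next state, and treats that prediction as a virtual neighbor whose state gain is added to the usual update. The linear extrapolation, the gain `gamma`, and all numeric values are illustrative assumptions; the thesis's actual predictor and its trend-based virtual-neighbor selection may differ.

```python
# Sketch of prediction via state backtracking: the agent's own predicted
# next state acts as an extra "virtual neighbor" that contributes a gain
# term gamma * (virtual - x) on top of the standard neighbor term.
# Linear extrapolation and all parameter values are assumptions.

def predict_virtual_neighbor(history):
    """Predict the next state by linearly extrapolating the last two states."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

def backtracking_step(states, histories, r=2.0, eps=0.1, gamma=0.05):
    """One synchronous update with a virtual-neighbor gain per agent."""
    new_states = []
    for i, x in enumerate(states):
        nbr_term = sum(y - x for j, y in enumerate(states)
                       if j != i and abs(y - x) <= r)
        virtual = predict_virtual_neighbor(histories[i])  # virtual neighbor
        new_states.append(x + eps * nbr_term + gamma * (virtual - x))
    for i, x in enumerate(new_states):
        histories[i].append(x)  # extend the backtrackable history
    return new_states

states = [0.0, 1.0, 2.5, 3.0]
histories = [[s] for s in states]
for _ in range(300):
    states = backtracking_step(states, histories)
# the states converge to a common value, with the prediction term acting
# like a momentum that follows each agent's movement trend
```

With this extrapolating predictor the gain term equals `gamma * (x(k) - x(k-1))`, so the virtual neighbor effectively adds momentum along the agent's recent movement trend.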
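For contribution (2), one plausible reading can be sketched as follows: connectivity quality is proxied by the mismatch between an agent's recent movement trend and its neighbors' trends recovered from history, and the sampling step is shrunk when the mismatch is large. The trend measure, threshold `tol`, and step values are hypothetical illustrations, not the thesis's actual criterion.

```python
# Sketch of a dynamic step-size constraint: an agent whose movement trend
# diverges strongly from its neighbors' trends is judged to have poor
# connectivity quality and is given a smaller sampling step, so that it
# is less likely to move out of communication range and break links.
# All thresholds and step values are illustrative assumptions.

def movement_trend(history):
    """Recent displacement, used as a simple movement-trend measure."""
    return history[-1] - history[-2] if len(history) > 1 else 0.0

def constrained_step_size(histories, i, nbrs, eps=0.1, eps_min=0.02, tol=0.5):
    """Reduce agent i's step size when its trend mismatches its neighbors'."""
    my_trend = movement_trend(histories[i])
    # A large trend mismatch with any neighbor signals poor connectivity quality.
    mismatch = max((abs(my_trend - movement_trend(histories[j])) for j in nbrs),
                   default=0.0)
    return eps_min if mismatch > tol else eps
```

The reduced step would then replace `eps` in that agent's consensus update for the current sampling instant, trading local convergence speed for preserved topology connectivity.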