With the rapid development of science and technology, the application fields of agent technology are expanding rapidly, and multi-agent systems have become a major research hotspot. A multi-agent system is a complex dynamic system: when the external environment changes and tasks must be assigned flexibly, relying only on manually formulated strategies is inefficient and cannot adapt to dynamic change. To address these problems, a multi-agent system needs the ability to self-adapt and self-learn. At the same time, improving the cooperative ability of multiple agents is a main research direction. Building on existing research results, this thesis studies multi-agent collaboration from several perspectives. The main contents and contributions are as follows.

Firstly, an improved genetic algorithm based on co-evolution is proposed. The traditional genetic algorithm cannot reflect collaboration in multi-agent collaboration problems. To address this, the fitness evaluation function is improved and a new algorithm, COEGA, is proposed. Comparison experiments against the standard genetic algorithm validate the effectiveness of COEGA, which to a certain extent remedies the traditional genetic algorithm's lack of cooperative ability in multi-agent collaboration problems.

Secondly, an improved reinforcement learning algorithm based on co-evolution is proposed. Traditional reinforcement learning algorithms perform many repetitive explorations in multi-agent collaboration problems. To address this, a communication Q-network is added and a new algorithm, CODQN, is proposed. A comparison experiment with the standard DQN algorithm on a multi-agent navigation problem verifies the validity of CODQN, which to some extent alleviates the low learning efficiency of DQN in multi-agent collaboration problems.

Thirdly, a feedback-based hybrid multi-agent cooperative control algorithm is proposed. Based on a neural network, the NGA algorithm is proposed to remedy the poor encoding generalization of the GA algorithm. Aiming at the shortcomings of genetic algorithms and reinforcement learning in multi-agent collaboration problems, a new CGDQN algorithm is proposed. A simulated multi-agent cooperative confrontation environment is built on the Unity3D platform, and experiments are designed on it. A modular PSU experiment is also realized, which reduces the computational burden. Comparison with the COEGA and CODQN algorithms proposed in this thesis verifies the effectiveness of CGDQN, which to a certain extent resolves the "premature convergence" problem of genetic algorithms and the excessive learning time of reinforcement learning algorithms in multi-agent collaboration problems.
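The abstract does not give COEGA's exact fitness formulation, but the core idea it names, evaluating fitness in a way that reflects collaboration, can be illustrated with a minimal cooperative co-evolution sketch. In the sketch below, an individual is scored by the average team reward it obtains with partners sampled from the other agents' populations; the function team_reward and the parameter n_partners are hypothetical placeholders, not the thesis's actual design.

    import random

    def coevolutionary_fitness(individual, partner_populations, team_reward, n_partners=5):
        """Score an individual by the average reward of teams formed with
        partners sampled from the other agents' populations, so that fitness
        reflects collaboration rather than isolated performance.
        team_reward(team) -> float is a hypothetical environment hook."""
        scores = []
        for _ in range(n_partners):
            partners = [random.choice(pop) for pop in partner_populations]
            scores.append(team_reward([individual] + partners))
        return sum(scores) / len(scores)

Under this kind of evaluation, selection pressure favors genomes that perform well alongside the current populations of the other agents, which is one common way a genetic algorithm can be made to "reflect collaboration".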
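Similarly, the abstract only states that CODQN adds a communication Q-network to DQN without describing its architecture. The following PyTorch sketch shows one common way such a channel can be wired in: each agent's network emits a message for its teammates and conditions its Q-values on the messages it receives. All layer sizes and the message mechanism are assumptions for illustration only.

    import torch
    import torch.nn as nn

    class CommQNetwork(nn.Module):
        """Illustrative communication-augmented Q-network: Q-values depend on
        the agent's own observation plus messages received from teammates."""
        def __init__(self, obs_dim, msg_dim, n_actions, hidden=128):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
            self.msg_head = nn.Linear(hidden, msg_dim)   # outgoing message to teammates
            self.q_head = nn.Sequential(                 # Q-values from own features + received messages
                nn.Linear(hidden + msg_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, n_actions))

        def forward(self, obs, received_msgs):
            h = self.encoder(obs)
            msg = self.msg_head(h)
            q = self.q_head(torch.cat([h, received_msgs], dim=-1))
            return q, msg

Sharing information this way is one standard route to reducing the repetitive exploration the abstract attributes to independent DQN learners, since teammates can signal what they have already observed.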