
Research On Formation Control And Path Planning Based On Distributed Learning

Posted on: 2020-06-29    Degree: Master    Type: Thesis
Country: China    Candidate: Y Chong    Full Text: PDF
GTID: 2428330575458069    Subject: Control Science and Engineering
Abstract/Summary:
Control strategies for multi-agent systems have attracted a great deal of research attention, owing both to their practical potential in a wide range of applications and to the theoretical challenges they raise. Much of the early work concentrated on three-dimensional formation control of multiple unmanned aerial vehicles, alongside a number of studies on formation control of unmanned ground vehicle systems. Formation control, one of the most active research themes in the multi-agent field, aims to drive multiple agents to satisfy a prescribed set of formation constraints under normal operating conditions. This thesis characterizes formation control schemes in terms of the agents' ability to sense the environment and the interaction topology among them, and proposes a multi-agent formation control algorithm to obtain efficient swarm behavior.

First, to control the leader, an obstacle-avoidance algorithm based on fuzzy Q-learning is presented for unknown environments. Because the computational cost of data processing grows with the dimensionality of the state space, a probabilistic fuzzy method is used to discretize the data after grouping and combination operations. Actions are then selected with a Soft-max strategy, which accelerates learning and convergence. The real-time performance and effectiveness of the resulting three-dimensional obstacle avoidance are verified in MATLAB simulation experiments.

Second, under the leader-follower topology, the following agents can indirectly reuse the strategy learned by the leading agent. During obstacle avoidance, the formation can be selected flexibly according to the current subtask, and the destination is reached without collision.

Finally, the thesis studies flocking of multiple agents with a model-free reinforcement learning algorithm. Specifically, the followers adopt Peng's Q-learning with a modified learning rate to learn a control strategy that favors flocking under a leader-follower structure. The learned strategies are related to those obtained from a modified optimal control formulation by evaluating, through a cost function, the average overall cost of completing the task. Experimental results confirm the feasibility and effectiveness of the proposed learning method, which enables the agents to achieve flocking in a leader-follower formation while moving in a non-stationary random environment.
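To illustrate the Soft-max action selection described in the abstract, the sketch below shows a minimal tabular Q-learning loop with Boltzmann (Soft-max) exploration. The state/action grid sizes, reward handling, and parameter values are illustrative assumptions only; they do not reproduce the thesis's fuzzy Q-learning controller or its MATLAB implementation.

```python
import numpy as np

# Minimal tabular Q-learning with Soft-max (Boltzmann) action selection.
# All sizes and parameters below are illustrative placeholders, not the
# thesis's actual fuzzy Q-learning design.
N_STATES, N_ACTIONS = 100, 8          # assumed discretized state/action grid
ALPHA, GAMMA, TAU = 0.1, 0.95, 0.5    # learning rate, discount, temperature

Q = np.zeros((N_STATES, N_ACTIONS))

def softmax_action(state, tau=TAU):
    """Sample an action with probability proportional to exp(Q/tau)."""
    prefs = Q[state] / tau
    prefs -= prefs.max()              # subtract max for numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return np.random.choice(N_ACTIONS, p=probs)

def q_update(state, action, reward, next_state, done):
    """Standard one-step Q-learning backup."""
    target = reward if done else reward + GAMMA * Q[next_state].max()
    Q[state, action] += ALPHA * (target - Q[state, action])
```

With a lower temperature tau the policy becomes greedier as the Q-values sharpen, which is consistent with the abstract's claim that Soft-max selection speeds up learning relative to purely random exploration.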
Keywords/Search Tags: Multi-agent, Formation Control, Q-learning, Leader-follower, Obstacle Avoidance