
Research On Local Path Planning Algorithm For Mobile Robots Oriented To Home Environment

Posted on: 2019-06-02    Degree: Master    Type: Thesis
Country: China    Candidate: N Li    Full Text: PDF
GTID: 2428330566498288    Subject: Mechanical and electrical engineering
Abstract/Summary:
In recent years, mobile robots have developed rapidly in areas such as logistics and warehousing, and their related technologies have gradually become research hotspots. Path planning is the key to mobile robot navigation technology, and as machine learning has made breakthroughs in artificial intelligence and robotics, the application of machine learning algorithms in robotics has become a hot topic both at home and abroad. This thesis applies an improved DWA (Dynamic Window Approach) among traditional algorithms, together with reinforcement learning and deep reinforcement learning among machine learning algorithms, to mobile robot navigation. The specific research content is as follows:

First, to address the disadvantages of the traditional DWA, the thesis improves the evaluation function: the result of global path planning is introduced as a reference trajectory, so that the robot follows the trajectory planned by A* or another global algorithm as closely as possible. The work also proposes separate evaluation sub-functions for direction, speed smoothing, and acceleration, which ensure the directionality, smoothness, and speed of motion. The algorithm is verified on the Mr.Nice mobile robot; experimental results show that it optimizes the robot's path and improves the directionality, smoothness, and rapidity of motion.

Second, the thesis proposes a path planning algorithm for mobile robots based on reinforcement learning. To avoid the "curse of dimensionality", the obstacle information around the robot measured by the LIDAR and the position of the target are discretized into a limited set of states. In addition, a continuous reward function is designed so that the robot receives a reward for every action, which effectively improves the training results. A simulation environment is established in Gazebo to train the agent, and the training results verify the effectiveness of the algorithm. An experiment on an actual robot further verifies that the algorithm can complete the navigation task in a realistic environment.

Then, aiming at the problems of slow convergence and the discontinuity of states in reinforcement learning, the thesis introduces deep reinforcement learning and uses a neural network to fit the Q-table. The environment model is designed in a continuous state space using the distribution of obstacles within the 120° area in front of the robot and the orientation of the local target. Because there are too few valid samples in the training pool for this mobile robot problem, and the learning efficiency is therefore low, prioritized experience replay is used to increase the probability that effective samples are sampled and trained. A simulation environment is established in Gazebo, and the training results verify the effectiveness of the algorithm.

Finally, a simulation comparison between deep reinforcement learning and reinforcement learning shows that deep reinforcement learning adapts better to complicated obstacles. A further comparison among the improved DWA, reinforcement learning, and deep reinforcement learning shows that the improved DWA and deep reinforcement learning are more adaptive to dynamic obstacles, while reinforcement learning and deep reinforcement learning have advantages in computational efficiency.
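The improved DWA evaluation described above, with a reference-trajectory term plus sub-functions for direction, speed smoothing, and velocity, can be sketched roughly as follows. This is a minimal illustration, not the thesis's implementation: the weights, the function name `evaluate_trajectory`, and the exact form of each sub-function are assumptions for the sketch.

```python
import math

def evaluate_trajectory(traj, v, prev_v, goal, ref_path,
                        w_ref=1.0, w_dir=1.0, w_smooth=0.5, w_vel=0.2):
    """Score one forward-simulated DWA trajectory (higher is better).

    traj     : list of (x, y, theta) poses from forward simulation
    v        : translational velocity of this candidate
    prev_v   : velocity chosen in the previous control cycle
    goal     : (x, y) of the local goal
    ref_path : list of (x, y) waypoints from the global planner (e.g. A*)
    All weights and sub-function forms are illustrative assumptions.
    """
    end_x, end_y, end_th = traj[-1]

    # Reference-trajectory sub-function: stay close to the global path.
    ref_dist = min(math.hypot(end_x - px, end_y - py) for px, py in ref_path)
    ref_score = -ref_dist

    # Direction sub-function: final heading should point toward the goal.
    goal_heading = math.atan2(goal[1] - end_y, goal[0] - end_x)
    heading_err = abs(math.atan2(math.sin(goal_heading - end_th),
                                 math.cos(goal_heading - end_th)))
    dir_score = -heading_err

    # Speed-smoothing sub-function: penalize abrupt velocity changes.
    smooth_score = -abs(v - prev_v)

    # Velocity sub-function: prefer faster motion.
    vel_score = v

    return (w_ref * ref_score + w_dir * dir_score
            + w_smooth * smooth_score + w_vel * vel_score)
```

The planner would call this for every sampled (v, w) pair inside the dynamic window and execute the highest-scoring candidate; a trajectory that both hugs the global path and keeps its heading on the goal dominates one that drifts away.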
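The reinforcement-learning stage, a dense per-step reward over discretized states with a tabular Q-learning update, can be sketched as below. The reward shape (progress toward the goal, terminal bonuses/penalties), the constants, and the helper names are illustrative assumptions, not the thesis's actual design.

```python
from collections import defaultdict

def continuous_reward(prev_dist, dist, collided, reached,
                      c_progress=10.0, r_goal=100.0, r_crash=-100.0):
    """Dense reward: every action is scored by progress toward the goal,
    so the agent is not limited to sparse terminal feedback.
    Constants are assumed values for illustration."""
    if reached:
        return r_goal
    if collided:
        return r_crash
    return c_progress * (prev_dist - dist)

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: Q(s,a) += alpha * (TD target - Q(s,a)).
    Q maps a discrete state to a dict of action-values."""
    best_next = max(Q[s_next].values(), default=0.0)
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Q-table over discretized (LIDAR sector, target direction) states.
Q = defaultdict(lambda: defaultdict(float))
```

In the deep variant described next, the `Q` table is replaced by a neural network over the continuous 120° obstacle distribution, and transitions are drawn from a prioritized replay buffer instead of being applied one by one.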
Keywords/Search Tags: Path Planning, DWA, Reinforcement Learning, Deep Reinforcement Learning