
Research On Mobile Robot Navigation Based On Deep Reinforcement Learning

Posted on: 2022-03-12
Degree: Master
Type: Thesis
Country: China
Candidate: K P Gao
Full Text: PDF
GTID: 2518306725493004
Subject: Computer Science and Technology
Abstract/Summary:
Navigation means generating a route and controlling the robot to move from its current position to a destination; it is one of the core capabilities of a mobile robot. As mobile robots enter all walks of life, autonomous navigation faces new difficulties and challenges. On the one hand, navigation must avoid obstacles and be efficient: the robot should not collide with static or dynamic obstacles during its movement and should reach the destination as quickly as possible. On the other hand, navigation must generalize, because sensor errors and changes of scene affect the robot's state in the real world. Although many traditional navigation algorithms have been proposed, they lack the ability to perceive and learn. Deep reinforcement learning, with its strong capabilities for understanding and decision-making, has great application prospects in robot navigation. This paper studies navigation based on deep reinforcement learning in order to realize intelligent obstacle avoidance and efficient movement, help the robot adapt flexibly to a variety of scenarios, and reduce computing costs. It focuses on a hierarchical navigation structure and on generalization, proposing a reinforcement learning navigation algorithm and a transfer algorithm for navigation. Both algorithms are verified in virtual and real scenes, which demonstrates their effectiveness. The main contents are as follows:

1. A hierarchical reinforcement learning navigation algorithm is proposed. The local planner is based on the Deep Deterministic Policy Gradient (DDPG) algorithm: it is end-to-end and map-free, so it needs neither mapping nor hand-crafted evaluation functions; its continuous action space provides greater mobility; and its lightweight network structure allows deployment on low-cost platforms. The global planner combines reinforcement learning with a probabilistic roadmap (PRM), using a proposed value-function-based connection method to build dense roadmaps rapidly. A customized simulation environment is built to train and evaluate the algorithms efficiently, and experiments in several scenes show their effect (informal sketches of the two planner components follow this abstract).

2. The transfer of reinforcement learning navigation policies is studied, giving guidance for cross-scene navigation, and a transfer algorithm based on state mapping and one-step reconstruction is proposed. For scenes whose states change little, generalization is improved by increasing the number of training MDPs and adding random noise. For scenes with large changes, state mapping avoids retraining, while a one-step reconstruction error speeds up transfer and improves the efficiency of data use. Compared with retraining, the transfer algorithm achieves comparable results with far fewer samples.

3. The navigation system is applied to real office scenarios. Combined with localization and mapping, a reinforcement learning navigation system is constructed and deployed on a Turtlebot3-Waffle, which navigates intelligently and avoids dynamic obstacles in the real scene.

Experiments show that the proposed methods perform well in planning and obstacle avoidance and give policies strong transfer ability. Experiments in real scenes also show that the algorithms are practical.
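As an informal illustration of the kind of local planner described in contribution 1, the sketch below shows a single DDPG actor-critic update in PyTorch. The state and action dimensions, network sizes, and hyperparameters are assumptions made for the example, not the configuration used in the thesis.

```python
# Minimal DDPG update sketch (PyTorch).  Dimensions, network sizes and
# hyperparameters are illustrative assumptions, not the thesis's settings.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 28, 2   # assumed: e.g. sparse laser scan + goal; (v, w)
GAMMA, TAU = 0.99, 0.005        # assumed discount factor and soft-update rate

class Actor(nn.Module):
    """Maps a state to a continuous action in [-1, 1]^ACTION_DIM."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM), nn.Tanh(),
        )
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Estimates Q(s, a) for a state-action pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_tgt, critic_tgt = Actor(), Critic()
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s_next, done):
    """One update from a replay batch; r and done have shape (batch, 1)."""
    # Critic: regress Q(s, a) toward the bootstrapped target value.
    with torch.no_grad():
        q_next = critic_tgt(s_next, actor_tgt(s_next))
        target = r + GAMMA * (1.0 - done) * q_next
    critic_loss = nn.functional.mse_loss(critic(s, a), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the critic's estimate of Q(s, actor(s)).
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Slowly track the online networks with the target networks.
    for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1.0 - TAU).add_(TAU * p.data)
```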
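The abstract also mentions a "connection method based on value function" for building dense roadmaps but does not spell out the rule here. The sketch below is only one hedged reading: a candidate edge between two sampled roadmap nodes is kept when a learned value estimate predicts the local planner can drive between them. The names `value_fn` and `sample_free_point` and the threshold are hypothetical placeholders.

```python
# Hypothetical sketch: using a learned value estimate to gate PRM connections.
# The connection rule, names and threshold are assumptions; the abstract only
# states that a value-function-based method builds dense roadmaps rapidly.
import math
import networkx as nx

def build_value_gated_prm(value_fn, sample_free_point, n_nodes=200,
                          neighbor_radius=3.0, value_threshold=0.0):
    """Build a roadmap whose edges are gated by a learned value estimate.

    value_fn(p, q) -> float : assumed score of how well the learned local
                              planner can drive from point p to point q.
    sample_free_point() -> (x, y) : assumed collision-free point sampler.
    """
    nodes = [sample_free_point() for _ in range(n_nodes)]
    graph = nx.Graph()
    graph.add_nodes_from(range(n_nodes))

    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            dist = math.dist(nodes[i], nodes[j])
            if dist > neighbor_radius:
                continue                          # only consider nearby pairs
            if value_fn(nodes[i], nodes[j]) > value_threshold:
                graph.add_edge(i, j, weight=dist)  # edge deemed drivable
    return graph, nodes
```

In this reading, a shortest path over the resulting graph would supply waypoints for the DDPG local planner to track.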
Keywords/Search Tags: Navigation, Robot Intelligence, Reinforcement Learning, DDPG