
A Research On Navigation Strategy Of Mobile Robot Based On Deep Reinforcement Learning

Posted on: 2022-10-21
Degree: Master
Type: Thesis
Country: China
Candidate: R Huang
Full Text: PDF
GTID: 2518306524489704
Subject: Master of Engineering
Abstract/Summary:
Mobile robots are widely used across the economy and society, covering industry, agriculture, commerce and other fields. Autonomous navigation is the most basic capability of a mobile robot and the premise of its other functions. Mobile robot navigation comprises three elements: environment perception, mapping and positioning, and path planning. Among these, building a map for a mobile robot takes considerable time and effort, and as the navigation environment changes the map must be redrawn to maintain navigation performance. In some scenarios, such as military reconnaissance, resource exploration and earthquake relief, a mobile robot cannot obtain an environment map before performing its navigation task, which greatly limits its use.

This paper focuses on mobile robot navigation based on deep reinforcement learning. We use the lidar mounted on the mobile robot as input and a deep reinforcement learning algorithm as the decision module to guide the robot to avoid obstacles and reach the target position in a simulation environment. No map needs to be built for the robot in advance: it interacts with the environment continuously, receives feedback from it, and ultimately learns a navigation strategy. The inputs to the deep reinforcement learning algorithm include distance information, obstacle information, sub-goal information, angle information and others, which make the decision process more efficient. The main contributions are as follows.

1. We apply h-DQN (Hierarchical Deep Q-Learning), a navigation method based on a hierarchical reinforcement learning architecture, to the navigation task of a mobile robot. The hierarchical learning process divides the navigation task into two stages: selecting sub-goals and executing specific actions. Specifically, the navigation process of a traditional mobile robot is regarded as a sequence of sub-problems in time order, and the navigation task is completed by solving these sub-problems one after another, which improves the success rate of the mobile robot in complex environments.

2. We add an LSTM (Long Short-Term Memory) network to the mobile robot navigation algorithm, which speeds up the convergence of the reinforcement learning algorithm and improves its generalization when facing new environments.

In the experiments, we set up two simple and two complex navigation environments in the Gazebo simulator and used the DQN and h-DQN algorithms to guide a TurtleBot3 robot through the navigation tasks. The results show that both DQN and h-DQN can complete the navigation task, but the latter achieves a higher success rate in complex environments. In addition, the reinforcement learning algorithm with the LSTM memory module converges faster. When the algorithm is transferred to a new environment, h-DQN makes good use of the prior knowledge from pre-training, whereas DQN may fail to converge.
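The abstract gives no implementation details, but the two-level decision scheme it describes can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual code: the input dimensions, the (distance, angle) encodings of target and sub-goal, and the random linear maps standing in for the two trained Q-networks are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_LIDAR = 24      # assumed number of lidar beams fed to both levels
N_SUBGOALS = 8    # assumed number of candidate sub-goals around the robot
N_ACTIONS = 5     # assumed number of discrete motion commands

# Stand-ins for the two trained Q-networks: simple random linear maps.
W_meta = rng.normal(size=(N_LIDAR + 2, N_SUBGOALS))  # +2: target (dist, angle)
W_ctrl = rng.normal(size=(N_LIDAR + 2, N_ACTIONS))   # +2: sub-goal (dist, angle)

def select_subgoal(lidar, target):
    """Meta-controller: pick the sub-goal with the highest Q-value."""
    state = np.concatenate([lidar, target])
    return int(np.argmax(state @ W_meta))

def select_action(lidar, subgoal, eps=0.1):
    """Controller: epsilon-greedy primitive action toward the current sub-goal."""
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    state = np.concatenate([lidar, subgoal])
    return int(np.argmax(state @ W_ctrl))

# One decision step of the two-level loop.
lidar = rng.uniform(0.2, 3.5, size=N_LIDAR)           # simulated ranges (m)
target = np.array([2.0, 0.5])                          # (distance, bearing) to goal
g = select_subgoal(lidar, target)
subgoal = np.array([0.5, 2 * np.pi * g / N_SUBGOALS])  # hypothetical encoding
a = select_action(lidar, subgoal)
```

In a full h-DQN agent the controller would keep issuing actions until the chosen sub-goal is reached or times out, at which point the meta-controller picks the next sub-goal.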
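The LSTM memory module mentioned above can likewise be sketched in miniature. The following is a toy single-cell LSTM followed by a linear Q-value head, written from the standard LSTM equations; the layer sizes, random weights and lidar-frame trajectory are illustrative assumptions, not the thesis's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMQNet:
    """Minimal LSTM cell plus a linear Q head (illustrative random weights)."""

    def __init__(self, n_in, n_hid, n_actions):
        self.n_hid = n_hid
        # One stacked weight matrix for the four LSTM gates (i, f, o, g).
        self.W = rng.normal(0, 0.1, size=(n_in + n_hid, 4 * n_hid))
        self.b = np.zeros(4 * n_hid)
        self.W_q = rng.normal(0, 0.1, size=(n_hid, n_actions))

    def step(self, x, h, c):
        """One time step: consume an observation, update (h, c), emit Q-values."""
        z = np.concatenate([x, h]) @ self.W + self.b
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # new cell state
        h = sigmoid(o) * np.tanh(c)                   # new hidden state
        return h @ self.W_q, h, c

net = LSTMQNet(n_in=24, n_hid=32, n_actions=5)
h, c = np.zeros(32), np.zeros(32)
for _ in range(10):                          # a short trajectory of lidar frames
    x = rng.uniform(0.2, 3.5, size=24)
    q, h, c = net.step(x, h, c)
action = int(np.argmax(q))
```

Because the recurrent state (h, c) is carried across time steps, the Q-values at each step can depend on earlier observations, which is the memory effect the abstract credits for faster convergence and better generalization.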
Keywords/Search Tags:hierarchical reinforcement learning, memory, mapless navigation, mobile robot, path planning