
Research On Robot Path Planning Under Unknown Environment Based On Deep Reinforcement Learning

Posted on: 2019-09-09    Degree: Master    Type: Thesis
Country: China    Candidate: X J Bu    Full Text: PDF
GTID: 2428330566498277    Subject: Mechanical and electrical engineering
Abstract/Summary:
Service robots often need to move in and out of rooms to complete their tasks, which requires them to acquire environmental information autonomously in an indoor environment and then carry out path planning and navigation; navigation is therefore essential for an autonomous mobile robot. In a dynamic, unknown environment the robot perceives only its local surroundings and sometimes cannot obtain accurate target coordinates, so it can plan its path only from locally sensed feedback. How to use the available information effectively thus becomes the key problem of path planning in a dynamic unknown environment. Traditional path-planning methods for dynamic environments rely on map information and plan paths on the premise that the map is known; they cannot plan a path by visual servoing in an unknown environment. In this thesis, deep reinforcement learning is used to perform path planning when the robot has no map information.

First, a kinematic model of the differential-drive mobile robot is established. To validate its accuracy and reliability, several kinematic simulations are carried out in MATLAB, including point stabilization, hyperbola tracking, and circular-curve tracking, which confirm the reliability of the kinematic model. A robot model is then built in ROS on top of this kinematic model, to which a visual (depth) sensor, a collision sensor, and a wheel odometer are added. Each sensor publishes its data as messages on a corresponding topic, so the robot's real-time environmental information can be obtained simply by subscribing to these topics, laying the foundation for the subsequent path planning.

Next, the reinforcement-learning model is established. An A3C algorithm, which combines Q-learning-style value estimation with policy-gradient decision making, is used to train the robot's motion. Taking the differential-drive characteristics of the robot into account, the depth-camera image serves as the input and the robot's linear and angular velocities serve as the outputs, so that an end-to-end training model is obtained. Experiments in the Gazebo environment verify the effectiveness of the algorithm.

Finally, to address the long training time of reinforcement learning, a training method based on the minimum depth-of-field information is proposed, which optimizes the construction of the state space and improves learning efficiency. Compared with the common training method, the proposed method learns more efficiently. Obstacle-avoidance experiments in unknown and dynamic environments are also carried out, and experimental verification in the real environment confirms the validity of the algorithm for path planning in an unknown environment, realizing exploration of the unknown environment and map building.
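The abstract does not reproduce the kinematic model itself. For reference, a standard differential-drive kinematic model, written here in LaTeX with assumed symbols (v: linear velocity, ω: angular velocity, r: wheel radius, L: wheel track, ω_r, ω_l: wheel angular velocities; none of these are taken from the thesis), has the form

\begin{aligned}
\dot{x} &= v\cos\theta, & \dot{y} &= v\sin\theta, & \dot{\theta} &= \omega,\\
v &= \tfrac{r}{2}\,(\omega_r + \omega_l), & \omega &= \tfrac{r}{L}\,(\omega_r - \omega_l),
\end{aligned}

where (x, y, θ) is the robot pose in the plane. The nonholonomic constraint implied by these equations is what the keyword "kinematic constraints" refers to.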
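The following minimal sketch (not the thesis code) illustrates how such sensor topics are consumed in a ROS node written in Python with rospy; the topic names /camera/depth/image_raw, /odom and /cmd_vel are common ROS defaults and are assumptions here.

import rospy
from sensor_msgs.msg import Image
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Twist

def on_depth(msg):
    # msg holds the raw depth image; a real node would convert it
    # (e.g. with cv_bridge) before feeding it to the policy.
    rospy.loginfo("depth image %dx%d received", msg.width, msg.height)

def on_odom(msg):
    rospy.loginfo("pose x=%.2f y=%.2f", msg.pose.pose.position.x,
                  msg.pose.pose.position.y)

rospy.init_node("rl_path_planner")
rospy.Subscriber("/camera/depth/image_raw", Image, on_depth)
rospy.Subscriber("/odom", Odometry, on_odom)
cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)  # velocity commands out
rospy.spin()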
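As an illustration of the end-to-end mapping described above (depth image in, linear and angular velocity out), the sketch below shows one possible actor network in PyTorch. The layer sizes, the 60x80 input resolution and the velocity bounds are assumptions made for the example and are not the architecture used in the thesis.

import torch
import torch.nn as nn

class DepthActor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer the flattened feature size from a dummy frame
            n = self.features(torch.zeros(1, 1, 60, 80)).shape[1]
        self.head = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, depth):
        out = self.head(self.features(depth))
        v = 0.5 * torch.sigmoid(out[:, 0:1])   # linear velocity in [0, 0.5] m/s (assumed bound)
        w = 1.0 * torch.tanh(out[:, 1:2])      # angular velocity in [-1, 1] rad/s (assumed bound)
        return v, w

# one forward pass on a dummy depth frame
v, w = DepthActor()(torch.zeros(1, 1, 60, 80))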
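The abstract does not detail how the minimum depth-of-field information is extracted. A plausible reading, sketched below with NumPy, is to compress each depth frame into the minimum distance per angular sector, yielding a compact state vector; the sector count, frame size and metric units are assumptions.

import numpy as np

def min_depth_state(depth, sectors=10):
    # Closest obstacle distance per image sector, invalid pixels ignored.
    depth = np.where(np.isfinite(depth), depth, np.inf)
    return np.array([c.min() for c in np.array_split(depth, sectors, axis=1)])

state = min_depth_state(np.random.uniform(0.3, 5.0, size=(60, 80)))
print(state.shape)  # (10,)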
Keywords/Search Tags:Deep reinforcement learning, No-map information, Kinematic constraints, Path planning, Unknown environment