
Research On Home Robot Path Planning

Posted on: 2020-07-21
Degree: Master
Type: Thesis
Country: China
Candidate: J Zeng
Full Text: PDF
GTID: 2428330575474007
Subject: Control Science and Engineering
Abstract/Summary:
Robot path planning has long been a central topic in the study of robot motion control. Path planning requires the robot to generate a trajectory from a specified starting point to an end point, avoiding obstacles along the way while minimizing the distance traveled. Current path planning methods spend considerable time constructing a map; this paper instead uses a "trial and error" mechanism to study the path planning problem without a map. The paper is organized as follows:

(1) Solving the path planning problem with reinforcement learning. Current deep reinforcement learning algorithms are studied and analyzed, and two are applied to route planning in unknown environments: the value-function-based Dueling Double Deep Q-Network (DDQN) and the policy-search-based Deep Deterministic Policy Gradient (DDPG). In the training phase, a specific reward and punishment function is designed to address instability and state-space sparsity during learning. The initial robot position and target position are randomized to diversify the sample space and help the system converge. Convolutional neural networks (CNNs) are introduced to generalize over environment states and improve obstacle avoidance in unknown environments. Using limited radar data and the target location, an effective strategy is developed for tracking a dynamic target, as well as for obstacle avoidance, in various home environments. Experimental results show that the DQN, Dueling Double DQN, and DDPG algorithms based on prioritized sampling generalize well across different environments.

(2) Improving obstacle avoidance and dynamic-goal tracking through hierarchical reinforcement learning. The main idea is to decompose the original task into multiple sub-tasks. In this paper, the DDPG algorithm is used to train an obstacle-avoidance network and a tracking network separately in the unknown environment; the two individual networks are then integrated into a single network based on the minimum obstacle distance. This method overcomes the shortcoming of path planning algorithms that require complete static global environment information or dynamic obstacle motion information, realizes coordinated control between global navigation and local navigation, and adapts well to unknown environments.
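A reward and punishment function of the kind described in part (1) can be sketched as follows. This is an illustrative assumption, not the thesis's exact formulation: the function names, thresholds, and reward magnitudes (`shaped_reward`, `goal_radius`, the ±10 terminal rewards) are hypothetical, but the structure — a large penalty for collision, a large reward for reaching the goal, and a small dense term that rewards progress toward the target to counter state-space sparsity — matches the design goals stated in the abstract.

```python
def shaped_reward(dist_to_goal, prev_dist_to_goal, min_obstacle_dist,
                  goal_radius=0.2, collision_radius=0.15):
    """Hypothetical reward shaping for map-free RL path planning."""
    if min_obstacle_dist < collision_radius:   # collided with an obstacle
        return -10.0
    if dist_to_goal < goal_radius:             # reached the target
        return 10.0
    # Dense shaping term: positive when the robot moved closer to the
    # goal this step, negative when it moved away. This combats the
    # sparse-reward problem mentioned in the training-phase discussion.
    return 2.0 * (prev_dist_to_goal - dist_to_goal)
```

Randomizing the start and goal positions each episode, as the abstract describes, then exposes this reward function to a diverse sample space during training.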
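The hierarchical integration in part (2) — combining a trained tracking network and a trained obstacle-avoidance network based on the minimum obstacle distance — can be sketched as a simple arbitration rule. The names (`select_action`, `safe_dist`) and the hard-switch threshold are illustrative assumptions; the thesis may blend the two policies rather than switch between them outright.

```python
def select_action(radar_scan, tracking_policy, avoidance_policy,
                  safe_dist=0.5):
    """Hypothetical arbitration between two separately trained policies.

    radar_scan: list of range readings from the robot's radar.
    tracking_policy / avoidance_policy: callables mapping a scan to an
    action (e.g. the two DDPG actor networks trained separately).
    """
    d_min = min(radar_scan)        # minimum obstacle distance in the scan
    if d_min < safe_dist:
        return avoidance_policy(radar_scan)   # obstacle nearby: avoid it
    return tracking_policy(radar_scan)        # path clear: pursue target
```

This realizes the coordinated control the abstract describes: the tracking network handles global navigation toward the goal, while the avoidance network takes over for local navigation whenever an obstacle comes within the safety distance.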
Keywords/Search Tags: reinforcement learning, path planning, unknown environment, robot vehicle