
Learning Robotic Grasp Using Deep Reinforcement Learning

Posted on: 2021-06-20  Degree: Master  Type: Thesis
Country: China  Candidate: X He  Full Text: PDF
GTID: 2518306557987229  Subject: Control Engineering
Abstract/Summary:
Grasping is one of the main ways a robot interacts with the real world. At present, robot grasping is widely used in structured settings such as industrial production lines, where it replaces humans in simple, repetitive work. However, in unstructured environments such as homes, warehouses and logistics centers, objects vary in type and are placed randomly, so traditional grasping methods are no longer applicable and a more adaptable, more generalizable grasping method is needed. Grasping methods based on deep learning can cope with unstructured environments to some extent, but their training depends on manually labeled data sets, and the robot can neither learn autonomously nor adapt to its environment. In addition, grasping alone is not effective in some situations. To address these problems, this thesis studies robotic grasping in unstructured environments with deep reinforcement learning. The specific content and research results are as follows:

First, the theoretical basis and main methods of deep reinforcement learning are studied, the advantages of applying deep reinforcement learning to robot grasping are analyzed, and a grasping method based on the Deep Q-Network (DQN) is proposed that models the robot grasping process. The grasping model is trained by self-supervised learning.

Second, a prioritized experience replay method based on a power-law distribution is proposed to improve sample efficiency during training. The grasping model is trained with the improved method in a simulation environment and compared against the baseline. The experimental results show an average grasping success rate of 87.7% in single-object scenes and 83.4% in multi-object scenes, and the prioritized experience replay method improves the robot's learning speed by a factor of 900.

Third, a method that learns synergies between stirring and grasping with the DQN is proposed: stirring is used to change the environment and create more space for grasping, which addresses the problem of insufficient grasping space when objects are placed close together. Simulation results show that, compared with grasping alone, the combined stirring-and-grasping method improves the average grasping success rate by 6.4% in random scenes and by 25.8% in densely cluttered scenes, demonstrating that it can complete tasks that are difficult to accomplish by grasping alone.

Finally, a robot grasping experiment platform is built, robot hand-eye calibration is completed, and grasping experiments are carried out in the real environment. The pretrained grasping model is shown to transfer to the real environment, and the robot acquires both grasping ability and stirring-grasping synergies through self-supervised learning. Experimental results show that the proposed method is effective and generalizes well in unstructured working environments.
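The abstract does not give the exact form of the power-law prioritized experience replay. As a minimal sketch, assuming a rank-based scheme in which transition i is drawn with probability proportional to 1/rank(i)^alpha (ranking by absolute TD error), one possible implementation is:

```python
import numpy as np


class PowerLawReplayBuffer:
    """Rank-based prioritized replay: transition i is sampled with probability
    proportional to 1 / rank(i)**alpha, i.e. a power law over the rank of its
    absolute TD error (a hypothetical form, not taken from the thesis)."""

    def __init__(self, capacity=50000, alpha=0.7):
        self.capacity = capacity
        self.alpha = alpha
        self.transitions = []   # stored (s, a, r, s_next, done) tuples
        self.priorities = []    # |TD error| of each stored transition

    def push(self, transition, td_error=1.0):
        if len(self.transitions) >= self.capacity:     # drop the oldest sample
            self.transitions.pop(0)
            self.priorities.pop(0)
        self.transitions.append(transition)
        self.priorities.append(abs(td_error) + 1e-6)   # never exactly zero

    def sample(self, batch_size):
        # Rank 1 = largest |TD error|; sampling probability decays as rank**-alpha.
        order = np.argsort(-np.asarray(self.priorities))
        ranks = np.empty(len(order), dtype=np.int64)
        ranks[order] = np.arange(1, len(order) + 1)
        probs = ranks ** (-self.alpha)
        probs = probs / probs.sum()
        idx = np.random.choice(len(self.transitions), size=batch_size, p=probs)
        return [self.transitions[i] for i in idx], idx

    def update_priorities(self, idx, td_errors):
        # Called after a training step with the fresh TD errors of the batch.
        for i, err in zip(idx, td_errors):
            self.priorities[i] = abs(err) + 1e-6
```

Importance-sampling weights, which standard prioritized replay uses to correct the sampling bias, are omitted here for brevity; the reported success rates and speed-up come from the thesis's own variant, not from this sketch.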
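The abstract also does not describe how the DQN's output is mapped to stirring and grasping actions. A common formulation in related pixel-wise DQN work, shown here only as an assumed sketch, predicts one Q map per primitive over rotated heightmaps and executes the primitive and pixel with the highest value:

```python
import numpy as np


def select_action(q_stir, q_grasp):
    """Pick the motion primitive (stir or grasp) and its parameters.

    q_stir, q_grasp: hypothetical DQN outputs of shape (num_rotations, H, W);
    each pixel corresponds to a 3-D location in the workspace and each
    rotation index to an end-effector orientation.
    """
    best_stir = np.unravel_index(np.argmax(q_stir), q_stir.shape)
    best_grasp = np.unravel_index(np.argmax(q_grasp), q_grasp.shape)
    if q_stir[best_stir] > q_grasp[best_grasp]:
        return "stir", best_stir      # (rotation, row, col) -> stirring pose
    return "grasp", best_grasp        # (rotation, row, col) -> grasping pose
```

In related work the non-prehensile primitive is reinforced only when it makes subsequent grasps easier; the thesis presumably defines its own reward, which the abstract does not specify.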
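For the real-robot experiments, the hand-eye calibration step relates camera observations to the robot's frames. The abstract does not state which method or camera configuration was used; a minimal sketch with OpenCV's calibrateHandEye, assuming an eye-in-hand camera and Tsai's method, looks like:

```python
import cv2


def calibrate_eye_in_hand(R_gripper2base, t_gripper2base,
                          R_target2cam, t_target2cam):
    """Estimate the camera-to-gripper transform from several robot poses
    that each observe a fixed calibration board.

    All arguments are lists of 3x3 rotation matrices / 3x1 translation
    vectors, one pair per recorded pose.
    """
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI,
    )
    return R_cam2gripper, t_cam2gripper
```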
Keywords/Search Tags: robot grasping, deep reinforcement learning, unstructured working environment, synergies between stirring and grasping