
Deep-Learning-Based Robot Grasp Detection Using an RGB-D Sensor

Posted on: 2020-08-24    Degree: Master    Type: Thesis
Country: China    Candidate: J Xia    Full Text: PDF
GTID: 2428330623959812    Subject: Control Science and Engineering
Abstract/Summary:
Grasping is one of the essential skills of intelligent robots, yet grasping objects in unstructured and dynamic environments such as warehouses and living rooms remains a significant challenge. Targeting robot grasping in unstructured environments, this thesis proposes two robot grasp detection methods, applied respectively to pick-and-place tasks and to daily manipulation tasks involving novel objects in new environments. A robot grasping system using a Kinect RGB-D sensor and a UR5 robot is developed to verify the practical effectiveness of these methods.

For the pick-and-place task, planar grasp detection based on convolutional neural networks is studied, taking an RGB image as input. To improve the speed and accuracy of planar grasp detection, a two-model cascaded method is proposed first: the general object detection network R-FCN locates the grasp point, and a convolutional neural network estimates the grasp angle. A new single-model grasp detector is then proposed by optimizing the structure of the two-model cascade. Compared with the two-model cascade, which requires a shorter training time, the single-model method achieves better overall performance. The two proposed methods reach 93.8% and 94.2% Top-1 grasp rectangle accuracy on the Cornell grasp dataset, with inference speeds of 17.5 fps and 22.7 fps respectively.

For robot daily manipulation, the grasp detection task is divided into two subproblems: (A) deciding which areas of an object can be grasped to perform the task, and (B) obtaining a stable grasp on those areas. The graspable area representing the task constraints is determined by the object category and part-level affordance. A Mask R-CNN-like network is used for object instance segmentation and affordance detection in the image, and graspable areas with outliers filtered out are obtained through the mapping between image pixels and organized point clouds. Because complete 3D data of objects is difficult to obtain in real scenes, a grasp pose detection method based on partial point clouds is proposed: each grasp candidate is represented as a one-channel grasp image, and a position-sensitive convolutional neural network is built to select the Top-N grasp poses. Experiments on open datasets and tests in real scenarios demonstrate the good performance of the graspable area extraction and grasp pose detection methods.

Building on the research above, a real autonomous robot grasping system including perception, planning, and interactive control is built to verify the feasibility and usability of the proposed grasp detection methods.
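The two-stage cascade described above (a detector locating the grasp point, followed by a small CNN estimating the grasp angle) can be sketched roughly as follows. This is a minimal illustration only, not the thesis implementation: torchvision's Faster R-CNN stands in for R-FCN, and the patch size (64x64) and the discretization of the angle into 18 bins are assumptions made for the example.

```python
# Minimal sketch of a two-stage cascaded grasp detector (assumed details noted below).
# Assumptions: Faster R-CNN replaces R-FCN as the grasp-point locator, the angle is
# classified into 18 discrete bins, and candidate crops are resized to 64x64.
import torch
import torch.nn as nn
import torchvision
from torchvision.transforms.functional import resized_crop

class AngleNet(nn.Module):
    """Small CNN that classifies the grasp angle of an image patch into bins."""
    def __init__(self, num_bins: int = 18):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_bins)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def detect_grasps(image: torch.Tensor, top_k: int = 5):
    """Stage 1: locate candidate grasp regions; stage 2: estimate their grasp angles."""
    detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
    angle_net = AngleNet().eval()

    with torch.no_grad():
        boxes = detector([image])[0]["boxes"][:top_k]   # candidate grasp regions
        grasps = []
        for x1, y1, x2, y2 in boxes.tolist():
            patch = resized_crop(image, int(y1), int(x1),
                                 int(y2 - y1), int(x2 - x1), [64, 64])
            bin_idx = angle_net(patch.unsqueeze(0)).argmax(dim=1).item()
            angle = bin_idx * 180.0 / 18                 # bin centre, in degrees
            cx, cy = (x1 + x2) / 2, (y1 + y2) / 2        # grasp point = box centre
            grasps.append((cx, cy, angle))
    return grasps
```

The single-model detector mentioned in the abstract would merge these two stages into one network; the sketch only illustrates the cascaded variant.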
Keywords/Search Tags: Robot Grasp Detection, RGB-D, Deep Learning, Convolutional Neural Network, Object Affordance, Grasp Pose Detection