
Robotic Grasp Detection Based On RGB-D Image

Posted on: 2020-12-14  Degree: Master  Type: Thesis
Country: China  Candidate: K Y Zhang  Full Text: PDF
GTID: 2428330572469372  Subject: Mechanical design and theory
Abstract/Summary:
Grasping is a necessary capability for robots to attain general-purpose utility in tasks such as sorting, assembly and pick-and-place. An efficient, reliable and robust grasping method is important for improving efficiency and reducing cost. Traditionally, implementing grasping requires considerable expert knowledge; it is time-consuming and task-specific, and lacks reusability and robustness. In this paper, a CNN-based deep learning model is proposed to predict graspable locations for objects. The method describes a grasp with a five-dimensional rectangle representation and detects grasps by dividing the image into grids. The paper also calibrates a Kinect sensor with a robot arm in an eye-to-hand configuration based on ArUco, and builds a mapping from the image frame to the robot coordinate frame using the Kinect intrinsic parameters, the hand-eye matrix and the robot kinematics. Furthermore, the paper develops a robot arm control platform and completes grasping experiments on it.

The first chapter presents the background, significance and research contents, and reviews the research status of robotic vision and grasping, including hand-eye calibration and grasp detection.

The second chapter calibrates the color and IR sensors of the Kinect and registers the RGB and depth images. The eye-to-hand calibration equation and its two-step solution are given, and hand-eye calibration experiments are carried out with the ArUco library.

The third chapter proposes a deep learning grasp detection model based on RGB-D images. The five-dimensional representation and the corresponding evaluation metric are given first; then the output form, loss function and network structure of the model are described; finally, the data preprocessing and anchor size generation based on k-means++ clustering are introduced.

The fourth chapter covers the pipeline from grasp locations in the RGB-D image to robot manipulation. The models trained in different ways are evaluated on the test dataset and on images of real objects. The robot kinematics model is then built to obtain the transformation between the robot pose and the joint angles, and the tool coordinate frame is constructed with a least-squares method to realize the motion path.

In the fifth chapter, a robot grasping system based on the Kinect sensor is developed, providing general functions such as tool/work coordinate settings, I/O control and manual operation. Grasping experiments are performed on this platform with good results.

The last chapter summarizes the innovations and contributions of this paper and gives suggestions for further work, such as ablation studies.
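As an illustration of the mapping from the image frame to the robot coordinate frame described above, the following is a minimal sketch: it back-projects a pixel and its depth with the camera intrinsics and transforms the resulting point with an eye-to-hand matrix. The function name `pixel_to_base` and the argument names are illustrative, not taken from the thesis.

```python
import numpy as np

def pixel_to_base(u, v, depth, K, T_base_cam):
    """Map a pixel (u, v) with its depth (metres) into the robot base frame.

    K          : 3x3 Kinect colour-camera intrinsic matrix
    T_base_cam : 4x4 eye-to-hand matrix (camera pose in the robot base frame)
    """
    # Back-project into the camera frame: p_cam = depth * K^-1 @ [u, v, 1]
    p_cam = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Transform into the robot base frame with the hand-eye calibration result
    p_base = T_base_cam @ np.append(p_cam, 1.0)
    return p_base[:3]
```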
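The five-dimensional grasp representation and its evaluation can be sketched as below. This assumes the commonly used rectangle metric (intersection-over-union above 0.25 and orientation difference under 30 degrees); the thesis's exact metric may differ, and `shapely` is used here only to compute the rotated-rectangle overlap.

```python
import numpy as np
from shapely.geometry import Polygon

def grasp_to_polygon(x, y, theta, w, h):
    """Convert a five-dimensional grasp (centre x, y, angle theta in radians,
    opening width w, jaw height h) into its rotated-rectangle corners."""
    c, s = np.cos(theta), np.sin(theta)
    dx, dy = np.array([c, s]) * w / 2, np.array([-s, c]) * h / 2
    centre = np.array([x, y])
    return Polygon([centre + dx + dy, centre + dx - dy,
                    centre - dx - dy, centre - dx + dy])

def rectangle_metric(pred, truth, iou_thresh=0.25, angle_thresh=np.deg2rad(30)):
    """A predicted grasp counts as correct if its IoU with a ground-truth
    rectangle exceeds the threshold and the orientation difference is small."""
    p, t = grasp_to_polygon(*pred), grasp_to_polygon(*truth)
    iou = p.intersection(t).area / p.union(t).area
    # Grasp angles are equivalent modulo 180 degrees
    angle_diff = abs((pred[2] - truth[2] + np.pi / 2) % np.pi - np.pi / 2)
    return iou > iou_thresh and angle_diff < angle_thresh
```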
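The anchor size generation by k-means++ clustering mentioned for the third chapter can be approximated with scikit-learn; the number of anchors and the choice of clustering features (rectangle width and height) are assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def generate_anchors(grasp_rects, n_anchors=3):
    """Cluster the (width, height) pairs of the training grasp rectangles
    with k-means++ initialisation; the cluster centres serve as anchor sizes
    for the grid-based detection head."""
    sizes = np.array([[w, h] for (_, _, _, w, h) in grasp_rects])
    km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10,
                random_state=0).fit(sizes)
    return km.cluster_centers_
```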
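The least-squares construction of the tool coordinate frame can be sketched as the classical multi-point TCP calibration, in which the tool tip touches one fixed point from several flange orientations; this particular formulation is an assumption and not necessarily the derivation used in the thesis.

```python
import numpy as np

def calibrate_tcp(rotations, translations):
    """Estimate the tool-centre-point offset (in the flange frame) from
    N flange poses that all touch the same fixed point in the base frame.

    rotations    : list of 3x3 flange-to-base rotation matrices
    translations : list of 3-vectors (flange origin in the base frame)
    """
    A, b = [], []
    for i in range(len(rotations)):
        for j in range(i + 1, len(rotations)):
            # R_i t + p_i = R_j t + p_j  ->  (R_i - R_j) t = p_j - p_i
            A.append(rotations[i] - rotations[j])
            b.append(translations[j] - translations[i])
    t_tool, *_ = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)
    return t_tool
```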
Keywords/Search Tags: robot arm, robot vision, grasp detection, RGB-D image