Most current mature robotic grasping technologies can only be applied to structured scenes with a fixed layout, and it is difficult to accurately grasp multiple types of objects with varied features and random placement in unstructured scenes. To enable a grasping robot to perform accurate and stable grasping operations on various types of objects whose positions are not fixed, this thesis designs a manipulator grasping system based on deep learning. First, a Kinect v2 camera collects color and depth images of the grasping scene, and the trained Mask R-CNN network segments the target object in the color image to obtain its mask. Second, combined with the depth image, the 2D pixels of the mask region are converted into the 3D scene point cloud of the target object. Third, a point cloud registration algorithm computes the rigid transformation between the point cloud template and the scene point cloud, and the grasping pose in the camera coordinate system is calculated from this transformation. Finally, the hand-eye calibration parameters convert the grasping pose into the end-effector grasping pose of the UR5 manipulator; from this pose, the joint angles of the manipulator are solved by inverse kinematics to guide the manipulator through the grasping task.

This thesis completes the following research work:

1. To obtain 3D measurement information from the 2D image, this thesis designs a vision system based on the Kinect v2 camera. The Kinect v2 camera captures images of the grasping scene, and the camera and the grasping system are calibrated to obtain the camera parameters and the hand-eye calibration parameters.

2. To obtain the grasping pose of the manipulator end effector, this thesis uses the Mask R-CNN instance segmentation network to segment the mask of a single target object in the image, combines the result with the depth image to convert the 2D pixels of the mask region into the 3D point cloud of the target object, and establishes a point cloud template library for the target objects. The SAC-IA and ICP point cloud registration algorithms are then used to obtain the rigid-body transformation between the point cloud template and the scene point cloud, and the end-effector grasping pose is further calculated from this transformation combined with the hand-eye calibration parameters.

3. To make the manipulator end effector move to the grasping pose and complete the grasping task, this thesis designs a manipulator grasping control system. The tool center point of the UR5 manipulator is calibrated by the four-point method, the kinematic model of the UR5 is established with the standard D-H modeling method and its accuracy is verified, and the grasping control process of the manipulator is designed.

4. To test the actual grasping performance of the system, this thesis builds a manipulator grasping experimental platform and designs real grasping experiments. The experimental results show that the average success rate of the system over repeated grasps reaches 92% for single objects and 86.25% for multiple objects. This demonstrates that the grasping system can accurately and stably grasp various types of objects whose positions are not fixed.
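The conversion from masked depth pixels to a camera-frame point cloud described above follows the standard pinhole back-projection model. The sketch below illustrates the idea in Python/NumPy; the intrinsics `fx`, `fy`, `cx`, `cy` are placeholder values standing in for the parameters obtained from the Kinect v2 calibration step, and `backproject` is a hypothetical helper name, not code from the thesis:

```python
import numpy as np

# Placeholder intrinsics (assumed values, roughly Kinect v2 depth-camera scale);
# the real ones come from the camera calibration described in the thesis.
fx, fy = 365.0, 365.0   # focal lengths in pixels
cx, cy = 256.0, 212.0   # principal point in pixels

def backproject(mask, depth):
    """Convert masked depth pixels (metres) into an N x 3 point cloud in the
    camera frame using the pinhole model:
        X = (u - cx) * Z / fx,   Y = (v - cy) * Z / fy,   Z = depth(v, u)
    Pixels with zero (invalid) depth are skipped.
    """
    vs, us = np.nonzero(mask & (depth > 0))  # pixel coords inside the mask
    z = depth[vs, us]
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Toy example: a single masked pixel at the principal point, 2 m away,
# back-projects onto the optical axis at (0, 0, 2).
depth = np.zeros((424, 512))
depth[212, 256] = 2.0
mask = np.zeros((424, 512), dtype=bool)
mask[212, 256] = True
points = backproject(mask, depth)
```

The resulting camera-frame points are what the SAC-IA/ICP registration stage aligns against the point cloud template to recover the object pose.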