
Research On Scattered Parts Recognition And 3D Pose Estimation For Robot Picking

Posted on: 2022-09-10 | Degree: Master | Type: Thesis
Country: China | Candidate: C Wang | Full Text: PDF
GTID: 2518306572961539 | Subject: Mechanical engineering
Abstract/Summary:
Robotic automatic picking has broad application prospects in object handling, logistics sorting, loading and unloading, and similar tasks, especially in an aging society. However, the degree of intelligence of current robots is generally low and cannot meet picking needs under complex conditions. One of the key technologies for intelligent robotic picking is pose estimation, which determines the position and orientation of the picking target relative to the robot from data collected by vision and range sensors. Thanks to the continuous improvement of computing power, deep learning has made considerable progress in computer vision, which also provides new technical ideas and foundations for robot perception. To meet the picking needs of robots in complex environments, this thesis uses deep learning to build a pose-information extraction network and combines it with point cloud registration to estimate the poses of scattered objects. The specific content is as follows:

Firstly, an RGB-D (color-depth) sensor is selected, and its data collection and processing pipeline is completed. The internal and external parameters of the color camera and the depth camera are calibrated using the camera mathematical model and calibration principles. Based on the calibration result, the depth image is aligned pixel-by-pixel to the color image, and a temporal median filter is applied to the depth image, improving its imaging quality.

Secondly, a neural network model for extracting target pose information is constructed based on Mask R-CNN. A keypoint detection branch is added alongside the existing bounding-box and mask branches, enabling multi-task learning of object detection, instance segmentation, and keypoint detection. Data are collected with the RGB-D sensor, and a training dataset is created semi-automatically from them. After the model is trained on this custom dataset and the public LineMOD dataset, its detection performance is verified on the test set.

Subsequently, the pose of the target is estimated and its accuracy is optimized using the network predictions and the CAD model. By matching the image coordinates of the keypoints predicted by the network with their actual three-dimensional coordinates, the rough pose of the target is computed, and the random sample consensus (RANSAC) method is used to reduce the influence of wrongly predicted keypoints. Using the predicted mask as a reference, the target point cloud is extracted from the depth map, and the model point cloud is obtained from the CAD model combined with the rough pose. The accurate pose of the target is then obtained by registering the two point clouds. To test the reliability of the whole pose estimation algorithm, the accuracy of the estimated poses is evaluated on the public dataset and the custom dataset, and the results show that the algorithm performs well.

Finally, an experimental platform is built around a UR5 robot and a BY-E140 gripper, equipped with the RGB-D sensor. After hand-eye calibration, the robot picks the targets in the custom dataset, which verifies the reliability and practicality of the pose estimation algorithm.
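As an illustration of the depth-filtering step, the sketch below shows per-pixel temporal median filtering over a short burst of consecutive depth frames, treating zero readings as invalid; the frame stacking and the zero-as-missing convention are assumptions, not details taken from the thesis.

    import numpy as np

    def temporal_median_depth(depth_frames):
        # depth_frames: list of (H, W) uint16 depth images from consecutive captures
        stack = np.stack(depth_frames).astype(np.float32)
        stack[stack == 0] = np.nan              # assumed convention: 0 means no depth reading
        fused = np.nanmedian(stack, axis=0)     # per-pixel median over time
        return np.nan_to_num(fused, nan=0.0).astype(np.uint16)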
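The rough pose computation from predicted keypoints can be sketched with OpenCV's PnP solver wrapped in RANSAC, which matches the described use of random sample consensus to suppress wrongly predicted keypoints; the specific solver call, reprojection threshold, and iteration count are assumptions.

    import numpy as np
    import cv2

    def rough_pose_from_keypoints(model_kpts_3d, image_kpts_2d, K, dist=None):
        # model_kpts_3d: (N, 3) keypoint coordinates on the CAD model
        # image_kpts_2d: (N, 2) keypoint pixels predicted by the network
        if dist is None:
            dist = np.zeros(5)
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            np.asarray(model_kpts_3d, dtype=np.float64),
            np.asarray(image_kpts_2d, dtype=np.float64),
            K, dist, reprojectionError=3.0, iterationsCount=200)
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)              # rotation vector -> 3x3 rotation matrix
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, tvec.ravel()
        return T, inliers                        # rough object-to-camera transform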
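Extracting the target point cloud from the depth map under the predicted instance mask amounts to back-projecting the masked depth pixels through the pinhole model; the depth scale and mask format in this sketch are assumptions.

    import numpy as np

    def mask_to_point_cloud(depth, mask, K, depth_scale=0.001):
        # depth: (H, W) uint16 depth image aligned to the color image
        # mask:  (H, W) boolean instance mask predicted by the network
        fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        v, u = np.nonzero(mask & (depth > 0))    # valid pixels inside the mask
        z = depth[v, u].astype(np.float32) * depth_scale   # assumed scale: mm -> m
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=1)       # (M, 3) points in the camera frame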
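The final refinement of the rough pose by registering the model point cloud to the scene point cloud could, for example, use point-to-point ICP as implemented in Open3D; the thesis does not name a specific registration library, so this is only one plausible realization under assumed correspondence-distance settings.

    import numpy as np
    import open3d as o3d

    def refine_pose_icp(model_points, scene_points, rough_pose, max_dist=0.01):
        # model_points: (N, 3) points sampled from the CAD model surface
        # scene_points: (M, 3) points back-projected from the masked depth pixels
        src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(model_points))
        tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(scene_points))
        result = o3d.pipelines.registration.registration_icp(
            src, tgt, max_dist, rough_pose,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return result.transformation             # refined 4x4 object-to-camera pose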
Keywords/Search Tags:robot picking, pose estimation, keypoint detection, instance segmentation, point cloud registration