
Research On Robot Automatic Grasping Technology Of Complex And Stacked Scene

Posted on: 2023-01-07
Degree: Doctor
Type: Dissertation
Country: China
Candidate: L Wu
Full Text: PDF
GTID: 1528307172953409
Subject: Materials Processing Engineering
Abstract/Summary:
Robots can significantly improve the automation and intelligence level of industrial production by taking over grasping work from human laborers. Due to its lack of environmental awareness, the traditional robotic grasping system cannot meet the needs of automatically grasping complex parts in stacked scenarios. Because 3D vision offers high precision, rich perceptual information, and high deployment flexibility, 3D vision-based robot grasping technology, formed by combining 3D vision with the robot, has become the mainstream direction of robot grasping in complex scenes. This technology uses point cloud features to obtain feature matching results, and 3D object recognition methods are introduced to estimate the object pose in the 3D sensor coordinate system. The robot hand-eye transformation matrix is then used to obtain the object pose in the robot coordinate system, so as to guide the robot to grasp the object. However, cluttered scenes and 3D data noise result in low descriptiveness and robustness of the local features extracted from 3D point clouds; industrial parts are complex and stacked on each other, resulting in low recognition accuracy and computational efficiency; and robot kinematic errors and joint wear lead to low accuracy of the hand-eye parameters. To address these challenges, this dissertation conducts in-depth research on 3D local feature description, robust and efficient 3D object recognition, and hand-eye parameter optimization, and develops a robot grasping system based on 3D vision. The specific research work is as follows:

Aiming at the poor descriptive ability and low robustness of existing 3D local feature description methods, this dissertation studies transformation-feature-based local feature description. First, a point-pair transformation feature histogram (PPTFH) local feature is designed to efficiently encode the local surface information. In addition, on the basis of key point-neighborhood point transformation features, a fast transformation feature histogram-based local feature is designed that is not only descriptive and robust but also has extremely high computational efficiency. Experimental results show that both local features designed in this dissertation achieve excellent comprehensive performance in application scenarios such as 3D object recognition, object retrieval, and point cloud registration.

Aiming at the low accuracy and computational efficiency of 3D object recognition in scenes of complex, stacked parts, this dissertation studies a 3D object recognition method based on a high-compatibility match clustering algorithm. First, a high-compatibility match clustering approach is proposed, which converts complex multi-instance recognition into simple single-target recognition at the feature level, solving the problem of multi-instance recognition in complex part-stacking scenes. Then, a point pair feature constraint is introduced to remove false matches, and a local reference frame-based pose estimation algorithm and a scene visibility-based pose verification algorithm are respectively proposed, resulting in precise object pose estimation and verification. Experimental results on two multi-instance recognition datasets show that the accuracy of the proposed method in recognizing ten instances is superior to existing methods, reaching 86%, and the computational efficiency is improved by at least 50%.

Aiming at the problem that the calibration accuracy of the hand-eye parameters is limited by the kinematic errors of robot joints, a hand-eye parameter optimization method based on pose graph optimization is proposed. By analyzing the accuracy of each pose transformation matrix in the robot hand-eye calibration model, the hand-eye parameter calibration model is refactored. Based on the fact that the camera calibration accuracy is much better than the robot's absolute accuracy, an optimization model of the camera's absolute pose is established, and the camera's absolute pose is optimized via pose graph optimization. Based on the optimized camera absolute poses, the robot hand-eye parameters are then optimized by the least squares method. Experimental results show that, compared with existing robot hand-eye parameter calibration methods, the proposed method not only improves the accuracy of the hand-eye parameters but also has strong robustness.

Based on these key techniques, an offline teaching-based robot grasping method is proposed, and an automatic robot grasping system based on 3D vision is developed. Robot grasping experiments are carried out on scattered and stacked tee pipes and steering arms. The experimental results show that the recognition success rates for the tee pipe and the steering arm reach 97.50% and 98.33%, respectively, and the corresponding grasping success rates reach 91.67% and 95.83%, which verifies the effectiveness of the developed robot grasping system and provides a technical basis for the engineering application of automatic robot grasping of complex parts in stacked scenes.
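The frame chain that underpins the whole pipeline — object pose estimated in the camera frame, hand-eye matrix mapping it into the robot base frame — can be illustrated with a minimal sketch. This is not the dissertation's implementation: it assumes an eye-to-hand setup where the hand-eye result is the camera pose in the robot base frame, and all numeric values are made up for illustration.

```python
import numpy as np

def make_transform(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical hand-eye calibration result: camera frame expressed in the
# robot base frame (eye-to-hand setup; illustrative values only).
Rz90 = np.array([[0., -1., 0.],
                 [1.,  0., 0.],
                 [0.,  0., 1.]])
T_base_cam = make_transform(Rz90, np.array([0.5, 0.0, 0.8]))

# Hypothetical object pose estimated by 3D recognition, in the camera frame.
T_cam_obj = make_transform(np.eye(3), np.array([0.1, 0.2, 0.6]))

# Chain the transforms: the object pose in the robot base frame is what
# actually guides the grasp motion.
T_base_obj = T_base_cam @ T_cam_obj
print(np.round(T_base_obj[:3, 3], 3))  # object position in base coordinates
```

Because any error in `T_base_cam` propagates directly into every grasp target, refining the hand-eye parameters (here, via pose graph optimization of the camera's absolute poses followed by least squares) directly improves grasping accuracy.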
Keywords/Search Tags:Robot grasping, Stacked scene, Point cloud local feature description, 3D object recognition, Hand-eye calibration