
Research On Target Recognition And Pose Acquisition Technology For Robot Assembly

Posted on: 2022-02-01
Degree: Master
Type: Thesis
Country: China
Candidate: G L Li
Full Text: PDF
GTID: 2518306512470544
Subject: Mechanical and electrical engineering
Abstract/Summary:
With the widespread application of industrial robots, the level of China's manufacturing industry is improving rapidly. However, traditional teach-and-playback industrial robots can no longer adapt to multi-variety, small-batch production and assembly, and vision-equipped intelligent industrial robots have gradually become the main direction of development for manufacturing enterprises. In actual production, recognizing the workpiece and acquiring its pose are the prerequisites for a robot to grasp and assemble products autonomously and efficiently. Existing target recognition and pose estimation methods are affected by factors such as the uniform texture of workpieces, similar shapes, and environmental noise, so their accuracy and efficiency still need to be improved. To this end, this thesis takes assembly parts from industrial production as the research object, combines point clouds sampled from part CAD models with visual point cloud data, and studies target recognition and pose acquisition methods based on a depth camera. The main work is as follows:

(1) Calibration of the KinectV2 vision system and preprocessing of point cloud data. Based on the calibration principle of the KinectV2 vision system, the camera is calibrated with Zhang's planar calibration method. Because the scene point cloud is large, unevenly distributed, and noisy, point cloud filtering is applied to down-sample the visual point cloud, remove the cluttered background, and eliminate noise outliers; RANSAC segmentation and Euclidean-distance-based clustering are then used to segment the visual point clouds of the parts in the scene. This preprocessing provides a sound data foundation for the subsequent target recognition and pose acquisition.

(2) Target recognition based on CAD model point clouds. A point cloud model library is built from the CAD models of the parts by placing virtual cameras around each model. To address the tendency of the existing VFH global feature descriptor to misrecognize different parts observed under similar poses, an improved VFH feature descriptor is proposed, and a kd-tree-based nearest neighbor search is used to retrieve the visual point cloud in the CAD model point cloud library. Experiments show that the overall recognition accuracy of the proposed algorithm for different types of workpieces is 84.5%, and reaches 100% for large shooting angles in the range [58.28°, 90°]; for workpieces of the same type but different sizes, the overall recognition accuracy is 54.6%, rising to 77.5% in the range [58.28°, 90°]. Finally, based on the simulation data, a model relating the length and size distortion of the visual point cloud of an optical-axis part to the shooting angle is established, providing a theoretical basis for the conclusions of the simulation experiments.
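For reference, the following is a minimal sketch of Zhang's planar calibration as named in (1), written here with OpenCV; the checkerboard geometry, number of views, and image file names are illustrative assumptions rather than the settings used in the thesis.

```cpp
// Zhang's planar calibration with OpenCV (illustrative parameters).
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <string>
#include <vector>

int main()
{
    const cv::Size patternSize(9, 6);     // inner corners of the board (assumed)
    const float squareSize = 0.025f;      // 25 mm squares (assumed)

    // Planar 3D corner coordinates (Z = 0), reused for every view.
    std::vector<cv::Point3f> boardCorners;
    for (int r = 0; r < patternSize.height; ++r)
        for (int c = 0; c < patternSize.width; ++c)
            boardCorners.emplace_back(c * squareSize, r * squareSize, 0.0f);

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    cv::Size imageSize;

    for (int i = 0; i < 20; ++i) {        // 20 board poses (assumed)
        cv::Mat img = cv::imread("board_" + std::to_string(i) + ".png",
                                 cv::IMREAD_GRAYSCALE);
        if (img.empty())
            continue;
        imageSize = img.size();

        std::vector<cv::Point2f> corners;
        if (!cv::findChessboardCorners(img, patternSize, corners))
            continue;
        cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                                          30, 0.001));
        imagePoints.push_back(corners);
        objectPoints.push_back(boardCorners);
    }

    // Intrinsic matrix, distortion coefficients, and per-view extrinsics.
    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     cameraMatrix, distCoeffs, rvecs, tvecs);
    (void)rms;                            // RMS reprojection error in pixels
    return 0;
}
```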
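The preprocessing chain in (1) maps closely onto standard Point Cloud Library (PCL) components. The sketch below covers voxel-grid down-sampling, statistical outlier removal, RANSAC removal of the work-surface plane, and Euclidean clustering; all leaf sizes, thresholds, and cluster parameters are assumed values for illustration, not the thesis's settings.

```cpp
// Scene point cloud preprocessing with PCL (illustrative parameters).
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/io.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/search/kdtree.h>
#include <vector>

using CloudT = pcl::PointCloud<pcl::PointXYZ>;

std::vector<CloudT::Ptr> preprocess(const CloudT::Ptr& scene)
{
    // 1. Voxel-grid down-sampling to reduce the raw KinectV2 point count.
    CloudT::Ptr down(new CloudT);
    pcl::VoxelGrid<pcl::PointXYZ> voxel;
    voxel.setInputCloud(scene);
    voxel.setLeafSize(0.003f, 0.003f, 0.003f);   // 3 mm leaf (assumed)
    voxel.filter(*down);

    // 2. Statistical outlier removal to suppress depth noise.
    CloudT::Ptr denoised(new CloudT);
    pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
    sor.setInputCloud(down);
    sor.setMeanK(50);
    sor.setStddevMulThresh(1.0);
    sor.filter(*denoised);

    // 3. RANSAC plane fit: remove the dominant work-table plane.
    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
    pcl::ModelCoefficients::Ptr coeff(new pcl::ModelCoefficients);
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.005);             // 5 mm (assumed)
    seg.setInputCloud(denoised);
    seg.segment(*inliers, *coeff);

    CloudT::Ptr objects(new CloudT);
    pcl::ExtractIndices<pcl::PointXYZ> extract;
    extract.setInputCloud(denoised);
    extract.setIndices(inliers);
    extract.setNegative(true);                   // keep everything except the plane
    extract.filter(*objects);

    // 4. Euclidean clustering: one cluster per candidate part.
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    tree->setInputCloud(objects);
    std::vector<pcl::PointIndices> clusterIndices;
    pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
    ec.setClusterTolerance(0.01);                // 1 cm (assumed)
    ec.setMinClusterSize(200);
    ec.setSearchMethod(tree);
    ec.setInputCloud(objects);
    ec.extract(clusterIndices);

    std::vector<CloudT::Ptr> clusters;
    for (const auto& idx : clusterIndices) {
        CloudT::Ptr c(new CloudT);
        pcl::copyPointCloud(*objects, idx, *c);
        clusters.push_back(c);
    }
    return clusters;
}
```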
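For the recognition step in (2), the sketch below uses PCL's standard VFH descriptor together with a kd-tree nearest-neighbor search over a library of per-view signatures; the thesis's improved VFH descriptor would replace the feature-computation step, and the normal radius and neighbor count are assumed values.

```cpp
// Global-descriptor retrieval: VFH signature + kd-tree lookup (illustrative).
#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/vfh.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/search/kdtree.h>
#include <vector>

using CloudT   = pcl::PointCloud<pcl::PointXYZ>;
using NormalsT = pcl::PointCloud<pcl::Normal>;
using VFHT     = pcl::PointCloud<pcl::VFHSignature308>;

// Compute one VFH signature for a single segmented part cluster.
pcl::VFHSignature308 computeVFH(const CloudT::Ptr& cluster)
{
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

    NormalsT::Ptr normals(new NormalsT);
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setInputCloud(cluster);
    ne.setSearchMethod(tree);
    ne.setRadiusSearch(0.01);          // 1 cm normal radius (assumed)
    ne.compute(*normals);

    VFHT::Ptr vfh(new VFHT);
    pcl::VFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> est;
    est.setInputCloud(cluster);
    est.setInputNormals(normals);
    est.setSearchMethod(tree);
    est.compute(*vfh);                 // VFH is global: exactly one signature
    return vfh->points[0];
}

// Retrieve the k most similar model views from the descriptor library,
// one signature per virtual-camera view of a CAD model.
std::vector<int> retrieve(const VFHT::Ptr& library,
                          const pcl::VFHSignature308& query,
                          int k = 5)
{
    pcl::KdTreeFLANN<pcl::VFHSignature308> kdtree;
    kdtree.setInputCloud(library);

    std::vector<int>   indices(k);
    std::vector<float> sqrDistances(k);
    kdtree.nearestKSearch(query, k, indices, sqrDistances);
    return indices;                    // indices into the model-view library
}
```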
(3) Pose estimation of target parts based on point cloud registration. To address the tendency of the ICP algorithm to fall into local optima when the initial poses of the two point clouds differ greatly, the SAC-IA and NDT coarse registration algorithms are each combined with ICP for point cloud registration. Simulation results show that the SAC-IA + ICP combination outperforms the NDT + ICP combination in both registration error and run time; registration of actual parts with SAC-IA + ICP gives an average root mean square error of 0.951 mm and an average run time of 809 ms, which meets the accuracy and real-time requirements of actual assembly.

(4) KinectV2 target recognition and pose acquisition experiments for assembly. Visual point clouds are collected with the KinectV2 to verify the accuracy of the recognition algorithm, and the causes of misrecognition in the results are analyzed. The transformation between the camera coordinate frame and the robot base frame is established through a KinectV2 camera calibration experiment. Based on a relative measurement method, the robot gripper is subjected to arbitrary rotation and translation transformations, and the relative pose between the estimates before and after each transformation is computed and compared with the actual relative pose to verify the accuracy of the pose acquisition algorithm. The experiments show that the pose acquisition algorithm has a maximum translation error of 3.178 mm and a maximum rotation error of 1.433°.
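The coarse-to-fine registration in (3) can be sketched with PCL as follows; SAC-IA is paired here with FPFH features (a common choice, assumed rather than confirmed by the abstract), and the radii, iteration counts, and convergence thresholds are illustrative.

```cpp
// SAC-IA coarse alignment followed by ICP refinement (illustrative parameters).
#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>
#include <pcl/registration/ia_ransac.h>
#include <pcl/registration/icp.h>
#include <pcl/search/kdtree.h>

using CloudT = pcl::PointCloud<pcl::PointXYZ>;
using FPFHT  = pcl::PointCloud<pcl::FPFHSignature33>;

FPFHT::Ptr computeFPFH(const CloudT::Ptr& cloud)
{
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setInputCloud(cloud);
    ne.setSearchMethod(tree);
    ne.setRadiusSearch(0.01);                       // normal radius (assumed)
    ne.compute(*normals);

    FPFHT::Ptr features(new FPFHT);
    pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
    fpfh.setInputCloud(cloud);
    fpfh.setInputNormals(normals);
    fpfh.setSearchMethod(tree);
    fpfh.setRadiusSearch(0.025);                    // must exceed the normal radius
    fpfh.compute(*features);
    return features;
}

// Align the CAD model point cloud onto the measured scene cluster.
Eigen::Matrix4f registerClouds(const CloudT::Ptr& model, const CloudT::Ptr& scene)
{
    // Coarse alignment: SAC-IA on FPFH correspondences.
    pcl::SampleConsensusInitialAlignment<pcl::PointXYZ, pcl::PointXYZ,
                                         pcl::FPFHSignature33> sacia;
    sacia.setInputSource(model);
    sacia.setSourceFeatures(computeFPFH(model));
    sacia.setInputTarget(scene);
    sacia.setTargetFeatures(computeFPFH(scene));
    sacia.setMinSampleDistance(0.01f);
    sacia.setMaximumIterations(500);
    CloudT::Ptr coarse(new CloudT);
    sacia.align(*coarse);
    Eigen::Matrix4f initial = sacia.getFinalTransformation();

    // Fine alignment: ICP starting from the SAC-IA estimate.
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(coarse);
    icp.setInputTarget(scene);
    icp.setMaximumIterations(100);
    icp.setTransformationEpsilon(1e-8);
    CloudT::Ptr fine(new CloudT);
    icp.align(*fine);

    // Total model-to-scene pose = ICP refinement composed with the SAC-IA guess.
    return icp.getFinalTransformation() * initial;
}
```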
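For the experiment in (4), the sketch below shows, with hypothetical Eigen helper functions, how the calibrated camera-to-base transform maps a registration result into the robot base frame and how the relative-measurement check compares estimated and commanded gripper motions; none of these function names come from the thesis.

```cpp
// Coordinate chaining and the relative-measurement check (hypothetical helpers).
#include <Eigen/Geometry>
#include <utility>

// Part pose in the robot base frame:
//   T_base_part = T_base_cam (camera-robot calibration) * T_cam_part (registration result).
Eigen::Isometry3f partPoseInBase(const Eigen::Isometry3f& T_base_cam,
                                 const Eigen::Isometry3f& T_cam_part)
{
    return T_base_cam * T_cam_part;
}

// Relative pose between two estimates, i.e. the estimated motion between two
// acquisitions, to be compared with the commanded gripper motion.
Eigen::Isometry3f relativePose(const Eigen::Isometry3f& T_before,
                               const Eigen::Isometry3f& T_after)
{
    return T_before.inverse() * T_after;
}

// Translation error (metres) and rotation error (radians) between the
// estimated and the commanded relative motions.
std::pair<float, float> poseError(const Eigen::Isometry3f& estimated,
                                  const Eigen::Isometry3f& commanded)
{
    const Eigen::Isometry3f diff = commanded.inverse() * estimated;
    const float tErr = diff.translation().norm();
    const float rErr = Eigen::AngleAxisf(diff.rotation()).angle();
    return {tErr, rErr};
}
```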
Keywords/Search Tags: Intelligent assembly, 3D point cloud, CAD model, Target recognition, Point cloud registration, Pose estimation, KinectV2 depth camera