The efficiency and accuracy of 3D object recognition and pose estimation directly determine the efficiency and stability of robotic grasping, which is crucial for automated robot sorting, handling, assembly, and inspection. At present, 3D object recognition and pose estimation achieve good recognition and localization results when texture information is rich and occlusion is minimal. In practical industrial applications, however, factors such as uniform material texture, uneven illumination, and severe occlusion prevent existing methods from reaching the accuracy and efficiency required for recognition. To this end, this paper studies an object recognition and pose estimation method based on point pair feature descriptors, and achieves accurate recognition and pose estimation of the target object in scene point cloud data acquired by a surface structured-light 3D device. The specific work of this paper is as follows:

(1) The point cloud preprocessing techniques for 3D object recognition are studied. Sensor noise and extraneous data arise during 3D object recognition and degrade recognition accuracy and efficiency. Therefore, for the measured scene data, a k-dimension tree (k-d tree) of the 3D point cloud is first constructed to establish the neighborhood search relationships, on which the removal of isolated points and the computation of point cloud normal vectors are realized. A grid sampling algorithm with adaptive coefficients is then used to eliminate redundant data in the measured scene. Finally, a Euclidean segmentation algorithm is studied to segment the scene data and remove irrelevant data. Preprocessing the measured 3D point cloud greatly reduces the interference of noise and irrelevant data and provides a good data foundation for recognizing the target object (see the first sketch below).

(2) The point pair feature descriptor and the removal of point pair feature mismatches are studied. The point pair feature encodes two surface points of the object and their normal vectors into a four-dimensional vector composed of the point-to-point distance and the angles involving the normals. During matching, when the normal orientations of corresponding points in the model and the scene are inconsistent, the constructed point pair features are prone to large errors. Therefore, by unifying the normal orientations of the model and the scene, the model-to-scene matching error is reduced, and the accuracy of object recognition and pose estimation is improved (see the second sketch below).

(3) The construction of the point pair feature set is studied. For the scene data, point pair features are constructed between pairs of points. If there are too many reference points, the massive number of point pair features reduces recognition efficiency; if there are too few reference points, recognition accuracy suffers. Therefore, the construction of the point pair feature set is studied: the scene data is uniformly sampled to obtain reference points, and the size of the model bounding box is used as a radius constraint, so that each reference point constructs point pair features only with points within that radius, which significantly reduces the number of point pair features (see the third sketch below). Experiments verify that reference point sampling greatly improves recognition efficiency without affecting recognition accuracy.
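
A minimal sketch of the preprocessing step in (1), assuming numpy and scipy are available; the function name, neighborhood size, and outlier threshold are illustrative assumptions, not the thesis implementation (the adaptive grid sampling and Euclidean segmentation steps are omitted here).

```python
import numpy as np
from scipy.spatial import cKDTree

def preprocess(scene, k=20, std_ratio=2.0):
    """Estimate normals and remove isolated points using k-d tree neighborhoods."""
    tree = cKDTree(scene)                      # k-d tree for neighborhood search
    dists, idx = tree.query(scene, k=k + 1)    # k nearest neighbors (column 0 is the point itself)

    # Normal estimation: the eigenvector of the neighborhood covariance matrix
    # with the smallest eigenvalue approximates the local surface normal (PCA).
    normals = np.empty_like(scene)
    for i in range(len(scene)):
        nbrs = scene[idx[i, 1:]]
        _, eigvec = np.linalg.eigh(np.cov(nbrs.T))
        normals[i] = eigvec[:, 0]              # eigenvector of the smallest eigenvalue

    # Isolated-point removal: discard points whose mean neighbor distance lies
    # far above the global average (a simple statistical outlier filter).
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return scene[keep], normals[keep]
```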
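
A minimal sketch of the four-dimensional point pair feature and the normal orientation unification described in (2), following the usual distance-and-angle formulation; the function names, the assumption of unit-length normals, and the choice of the sensor origin as the common viewpoint are illustrative assumptions.

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """F = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)), with d = p2 - p1.
    Assumes n1 and n2 are unit normals."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist < 1e-12:
        return None                            # degenerate pair: coincident points
    d_hat = d / dist

    def angle(a, b):
        # Clip guards against rounding errors just outside [-1, 1].
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

    return np.array([dist, angle(n1, d_hat), angle(n2, d_hat), angle(n1, n2)])

def orient_normals(points, normals, viewpoint=np.zeros(3)):
    """Flip each normal to point toward the viewpoint; applying the same convention
    to the model and the scene keeps their point pair features consistent."""
    flip = np.einsum('ij,ij->i', viewpoint - points, normals) < 0
    normals[flip] *= -1
    return normals
```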
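
A minimal sketch of the constrained point pair feature set construction in (3), reusing point_pair_feature from the previous sketch; the uniform sampling step and the use of the model bounding box diagonal as the radius are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_ppf_set(scene, normals, model, sample_step=5):
    """Build point pair features only between sampled reference points and their
    neighbors within a radius derived from the model bounding box size."""
    radius = np.linalg.norm(model.max(axis=0) - model.min(axis=0))  # bounding box diagonal
    refs = np.arange(0, len(scene), sample_step)   # uniformly sampled reference points
    tree = cKDTree(scene)

    features = []
    for ri in refs:
        # Pair the reference point only with scene points inside the radius,
        # instead of with every other scene point.
        for j in tree.query_ball_point(scene[ri], r=radius):
            if j == ri:
                continue
            f = point_pair_feature(scene[ri], normals[ri], scene[j], normals[j])
            if f is not None:
                features.append((ri, j, f))
    return features
```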