The three-dimensional point cloud is an important visual data format that has seen wide application in robotics, reverse engineering, and autonomous driving. Pose estimation of 3D point clouds is one of the key goals of point cloud analysis, yet 3D object recognition and pose estimation based on point clouds still face many difficulties. For example, the point cloud of an object acquired by a 3D vision sensor may be incomplete due to partial line-of-sight occlusion, or correspondences may be impossible to establish because of the complex structure of the object; both problems complicate the analysis and recognition of point cloud data. An important prerequisite for solving the pose estimation problem is stable point cloud registration, since the accuracy of the registration result directly affects all subsequent pose estimation steps. On the other hand, the unstructured format of point cloud data also poses challenges for methods such as deep learning. In two-dimensional image processing, deep learning has achieved breakthroughs in many fields of computer vision and in many respects now outperforms traditional methods. The two-dimensional vision task corresponding to point cloud registration is image matching; point cloud registration can be regarded as the three-dimensional extension of that problem.

At present, there are two main types of point cloud registration algorithms: transformation estimation algorithms that solve directly from the global distribution of the points, and feature matching algorithms based on local feature extraction. The former directly estimates the transformation parameters between a pair of point clouds from their global distributions, while the latter proceeds in several steps: key point
detection, feature description extraction, feature matching, and transformation estimation.

This paper brings deep learning methods into the field of 3D vision and proposes a deep-learning-based approach to 3D point cloud pose estimation. The proposed algorithm comprises the following work:

1. Interest point aggregation based on region growing. The saliency of each point is computed from its neighborhood covariance matrix, growth seeds are selected according to the ISS algorithm, and the interest point set is then obtained by the region growing algorithm.

2. Feature extraction based on multi-scale analysis. Within the determined interest point set, taking each growth point as the center, features are extracted at multiple k-nearest-neighbor scales and then mixed to obtain a basic mixed feature for each point. At the same time, to support the secondary feature extraction of the subsequent MLP module, neighborhood information is further defined for each point, making it convenient as input to the neural network.

3. Neural-network-based 3D point cloud pose estimation. Building on the RPMNet and DCPNet networks, the pipeline involves several modules: MLP secondary feature extraction, Transformer-based feature supplementation, and generation of a similarity matching matrix, which is then optimized by Sinkhorn iteration; finally, SVD over the corresponding points yields the transformation matrix.

Finally, comparative experiments on the proposed algorithm, with analysis of the results on data sets widely used in the point cloud field, demonstrate the effectiveness and robustness of the point cloud pose estimation algorithm.
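The ISS-style saliency used in the first step ranks each point by the eigenvalues of its neighborhood covariance matrix: a point is salient when the sorted eigenvalues decay sharply. A minimal NumPy sketch of that test; the radius, ratio thresholds, and brute-force neighbor search are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def iss_saliency(points, radius, gamma21=0.975, gamma32=0.975):
    """Per-point ISS saliency: the smallest eigenvalue of the neighborhood
    covariance, kept only where the eigenvalue-ratio tests pass."""
    saliency = np.zeros(len(points))
    for i, p in enumerate(points):
        # Brute-force radius neighborhood (a k-d tree would be used in practice).
        nbrs = points[np.linalg.norm(points - p, axis=1) < radius]
        if len(nbrs) < 3:
            continue
        cov = np.cov(nbrs.T)
        lam = np.sort(np.linalg.eigvalsh(cov))[::-1]  # lam1 >= lam2 >= lam3
        # Eigenvalue-ratio tests reject flat or isotropic neighborhoods.
        if lam[1] / (lam[0] + 1e-12) < gamma21 and lam[2] / (lam[1] + 1e-12) < gamma32:
            saliency[i] = lam[2]
    return saliency
```

Points passing both ratio tests would then serve as growth seeds for the region growing step.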
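The multi-scale mixing of the second step can be illustrated with a simple hand-crafted stand-in: compute a covariance-eigenvalue signature at several k-nearest-neighbor scales around a growth point and concatenate the results. The eigenvalue descriptor and the scale choices here are assumptions for illustration only; the paper's actual features are learned:

```python
import numpy as np

def multiscale_features(points, center_idx, scales=(8, 16, 32)):
    """Concatenate covariance-eigenvalue descriptors computed over several
    k-nearest-neighbor scales around one interest point."""
    d = np.linalg.norm(points - points[center_idx], axis=1)
    order = np.argsort(d)                       # neighbors sorted by distance
    feats = []
    for k in scales:
        nbrs = points[order[:k]]                # k-nearest-neighbor scale
        lam = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))[::-1]
        lam = lam / (lam.sum() + 1e-12)         # scale-normalized eigenvalues
        feats.append(lam)
    return np.concatenate(feats)                # mixed multi-scale feature
```

Stacking such per-point features, together with the fixed-size neighborhood information, gives the regular tensor input a neural network expects.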
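The Sinkhorn iteration in the third step alternately normalizes the rows and columns of the similarity matrix so that it approaches a doubly stochastic soft matching matrix. A log-domain sketch, assuming a plain square score matrix (RPMNet's variant additionally appends slack rows and columns to absorb outliers):

```python
import numpy as np

def sinkhorn(log_scores, n_iters=20):
    """Alternate row and column normalization in log space, driving the
    exponentiated score matrix toward a doubly stochastic matching matrix."""
    log_p = log_scores
    for _ in range(n_iters):
        log_p = log_p - np.logaddexp.reduce(log_p, axis=1, keepdims=True)  # rows sum to 1
        log_p = log_p - np.logaddexp.reduce(log_p, axis=0, keepdims=True)  # cols sum to 1
    return np.exp(log_p)
```

Working in log space avoids the overflow and underflow that plague the naive multiplicative form when scores are sharp.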
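The final SVD step is the classic weighted Kabsch solution: given (possibly soft) correspondences, the least-squares rigid transform comes from the SVD of the weighted cross-covariance matrix. A self-contained sketch:

```python
import numpy as np

def estimate_rigid_transform(src, dst, weights=None):
    """Weighted least-squares rigid transform (R, t) with dst ~ R @ src + t,
    solved via SVD of the weighted cross-covariance matrix."""
    w = np.ones(len(src)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    src_mean = (w[:, None] * src).sum(axis=0)      # weighted centroids
    dst_mean = (w[:, None] * dst).sum(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    H = (w[:, None] * src_c).T @ dst_c             # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

With soft correspondences from the Sinkhorn step, each row's matching weight plays the role of `weights`, so uncertain matches contribute less to the estimated transform.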