Driven by practical applications such as autonomous driving, robot navigation, and virtual/mixed reality, 3D vision has become one of the research hotspots in artificial intelligence in recent years. Among the various forms of 3D data, point clouds have been widely used in 3D scene understanding tasks because of their simple form and their ability to preserve the original geometric information of 3D space. However, due to occlusion between objects and differences in the reflectivity of target surface materials during acquisition, collected point cloud data are often sparse and incomplete: part of the shape structure is missing, which results in a loss of geometric and semantic information. Since incomplete point clouds degrade subsequent 3D scene understanding tasks, high-precision restoration of incomplete point clouds is necessary. With the growth of computing power, the availability of large datasets, and the development of efficient neural network architectures, deep-learning-based point cloud processing has gradually replaced traditional methods and become the mainstream research direction. Among deep-learning-based point cloud completion methods, it remains difficult for a completion network to generate high-quality complete shapes when the missing rate of the point cloud is large. At the same time, for the partially missing and sparse raw point clouds collected by 3D sensors, generating a complete, dense point cloud with good fine-grained features and uniform distribution is the key to downstream tasks. To address these problems, this thesis constructs a dual-feature fusion point cloud completion network and a dual-scale point cloud completion network based on high-frequency feature fusion. The main research contents are as follows:

(1) Point cloud data in the real world are often affected by occlusion and light reflection, which makes the data incomplete, and point clouds with large missing regions cause large deviations in downstream tasks. This thesis proposes a Dual Feature Fusion Network (DFF-Net) to improve completion accuracy for large missing regions of a point cloud. First, a dual feature encoder is designed to extract the global and local features of the input point cloud. The two kinds of features are then fused and fed into a decoder that directly generates a point cloud of the missing region while retaining local details. To make the generated point cloud more detailed, a loss function with multiple terms is employed to emphasize the distribution density and visual quality of the generated points, achieving a better completion effect. Extensive quantitative and qualitative experiments show that DFF-Net outperforms previous state-of-the-art (SOTA) methods on large-region point cloud completion.

(2) The raw point cloud data collected by 3D sensors are usually incomplete and sparse, which seriously affects downstream applications based on point clouds. By combining the advantages of point-level and voxel-level feature aggregation, this thesis proposes a dual-scale point cloud completion network (DSNet) based on high-frequency feature fusion. DSNet uses a global feature analysis network at the voxel scale to extract global shape relationships and a local branch at the point scale to compute detailed shape features. Specifically, a high-frequency fusion module is first designed for feature alignment, enabling cross-scale interaction between point and voxel features. Then, following a coarse-to-fine generation strategy, a point cloud refinement module is designed to refine the local features of the sparse point cloud, finally obtaining a dense point cloud with uniform distribution and fine-grained characteristics. Quantitative and qualitative experimental results show that DSNet can effectively reconstruct high-fidelity dense point clouds, outperforming current state-of-the-art completion methods.
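The multi-term loss used by DFF-Net in (1) is not spelled out here; as a minimal illustrative sketch of the kind of terms commonly combined in point cloud completion, the following pairs a symmetric Chamfer distance (shape fidelity) with a simple uniformity penalty on nearest-neighbor spacing (distribution density). The function names and the weighting parameter `lam` are assumptions for illustration, not the thesis's actual formulation:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3)."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def uniformity_penalty(p):
    """Variance of nearest-neighbor distances; small when points are evenly spread."""
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # ignore self-distances
    return d.min(axis=1).var()

def completion_loss(pred, gt, lam=0.1):
    """Illustrative multi-term loss: shape fidelity plus distribution uniformity."""
    return chamfer_distance(pred, gt) + lam * uniformity_penalty(pred)
```

When the predicted cloud matches the ground truth exactly, the Chamfer term vanishes and only the uniformity penalty remains, which is the sense in which such a loss also rewards evenly distributed output points.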
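The dual-scale design in (2) couples a voxel-scale global branch with a point-scale local branch. As an illustrative sketch (not DSNet itself), the voxel-scale input can be obtained by occupancy voxelization of the partial point cloud; the `resolution` parameter and the binary-occupancy choice are assumptions made for this example:

```python
import numpy as np

def voxelize(points, resolution=16):
    """Map an (N, 3) point cloud to a binary occupancy grid of shape (r, r, r).

    Illustrative preprocessing for a voxel-scale branch; DSNet's actual
    voxel feature aggregation is more involved than plain occupancy.
    """
    mins = points.min(axis=0)
    extent = points.max(axis=0) - mins + 1e-8      # avoid division by zero
    idx = np.floor((points - mins) / extent * resolution).astype(int)
    idx = np.clip(idx, 0, resolution - 1)          # boundary points fall in the last voxel
    grid = np.zeros((resolution,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```

In a dual-scale pipeline of this kind, the global branch would consume `grid` (e.g. with 3D convolutions) while the point-scale branch processes `points` directly, and a fusion module such as the high-frequency fusion module described above aligns the two feature streams.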