With the development of artificial intelligence technology, semantic segmentation of 3D data has gained widespread attention. Compared with tasks such as object classification, object detection and object recognition, semantic segmentation is a higher-level task that provides a deeper understanding of complex scenes and is widely used in fields such as autonomous driving and augmented reality. Most existing semantic segmentation network models focus on small-scale point clouds, but in practical application scenarios point clouds are often large, containing millions of points. To address this problem, this thesis proposes a semantic segmentation network model that can efficiently process large-scale point cloud data; the network aggregates the local features of large-scale point clouds effectively by building a local feature module, and finally achieves the segmentation and extraction of the semantic information of large-scale point clouds. The main research work is as follows.

First, this thesis discusses the characteristics of different 3D data representations and analyzes the performance of deep-learning theories relevant to semantic segmentation. The structures and principles of existing point cloud semantic segmentation network models are then analyzed, laying the theoretical foundation for the subsequent design of the modules and the semantic segmentation model for processing large-scale point cloud features.

Secondly, to process large-scale point clouds efficiently, the downsampling strategy of the network model must have low computational cost and small memory consumption. The time and memory consumption of six sampling methods, including farthest point sampling and random sampling, are analyzed, and the experimental results show that the low computational complexity and small memory footprint of random sampling make it more suitable for large-scale point cloud semantic segmentation.

To better aggregate large-scale point cloud features, a local feature module is built, which consists of three sub-modules. The local feature encoding module uses a neighborhood search to find the K nearest points in Euclidean space and performs relative position encoding, adding redundant geometric information to each point's features. To avoid losing key point cloud features when random sampling is applied, an attention pooling module is built to learn the important point features autonomously and aggregate the feature information effectively; a residual network module is then used to enlarge the receptive field of each point so that local feature information is better preserved.

Then, the semantic segmentation network model for processing large-scale point clouds is built. It follows an encoder-decoder structure with skip connections: the input point cloud is downsampled several times, the local feature module (composed of the local feature encoding module, the attention pooling module and the residual network module) aggregates point cloud feature information at each stage, multilayer perceptrons extract large-scale point cloud features at multiple scales, and finally fully connected layers output the semantic category of each point in the large-scale point cloud.
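To make the local feature encoding and attention pooling operations concrete, the following is a minimal PyTorch-style sketch. It is not the thesis code: the tensor shapes, layer sizes, K value and helper names (knn_indices, gather_neighbours) are illustrative assumptions.

```python
# Minimal sketch of the two core operations described above: relative position
# encoding over K nearest neighbours, followed by attention pooling.
import torch
import torch.nn as nn


def knn_indices(xyz: torch.Tensor, k: int) -> torch.Tensor:
    """Indices of the K nearest neighbours of every point: (B, N, K)."""
    dist = torch.cdist(xyz, xyz)                      # (B, N, N) pairwise distances
    return dist.topk(k, dim=-1, largest=False).indices


def gather_neighbours(x: torch.Tensor, idx: torch.Tensor) -> torch.Tensor:
    """Gather neighbour features: x (B, N, C), idx (B, N, K) -> (B, N, K, C)."""
    B, N, C = x.shape
    K = idx.shape[-1]
    idx_flat = idx.reshape(B, N * K).unsqueeze(-1).expand(-1, -1, C)
    return x.gather(1, idx_flat).reshape(B, N, K, C)


class LocalFeatureEncoding(nn.Module):
    """Augment each point with redundant relative-position information of its K neighbours."""

    def __init__(self, d_out: int, k: int = 16):
        super().__init__()
        self.k = k
        # 10 = centre xyz (3) + neighbour xyz (3) + relative xyz (3) + distance (1)
        self.mlp = nn.Sequential(nn.Linear(10, d_out), nn.ReLU())

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        idx = knn_indices(xyz, self.k)                   # (B, N, K)
        neigh_xyz = gather_neighbours(xyz, idx)          # (B, N, K, 3)
        centre = xyz.unsqueeze(2).expand_as(neigh_xyz)   # (B, N, K, 3)
        rel = centre - neigh_xyz                         # relative position
        dist = rel.norm(dim=-1, keepdim=True)            # relative distance
        pos_enc = self.mlp(torch.cat([centre, neigh_xyz, rel, dist], dim=-1))
        neigh_feats = gather_neighbours(feats, idx)      # (B, N, K, C)
        return torch.cat([pos_enc, neigh_feats], dim=-1)  # (B, N, K, d_out + C)


class AttentionPooling(nn.Module):
    """Learn per-neighbour attention scores and aggregate the K neighbour features."""

    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.score_fn = nn.Linear(d_in, d_in, bias=False)
        self.mlp = nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU())

    def forward(self, neigh_feats: torch.Tensor) -> torch.Tensor:
        scores = torch.softmax(self.score_fn(neigh_feats), dim=2)  # attention over K
        pooled = (scores * neigh_feats).sum(dim=2)                 # (B, N, d_in)
        return self.mlp(pooled)                                    # (B, N, d_out)


if __name__ == "__main__":
    xyz = torch.rand(2, 1024, 3)         # a toy batch of 1024 points
    feats = torch.rand(2, 1024, 8)       # toy per-point input features
    enc = LocalFeatureEncoding(d_out=8, k=16)
    pool = AttentionPooling(d_in=16, d_out=32)
    out = pool(enc(xyz, feats))
    print(out.shape)                     # torch.Size([2, 1024, 32])
```

In this sketch the attention pooling replaces a max or average pool: the learned softmax scores let the network weight informative neighbours more heavily, which is how the module compensates for features discarded by random downsampling.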
Finally, experimental analysis is performed on two datasets, S3DIS and SemanticKITTI, with cross-validation on S3DIS. The experiments show that the large-scale point cloud semantic segmentation network model designed in this thesis is feasible. Its performance is compared with other known point cloud semantic segmentation network models on the S3DIS and SemanticKITTI datasets, and the results show that the proposed network model achieves significantly improved performance. The contribution of each module is analyzed through ablation experiments on the SemanticKITTI dataset, which show that the local feature module designed in this thesis is effective for large-scale point cloud processing. Finally, the segmentation results of the proposed large-scale point cloud semantic segmentation network model are visualized and analyzed; the visualization results show intuitively that the local feature module and the semantic segmentation model built in this thesis can process large-scale point clouds effectively.