In recent years, with the rapid development of artificial intelligence technology, a large number of application scenarios have become intelligent and automated. Among them, autonomous driving has attracted great interest. Self-driving cars must accurately perceive their surrounding environments for autonomous route planning and obstacle avoidance. LiDAR (Light Detection and Ranging) devices can acquire 3D data of the surrounding scene, compensating for the limitations of optical cameras. Therefore, LiDAR point cloud semantic segmentation, which predicts pointwise semantic categories, is crucial for autonomous driving. However, the complex and diverse environments of autonomous driving scenarios, the large variety of object scales, and the differing spatial characteristics of point clouds collected by different devices make accurate and robust semantic segmentation of LiDAR point clouds difficult. This thesis addresses a series of problems in the semantic segmentation of LiDAR point clouds and proposes several robust and generalized techniques. The main content is as follows:

(1) Robust semantic segmentation for multi-scale targets in LiDAR point clouds. Objects and background stuff vary significantly in size and structure in complex scenes. This thesis proposes a point cloud semantic segmentation method that combines voxel-based and point-based approaches. First, a scale-adaptive fusion mechanism is designed within a voxel-based framework to selectively fuse multi-scale features and exploit information at different levels. Further, a local point refinement module is designed to aggregate neighborhood information of the point cloud and compensate for the geometric details lost during voxelization. As validated on several public datasets, the proposed method improves segmentation accuracy for objects of various scales and produces fine-grained point cloud semantic segmentation results.

(2) Domain adaptation for LiDAR point cloud semantic segmentation. To address the problem that semantic segmentation models trained under full supervision cannot be effectively transferred to other datasets, this thesis analyzes the underlying reasons and proposes an unsupervised domain adaptation method that combines simulated data sampling and self-training. First, a data alignment method based on simulated scanning is proposed to resample the labeled source-domain data by imitating the sampling pattern of the target-domain data; the simulated data is then used to train the segmentation model to fit the characteristics of the target domain. Second, a self-training method that combines hybrid scene training with category-aware rectification is proposed to guide the model to adapt to the contextual relationships of the target domain. The proposed method is validated on two public datasets and significantly improves the segmentation accuracy of the LiDAR point cloud semantic segmentation model on the target domain.
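To make the two main ideas more concrete, the following are minimal illustrative sketches, not the implementations described in this thesis; all module names, layer sizes, and hyperparameters are assumptions chosen for clarity.

The first sketch shows one plausible form of scale-adaptive feature fusion: features from several scales, already interpolated to the same set of points, are combined with learned per-point, per-scale weights rather than simple concatenation.

```python
# Sketch of scale-adaptive fusion (illustrative; not the thesis architecture).
import torch
import torch.nn as nn


class ScaleAdaptiveFusion(nn.Module):
    def __init__(self, channels: int, num_scales: int):
        super().__init__()
        # Small gating network that scores each scale for every point.
        self.gate = nn.Sequential(
            nn.Linear(channels * num_scales, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, num_scales),
        )

    def forward(self, feats_per_scale):
        # feats_per_scale: list of [N, C] tensors, one per scale,
        # already aligned to the same N points.
        stacked = torch.stack(feats_per_scale, dim=1)           # [N, S, C]
        weights = self.gate(stacked.flatten(1))                 # [N, S]
        weights = torch.softmax(weights, dim=1).unsqueeze(-1)   # [N, S, 1]
        return (weights * stacked).sum(dim=1)                   # [N, C]


if __name__ == "__main__":
    fusion = ScaleAdaptiveFusion(channels=64, num_scales=3)
    feats = [torch.randn(1000, 64) for _ in range(3)]
    print(fusion(feats).shape)  # torch.Size([1000, 64])
```

The second sketch illustrates the general idea behind category-aware pseudo-label selection for self-training: each class receives its own confidence threshold (here a per-class quantile), so rare classes are not suppressed by a single global cutoff. The function name, the quantile rule, and the `ignore_index` convention are assumptions, not the thesis's rectification procedure.

```python
# Sketch of category-aware pseudo-label selection (illustrative).
import torch


def select_pseudo_labels(probs: torch.Tensor,
                         quantile: float = 0.7,
                         ignore_index: int = -1) -> torch.Tensor:
    """probs: [N, K] softmax scores for N target-domain points over K classes."""
    conf, labels = probs.max(dim=1)                  # [N], [N]
    pseudo = torch.full_like(labels, ignore_index)
    for k in range(probs.shape[1]):
        mask = labels == k
        if mask.any():
            # Class-specific threshold: q-th quantile of this class's confidences.
            thr = torch.quantile(conf[mask], quantile)
            pseudo[mask & (conf >= thr)] = k
    return pseudo


if __name__ == "__main__":
    probs = torch.softmax(torch.randn(1000, 19), dim=1)  # e.g. 19 semantic classes
    pl = select_pseudo_labels(probs)
    print((pl != -1).float().mean())  # fraction of points kept for self-training
```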
In summary, this thesis focuses on the core topic of semantic segmentation of LiDAR point clouds, which is essential for practical applications. A series of effective approaches is proposed for different requirements, including improving semantic segmentation performance with multi-scale feature fusion and local point refinement, and improving the generalization capability of segmentation models with simulated scanning and self-training. All of them are valuable for related research in point cloud segmentation and facilitate the development and wide application of autonomous driving technology.