
Outdoor Scene Understanding Based On 3D Laser Point Clouds

Posted on: 2019-07-09
Degree: Master
Type: Thesis
Country: China
Candidate: Y F Gu
Full Text: PDF
GTID: 2370330566484561
Subject: Pattern Recognition and Intelligent Systems
Abstract/Summary:
Scene understanding from 3-D laser point cloud data is an important research topic in computer vision. This thesis studies the classification of outdoor 3-D laser point cloud data and proposes two classification methods.

First, the thesis proposes the construction of the Panorama Bearing Angle (PBA) image. The 3-D laser point cloud is projected onto a sphere centered at the viewpoint, establishing a correspondence between laser ranging points and image pixels; the gray value of each pixel is then computed from the relative spatial positions of the laser ranging points. The PBA image overcomes the limitation of traditional graph-model constructions that rely on an ordered input of the 3-D laser point cloud, mitigates the problems of flat gray-level gradients and weak detail rendering, and thereby supports the subsequent feature extraction and image segmentation.

In the first classification method, an image pyramid model is used to extract texture features of PBA images at multiple scales. A "point cloud pyramid" model, derived from the image pyramid, extracts local features of the 3-D laser point cloud at multiple scales. A random forest classifier then screens the extracted high-dimensional features and performs an initial classification of the 3-D laser point cloud data. After the initial classification, superpixel segmentation is applied to the PBA images; within each superpixel block, points are reclassified based on the initial results, correcting some misclassified points and further improving the classification accuracy.

In the second classification method, a convolutional neural network automatically extracts 3-D laser point cloud features. Multiple viewing angles are selected on the horizontal plane containing the viewpoint, and each laser ranging point is projected onto the image plane of each viewing angle according to the pinhole camera model. A specific color-mapping algorithm then generates a two-dimensional image for each viewing angle, and this set of images serves as the input to a fully convolutional network. After semantic segmentation of the images, the results are mapped back into the 3-D laser point cloud to obtain the category label of each laser ranging point. Compared with neural networks that process 3-D laser point clouds directly, this approach has significant advantages in efficiency and resource utilization.

Both classification algorithms are validated on the 3-D laser point cloud datasets published by ETH Zurich and MINES ParisTech, and the results are compared with those of other research groups. The comparison shows that the two proposed methods offer clear advantages in both classification accuracy and classification efficiency.
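The spherical projection underlying the PBA image can be sketched as follows. This is a minimal illustration, not the thesis implementation: the grid resolution (64x360), the variable names, and the choice to store the range as the pixel value are all assumptions; the actual PBA gray value is derived from the relative geometry of neighboring laser ranging points, which is only noted in a comment here.

```python
import numpy as np

def panorama_projection(points, viewpoint, h=64, w=360):
    """Project 3-D points onto a spherical grid centered at the viewpoint.

    Returns an (h, w) image holding the range of the last point that
    falls into each pixel. The PBA gray value would instead be computed
    from the relative positions of neighboring points (sketch only).
    """
    p = points - viewpoint                       # shift viewpoint to origin
    r = np.linalg.norm(p, axis=1)                # range of each point
    azim = np.arctan2(p[:, 1], p[:, 0])          # azimuth in [-pi, pi]
    elev = np.arcsin(np.clip(p[:, 2] / np.maximum(r, 1e-9), -1.0, 1.0))
    # map the two angles to integer pixel indices
    col = ((azim + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    row = ((elev + np.pi / 2) / np.pi * (h - 1)).astype(int)
    img = np.zeros((h, w))
    img[row, col] = r
    return img
```

Each laser ranging point thus receives a fixed pixel coordinate, which is the correspondence the PBA construction relies on for later feature extraction and for mapping segmentation results back to the cloud.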
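The feature-screening step of the first method can be sketched with a random forest, whose impurity-based feature importances rank the high-dimensional features before the initial classification. This is an illustrative sketch, not the thesis code: the estimator count, the `keep` parameter, and the retraining-on-top-features strategy are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def screen_and_classify(X_train, y_train, X_test, keep=20):
    """Rank features with a random forest, keep the most important ones,
    then retrain on the reduced feature set for the initial classification."""
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(X_train, y_train)
    # feature screening: indices of the `keep` most important features
    top = np.argsort(rf.feature_importances_)[::-1][:keep]
    rf2 = RandomForestClassifier(n_estimators=100, random_state=0)
    rf2.fit(X_train[:, top], y_train)
    return rf2.predict(X_test[:, top]), top
```

Screening before classification keeps the per-point feature vectors compact, which matters when the multi-scale pyramid features become high-dimensional.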
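The pinhole projection and inverse label mapping of the second method can be sketched as below. This is a hedged sketch under standard pinhole conventions: the camera intrinsics `K`, pose `(R, t)`, and the use of `-1` for points not visible in a view are all illustrative choices, and the color-mapping and FCN stages are omitted.

```python
import numpy as np

def project_points(points, K, R, t, h, w):
    """Pinhole projection of 3-D points; returns pixel coordinates and a
    visibility mask (points in front of the camera and inside the image)."""
    cam = (R @ points.T + t.reshape(3, 1)).T     # world -> camera frame
    front = cam[:, 2] > 0
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective divide
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    inside = front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return u, v, inside

def back_project_labels(label_img, u, v, inside, n_points):
    """Map per-pixel semantic labels back to the laser ranging points;
    points outside this view keep the placeholder label -1."""
    labels = np.full(n_points, -1)
    labels[inside] = label_img[v[inside], u[inside]]
    return labels
```

Because the projection indices are stored per point, the semantic segmentation produced on each 2-D view can be transferred back to the cloud without re-running any network on the raw points, which is the efficiency argument made above.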
Keywords/Search Tags: 3D Laser Point Cloud, Multi-Scale Feature Extraction, Scene Understanding, Fully Convolutional Neural Network, Semantic Segmentation