As important bearing bodies of earthquake disasters, buildings carry seismic damage information that is not only a key indicator of disaster severity but also an essential basis for scientific earthquake emergency rescue and disaster assessment. For the complex scenes of a disaster area, LiDAR point clouds can quickly capture high-precision three-dimensional surface information and rich spatial structure information of ground objects. They should not serve merely as auxiliary data for satellite optical images; the seismic damage information of buildings contained in the point clouds should also be mined directly. At present, most methods that extract building seismic damage information using only point clouds as the data source still rely on traditional step-by-step classification and extraction, which can hardly meet the timeliness requirements of emergency rescue or the demand for automatic, intelligent recognition of remote sensing features. Deep learning, as an intelligent data processing technology, can quickly train models from big data, recognize complex patterns, automatically extract important and suitable features, discard factors irrelevant to discrimination, and replace manual feature engineering. It is therefore well suited to the high-density, high-precision, large-volume characteristics of point clouds. However, 3D deep learning has so far seen few applications to target classification and recognition in large-scale geological and seismic disaster point cloud scenes. Therefore, aiming at the identification of seismically damaged buildings in post-earthquake point clouds, this paper applies 3D deep learning to build models for single-building seismic damage classification and for post-earthquake point cloud scene segmentation. Based on the PointNet and PointNet++ 3D deep learning networks and using airborne LiDAR point cloud data acquired over Haiti after the 2010 earthquake, this paper carries out extraction experiments on building seismic damage information. The main research work is as follows:

(1) Combining traditional point-cloud-based extraction of seismically damaged buildings with 3D point cloud deep learning for object classification and recognition, this paper analyzes the feasibility of using 3D point cloud deep learning to classify and recognize damaged buildings, compares current point cloud deep learning datasets for single-object classification and semantic segmentation, and focuses on the PointNet and PointNet++ network structures.

(2) Constructing datasets for seismic damage identification of buildings: the nDSM point cloud is obtained from the LiDAR data of the experimental area by CSF filtering and normalization preprocessing. Combining the building damage characteristics extracted from pre- and post-earthquake point cloud data and images, damaged buildings are divided into three damage types: fully collapsed, partially collapsed, and uncollapsed. Building point clouds in the experimental area are selected and labeled, and the sample dataset is organized into HDF5 and Pickle formats. The seismic damage point cloud scene is divided into five categories: background, vegetation, fully collapsed, partially collapsed, and uncollapsed. Appropriate scenes are selected from the preprocessed point cloud data and labeled, and the experimental data are likewise organized into HDF5 and Pickle formats.

(3) In the seismic damage building classification experiment, based on the characteristics of the PointNet++ network and the shapes of the collapsed and partially collapsed point cloud samples, a sample enhancement method combining inverse distance interpolation, symmetry, and top projection is proposed. After enhancing the fully collapsed and partially collapsed samples, not only does the number of such samples increase, but the samples also become more comprehensive and diverse, and the imbalance in sample numbers is resolved. The classification accuracies for fully collapsed and partially collapsed buildings increase by about 30% and 20%, respectively; the average classification accuracy and kappa coefficient of the model increase by more than 10%; and the classification accuracy gaps between fully collapsed and uncollapsed, and between partially collapsed and uncollapsed, are reduced from 40% and 30% to about 15%.

(4) In the point-based scene segmentation experiment, the segmentation results of PointNet and PointNet++ on the same dataset are compared, and the networks' pooling method is modified to compare the overall segmentation accuracy of max pooling, mean pooling, and combined pooling. The segmentation accuracy of combined pooling is about 3% higher than that of the other two, although the overall difference is not obvious. The overall segmentation accuracy of PointNet++ is about 4% higher than that of PointNet, and its visualized results identify target boundaries more accurately. In addition, the single-building classification model and the scene segmentation model are compared on the same verification scene. The single-building classification model performs well on fully collapsed, partially collapsed, and uncollapsed buildings. Although the PointNet and PointNet++ segmentation of the four target ground objects is not completely consistent with the original scene, the visualized results are still satisfactory. The experiments also show that applying 3D deep learning to seismic damage point cloud scenes is feasible.
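The pooling variants compared in (4) differ only in the symmetric function that reduces per-point features to one global descriptor. A minimal NumPy sketch of the three aggregations (function names are illustrative, not taken from the thesis; in the actual networks these operate on learned per-point feature maps):

```python
import numpy as np

def max_pool(features):
    # PointNet-style symmetric aggregation: element-wise max over the point axis
    return features.max(axis=0)

def mean_pool(features):
    # average over the point axis instead of taking the max
    return features.mean(axis=0)

def combined_pool(features):
    # concatenate the max- and mean-pooled descriptors into one vector
    return np.concatenate([features.max(axis=0), features.mean(axis=0)])

# toy per-point features: N = 3 points, C = 2 channels
feats = np.array([[1.0, 4.0],
                  [3.0, 2.0],
                  [2.0, 6.0]])
print(max_pool(feats))       # [3. 6.]
print(mean_pool(feats))      # [2. 4.]
print(combined_pool(feats))  # [3. 6. 2. 4.]
```

Because each function is invariant to the ordering of the rows, any of them can serve as the permutation-invariant aggregation a point cloud network requires; combined pooling simply doubles the descriptor length by keeping both statistics.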