
Research and System Implementation of 3D Point Cloud Scene Understanding Based on Feature Fusion

Posted on: 2022-04-22
Degree: Master
Type: Thesis
Country: China
Candidate: T Li
Full Text: PDF
GTID: 2518306341954069
Subject: Electronics and Communications Engineering
Abstract/Summary:
The classification and segmentation tasks of 3D point cloud scene understanding are the basis of 3D point cloud data analysis and are widely used in autonomous driving, cultural relic protection, surveying and mapping, medical detection, and other practical applications. With the continuous development of deep learning and 3D point cloud acquisition technology, point cloud classification and segmentation methods have attracted extensive research. Although most existing methods address the disordered, unstructured, and information-incomplete nature of point clouds, they still fail to fully exploit fine-grained local geometric information and contextual feature information, leaving problems such as "apparently similar targets cannot be effectively distinguished", "small targets are mis-segmented", and "segmentation edges are rough". Aiming at these problems, this paper studies 3D point cloud scene understanding based on feature fusion, realizes the shape classification, part segmentation, and semantic segmentation tasks of scene understanding, and focuses on point cloud shape features, local geometric features, spatial context information, and feature fusion. The main work of this paper is as follows:

1. A feature extraction method fusing fine-grained multi-scale information. To address the failure to distinguish apparently similar targets caused by inadequate mining of fine-grained local geometric features in existing methods, this paper proposes a feature extraction method that fuses fine-grained multi-scale information. First, fine-grained multi-scale features of local regions are learned through a graph attention convolutional layer. Then, a spatial attention mechanism weights the importance of features at different scales and fuses the multi-scale features into more discriminative fine-grained local features. Experiments show that this method fully exploits the fine-grained local characteristics of 3D point cloud models, enhances the network's fine-classification ability, and generally improves accuracy on the shape classification, part segmentation, and semantic segmentation tasks of 3D scene understanding.

2. A feature fusion method based on contextual attention RNN encoding. To address the small-target mis-segmentation and rough segmentation edges caused by the lack of fine-grained local context information in existing methods, this paper proposes a feature fusion method based on contextual attention RNN encoding. First, the context information between different local regions is captured by a bidirectional long short-term memory network. Then, an improved grouping attention mechanism highlights the importance of different local features, and the local region features are weighted and fused into global semantic features containing fine-grained local geometric information and context information. Finally, to minimize the training error of the model, the network's loss function considers both classification loss and similarity loss. Experiments show that this method fully captures the correlations between local regions, obtains fine-grained spatial context and edge information of the point cloud, better separates overlapping objects, and improves the performance of point cloud classification and segmentation.
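The two fusion modules above are only described at a high level in this abstract. As a rough illustration, the following minimal PyTorch sketch shows one plausible way to wire them together; it is not the thesis implementation, and the layer sizes, the two neighbourhood scales (k = 16 and k = 32), the per-point gating standing in for "spatial attention", and the attention-pooling classification head are all assumptions made for this sketch.

```python
# Illustrative sketch only: NOT the thesis code. Module names, dimensions, and
# neighbourhood sizes are assumptions chosen for readability.
import torch
import torch.nn as nn


def knn(xyz, k):
    """Indices of the k nearest neighbours of every point. xyz: (B, N, 3)."""
    dist = torch.cdist(xyz, xyz)                         # (B, N, N) pairwise distances
    return dist.topk(k, dim=-1, largest=False).indices   # (B, N, k)


class GraphAttentionConv(nn.Module):
    """Edge convolution whose neighbour aggregation is weighted by learned attention."""

    def __init__(self, in_dim, out_dim, k):
        super().__init__()
        self.k = k
        self.edge_mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())
        self.att = nn.Linear(out_dim, 1)                  # one attention score per edge

    def forward(self, xyz, feat):                         # feat: (B, N, C)
        B = feat.shape[0]
        idx = knn(xyz, self.k)                            # (B, N, k) neighbour indices
        batch = torch.arange(B, device=feat.device).view(B, 1, 1)
        nbr = feat[batch, idx]                            # neighbour features, (B, N, k, C)
        center = feat.unsqueeze(2).expand_as(nbr)
        edge = self.edge_mlp(torch.cat([center, nbr - center], dim=-1))  # (B, N, k, D)
        w = torch.softmax(self.att(edge), dim=2)          # attention over the k neighbours
        return (w * edge).sum(dim=2)                      # (B, N, D)


class MultiScaleFusion(nn.Module):
    """Local features at two neighbourhood scales, fused with per-point scale weights
    (a stand-in for the 'spatial attention' fusion of contribution 1)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.small = GraphAttentionConv(in_dim, out_dim, k=16)
        self.large = GraphAttentionConv(in_dim, out_dim, k=32)
        self.gate = nn.Sequential(nn.Linear(2 * out_dim, 2), nn.Softmax(dim=-1))

    def forward(self, xyz, feat):
        fs, fl = self.small(xyz, feat), self.large(xyz, feat)
        w = self.gate(torch.cat([fs, fl], dim=-1))        # (B, N, 2) per-point scale weights
        return w[..., :1] * fs + w[..., 1:] * fl          # (B, N, out_dim)


class ContextAttentionEncoder(nn.Module):
    """Bi-LSTM over per-point local features plus attention pooling, a rough
    stand-in for the contextual-attention RNN encoding of contribution 2."""

    def __init__(self, dim, hidden, num_classes):
        super().__init__()
        self.rnn = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)             # attention over local regions
        self.cls = nn.Linear(2 * hidden, num_classes)

    def forward(self, local_feat):                        # (B, N, dim)
        ctx, _ = self.rnn(local_feat)                     # (B, N, 2*hidden) context features
        alpha = torch.softmax(self.score(ctx), dim=1)     # (B, N, 1) region weights
        global_feat = (alpha * ctx).sum(dim=1)            # weighted fusion into a global feature
        return self.cls(global_feat)


if __name__ == "__main__":
    xyz = torch.rand(2, 1024, 3)                          # a toy batch of 1024-point clouds
    backbone = MultiScaleFusion(in_dim=3, out_dim=64)
    head = ContextAttentionEncoder(dim=64, hidden=128, num_classes=40)
    logits = head(backbone(xyz, xyz))                     # coordinates double as input features
    print(logits.shape)                                   # torch.Size([2, 40])
```

The sketch fixes an ordering of the points for the Bi-LSTM and omits the improved grouping attention and the similarity-loss term described above, which the abstract does not specify in enough detail to reproduce.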
3. Design and implementation of a 3D point cloud scene understanding system. Based on the point cloud classification and segmentation algorithms proposed in this paper, the requirements analysis and overall design of a feature-fusion-based 3D point cloud scene understanding system are carried out. VScode, Vue, Python, and Qt are used to implement the user login, data processing, model training, model testing, result display, and other modules.

The overall accuracy of the proposed network on the shape classification task is 92.9%, which is 3.7% and 2.2% higher than PointNet and PointNet++, respectively, and 0.3% higher than A-CNN, Point2Sequence, and AGCN. On the part segmentation task, the mean instance intersection-over-union (IoU) reaches 85.6%, which is 0.5% and 0.4% higher than DGCNN and Point2Sequence, respectively. On the semantic segmentation task, the mean IoU reaches 66.5%, which is 8.7% and 9.9% higher than Grid-GCN and AGCN, respectively. In conclusion, this paper proposes a robust and feasible method for 3D point cloud scene understanding that effectively alleviates problems such as "apparently similar targets cannot be effectively distinguished", "small targets are mis-segmented", and "segmentation edges are rough" in complex scenes, and finally designs and implements a 3D point cloud scene understanding system with a friendly interface and simple operation.
Keywords/Search Tags:3D point cloud scene understanding, feature fusion, graph attention network, attention mechanism, bi-directional long short-term memory