3D model classification is one of the fundamental problems in 3D data processing. Its results can be applied to many other 3D data processing tasks, such as 3D reconstruction, 3D segmentation, and 3D registration; research on 3D model classification therefore has important theoretical significance and practical application value. This paper surveys the background of 3D model classification, the state of research at home and abroad, and the different representations and classification methods for 3D models. Based on view features and shape distribution features, it studies a 3D model classification method that combines the GoogLeNet network, convolutional neural networks (CNNs), an attention mechanism, and softmax. The main research content comprises the following parts:

1. A view-based 3D model classification method. The 3D model is preprocessed and projected into 2D views; GoogLeNet extracts features from these views, which are combined with a softmax function to classify the model.

2. A 3D model classification method based on view and shape distribution features. D1, D2, D3, Fourier, and Zernike shape distribution features represent the shape of each 2D view. GMNet, composed of a 1D CNN and GoogLeNet, is proposed: the 1D CNN extracts deep features from the D1, D2, D3, Fourier, and Zernike shape distribution features, GoogLeNet extracts view features, and the two are fused for 3D model classification.

3. A 3D model classification method based on an attention mechanism, multiple features, and multiple neural networks. The CBAM (Convolutional Block Attention Module) attention mechanism is introduced into GoogLeNet, and the standard convolutions in GoogLeNet are replaced with depthwise separable convolutions. An STN (Spatial Transformer Network) processes the 2D views, and a 1D CNN extracts global shape distribution features; the resulting features are fused for 3D model classification.

The proposed networks are trained and tested on ModelNet40, where the three methods reach classification accuracies of 90.24%, 90.60%, and 93.63%, respectively.
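The projection step in the view-based method can be illustrated with a minimal, self-contained sketch. This is not the thesis's actual rendering pipeline (which would typically rasterize the mesh surface); it is an illustrative orthographic depth projection of a point cloud, assumed to be normalized into [-1, 1], from several azimuth angles around the vertical axis. All function names and parameters here are hypothetical.

```python
import math

def render_views(points, n_views=12, size=32):
    """Project a normalized point cloud into n_views orthographic depth
    images, rotating the viewpoint around the y axis -- a stand-in for
    the step that produces 2D views for a 2D CNN such as GoogLeNet."""
    views = []
    for v in range(n_views):
        a = 2 * math.pi * v / n_views
        ca, sa = math.cos(a), math.sin(a)
        img = [[0.0] * size for _ in range(size)]
        for x, y, z in points:
            # Rotate the point about the y axis, then drop the depth axis.
            xr = ca * x + sa * z
            zr = -sa * x + ca * z
            col = min(int((xr + 1) / 2 * size), size - 1)
            row = min(int((y + 1) / 2 * size), size - 1)
            depth = (zr + 1) / 2  # map depth to [0, 1]
            img[row][col] = max(img[row][col], depth)  # keep nearest point
        views.append(img)
    return views

# A single point at the origin lands in the center pixel of every view.
views = render_views([(0.0, 0.0, 0.0)])
```

Each depth image could then be replicated to three channels and fed to an ImageNet-style backbone.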
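The D1/D2/D3 descriptors mentioned in the second method follow the classic shape-distribution idea: build a histogram over a geometric measurement of randomly sampled points (for D2, the distance between random point pairs). A minimal sketch of the D2 case, with illustrative sample counts and bin sizes, might look like this:

```python
import math
import random

def d2_distribution(points, n_pairs=20000, n_bins=32, seed=0):
    """D2 shape distribution: a normalized histogram of Euclidean
    distances between randomly sampled point pairs, scaled by the
    maximum observed distance so the descriptor is scale-invariant."""
    rng = random.Random(seed)
    dists = [math.dist(rng.choice(points), rng.choice(points))
             for _ in range(n_pairs)]
    dmax = max(dists) or 1.0  # guard against a degenerate point set
    hist = [0] * n_bins
    for d in dists:
        hist[min(int(d / dmax * n_bins), n_bins - 1)] += 1
    return [h / n_pairs for h in hist]  # bins sum to 1

# Example: D2 descriptor of the eight corners of a unit cube.
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
desc = d2_distribution(cube)
```

The resulting fixed-length vector (here 32 bins) is the kind of 1D signal the 1D CNN in GMNet would consume; D1 and D3 differ only in the measurement (distance to the centroid, triangle area of point triples).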
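The fusion-and-softmax step shared by all three methods can be sketched in a few lines. This illustrates late fusion by concatenation followed by one linear layer and softmax; the weights here are hypothetical placeholders, not trained parameters, and the real networks fuse learned deep features rather than raw vectors.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_and_classify(view_feat, shape_feat, weights, biases):
    """Concatenate a view-feature vector with a shape-distribution
    feature vector, apply one linear layer, and return class
    probabilities via softmax."""
    fused = view_feat + shape_feat  # simple concatenation fusion
    logits = [sum(w * f for w, f in zip(row, fused)) + b
              for row, b in zip(weights, biases)]
    return softmax(logits)

# Tiny example: 2 view features + 2 shape features, 2 classes.
probs = fuse_and_classify([1.0, 0.0], [0.0, 1.0],
                          weights=[[1, 0, 0, 0], [0, 0, 0, 1]],
                          biases=[0.0, 0.0])
```

In the actual networks this role is played by fully connected layers after the GoogLeNet and 1D-CNN feature extractors, trained end to end on ModelNet40.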