With the rapid development of deep learning, the technology has been applied across many fields of research. Image classification is a classic task in this area. Completing image classification with deep learning generally requires large-scale datasets; when such datasets cannot be obtained, training of classification models is constrained. Researchers address model training with insufficient sample images through few-shot learning. The graph neural network is an important model for few-shot learning: a feature extraction network extracts features from the few-shot images, these features serve as nodes in a graph convolutional network, message propagation between nodes is carried out through graph convolution operations, and classification is performed using the feature information held in the nodes.

First, to address information loss during image feature extraction, the feature extraction network of the graph neural network classification model is improved. A capsule attention network is designed by combining a capsule network with a global attention mechanism; the capsule attention network and a convolutional network together constitute the feature extraction module of the classification model. The extracted image feature vectors are fed into the nodes of the graph convolutional network, and classification predictions are produced after information propagation. Compared with previous few-shot classification models, the improved capsule attention graph convolutional network achieves a measurable improvement in classification accuracy.

Second, during node updates in the graph convolutional network, updating nodes with only the most recently computed adjacency matrix makes the classification model unstable, and the model overfits as the number of network layers changes and the number of training iterations increases. Borrowing the idea of residual networks, the previous layer's adjacency matrix is added with a certain weight when computing the current layer's adjacency matrix, which mitigates the instability and overfitting of graph convolutional networks of different depths during training. Experimental comparison shows that the improved residual graph convolutional network improves both accuracy and stability on few-shot image classification tasks.

Third, to address the low efficiency of information transfer in the graph convolution operation, the idea of the graph attention mechanism is adopted: a residual graph self-attention network is designed on the basis of the residual graph convolutional network and the self-attention mechanism. Adding a self-attention mechanism after the graph convolution operation fully exploits the relationships between input node features and speeds up feature propagation between nodes. The improved residual graph self-attention network completes the information interaction within the graph composed of image feature nodes, so that relationships between nodes are mined more thoroughly, and classification accuracy is improved to a certain extent.
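The few-shot pipeline described above (image features as graph nodes, message passing through a similarity-based adjacency) can be sketched minimally as follows. This is an illustrative numpy sketch, not the thesis implementation; the similarity-softmax adjacency and the single linear transform are assumptions standing in for the learned components.

```python
import numpy as np

def normalized_adjacency(features):
    # Dense adjacency from pairwise feature similarity (negative squared
    # Euclidean distance through an exponential), row-normalized so it
    # acts as a propagation matrix. A common choice in few-shot GNNs;
    # the exact similarity function here is an assumption.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    a = np.exp(-d2)
    return a / a.sum(axis=1, keepdims=True)

def graph_conv(features, weight):
    # One message-passing step: aggregate neighbor features through the
    # adjacency, apply a linear transform, then a ReLU nonlinearity.
    adj = normalized_adjacency(features)
    return np.maximum(adj @ features @ weight, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))       # 5 support/query images, 8-dim features
w = rng.normal(size=(8, 8)) * 0.1 # stand-in for a learned weight matrix
h = graph_conv(x, w)
print(h.shape)                    # one layer of updated node features
```

In the full model this step is stacked several times, and the final node features are read out for classification.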
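The capsule attention idea (capsule vectors reweighted by a global attention score) could be sketched as below. This is only a plausible reading of the design: the `squash` nonlinearity is the standard capsule-network one, while the length-based global attention scoring is a hypothetical stand-in for the mechanism actually used.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Standard capsule "squash": preserves each vector's direction while
    # compressing its length into [0, 1).
    sq = (s ** 2).sum(axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

def global_attention(capsules):
    # Hypothetical global attention over capsules: score each capsule by
    # its length, softmax the scores, and reweight the capsules so the
    # pooled representation emphasizes the most activated ones.
    lengths = np.linalg.norm(capsules, axis=-1)
    e = np.exp(lengths - lengths.max())
    w = e / e.sum()
    return capsules * w[:, None]

rng = np.random.default_rng(1)
caps = squash(rng.normal(size=(6, 16)))  # 6 capsules, 16-dim each
out = global_attention(caps)
```

In the described model, the reweighted capsule output and a convolutional branch together form the feature extraction module.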
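The residual adjacency update (adding the previous layer's adjacency matrix with a certain weight when computing the current layer's) reduces, in a minimal sketch, to a weighted blend followed by re-normalization. The blend weight `alpha` is an assumed hyperparameter name.

```python
import numpy as np

def residual_adjacency(a_prev, a_new, alpha=0.5):
    # Residual-style update: blend the freshly computed adjacency with
    # the previous layer's at weight alpha, then re-normalize rows so
    # the result remains a valid propagation matrix. Stabilizes training
    # across different network depths, per the residual-network idea.
    a = a_new + alpha * a_prev
    return a / a.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
a_prev = np.full((4, 4), 0.25)            # uniform previous-layer adjacency
raw = rng.random((4, 4))
a_new = raw / raw.sum(axis=1, keepdims=True)
a = residual_adjacency(a_prev, a_new)
```

Without the `alpha * a_prev` term this degenerates to the plain update that, per the text, causes instability and overfitting.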
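The self-attention step applied after graph convolution can be sketched as standard scaled dot-product attention over the node features a graph-conv layer just produced: every node attends directly to every other node, rather than only through the fixed adjacency. The single-head form and the projection names `wq`, `wk`, `wv` are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(h, wq, wk, wv):
    # Scaled dot-product self-attention over node features: project to
    # queries/keys/values, score all node pairs, and mix values by the
    # softmaxed scores, speeding up information exchange between nodes.
    q, k, v = h @ wq, h @ wk, h @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

rng = np.random.default_rng(3)
h = rng.normal(size=(5, 8))          # node features from a graph-conv layer
wq, wk, wv = (rng.normal(size=(8, 8)) * 0.1 for _ in range(3))
out = self_attention(h, wq, wk, wv)
```

Because the attention scores are recomputed from the features themselves, node relationships are mined beyond what the adjacency alone encodes, which is the stated motivation for the residual graph self-attention network.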