In recent years, deep neural networks have become an important research tool in computer vision. When large training sets are available, image classification methods based on deep neural networks are highly effective. In image classification tasks with small sample sizes, however, such methods still suffer from overfitting and insufficient generalization ability. Ensemble learning is an effective way to alleviate the overfitting of deep models. Building on ensemble learning and deep learning theory, this paper studies image classification under small sample sizes and proposes three deep ensemble models, described below.

First, a Bagging Prototype Network (Bag-ProtoNet) is proposed based on the Bagging algorithm. With a limited sample size, training a deep model yields large model variance, and Bagging is effective at reducing that variance. In Bag-ProtoNet, the training set is independently resampled several times to build different training subsets, a separate prototype network is trained on each subset, and the classification results of the different networks are combined to produce the ensemble prediction. Because the training subsets differ, the prototype networks extract common sample features from different feature spaces and favor different classification tasks, which enhances the generalization of the model and thereby improves classification accuracy.

Second, a generalized adaptive boosting algorithm (GAdaBoost) is proposed, together with an inductive prototype ensemble network (GA-ProtoNet). The traditional AdaBoost algorithm feeds the error rate of the current base model into the next base model as an influence parameter, continuously reducing the loss of the base models and thus the bias of the whole ensemble. With a limited sample size, however, a deep model overfits and its bias can no longer be reduced in this way. To address this problem, this paper extends the traditional adaptive boosting algorithm to small-sample image classification, proposing GAdaBoost and, on top of it, GA-ProtoNet. This model uses the predicted probabilities of the current base model as input to the next base model, which reduces the loss, and therefore the bias, of the base models and ultimately improves the classification accuracy of the ensemble.

Third, a Transductive General Adaptive Boosting algorithm (TGAdaBoost) is proposed, together with a Transductive Adaptive Boosting Relation Network (TA-RelaNet). The traditional AdaBoost algorithm improves the generalization of a deep model only by reducing its training error; when the training sample size is small, generalization remains poor even though the training error is low. Introducing test samples at an appropriate point in training can help improve generalization. This paper therefore proposes TGAdaBoost and, based on it, a transductive relation ensemble network. Its base model uses relation networks to strengthen task relevance, and the base model is refined with the predicted probabilities on the test data so that it better fits the test distribution, which in turn improves the classification accuracy of the ensemble.

The three ensemble classification networks proposed in this paper apply not only to few-shot image classification but also to traditional small-sample image classification. Experimental results show that all three have advantages over other methods.
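The Bagging scheme described above (bootstrap subsets, one prototype-style classifier per subset, averaged predictions) can be sketched as follows. This is a minimal illustration, not the thesis's implementation: a nearest-class-mean classifier stands in for the prototype network, and all function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_prototype_classifier(X, y):
    """Nearest-class-mean stand-in for a prototype network:
    each class is represented by the mean of its training samples."""
    classes = np.unique(y)
    protos = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, protos

def predict_proba(model, X):
    """Softmax over negative squared distances to the class prototypes."""
    classes, protos = model
    d = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    z = np.exp(-d)
    return z / z.sum(axis=1, keepdims=True)

def bagging_ensemble(X, y, n_models=5):
    """Train one base classifier per bootstrap resample of the training set."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
        models.append(train_prototype_classifier(X[idx], y[idx]))
    return models

def ensemble_predict(models, X):
    """Average the per-model class probabilities, then take the argmax."""
    probs = np.mean([predict_proba(m, X) for m in models], axis=0)
    return models[0][0][probs.argmax(axis=1)]
```

Because each base classifier sees a different resample, their individual errors are partly decorrelated, and averaging their probabilities reduces the variance of the combined prediction.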
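GAdaBoost's core idea, as stated above, is to feed the current base model's predicted probabilities into the next base model. The exact update rule is not given here, so the following sketch only illustrates that chaining: each stage receives the raw features augmented with the previous stage's probabilities. A least-squares linear scorer is a hypothetical stand-in for the deep base model.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_stage(X, y, n_classes):
    """Least-squares linear scorer as a stand-in base model."""
    A = np.hstack([X, np.ones((len(X), 1))])   # add a bias column
    Y = np.eye(n_classes)[y]                    # one-hot targets
    W, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return W

def stage_proba(W, X):
    """Softmax over the stage's linear scores."""
    A = np.hstack([X, np.ones((len(X), 1))])
    z = A @ W
    z = np.exp(z - z.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def fit_chained_boost(X, y, n_classes, n_stages=3):
    """Each stage sees the raw features plus the previous stage's
    predicted probabilities, so later stages can correct earlier bias."""
    stages, feats = [], X
    for _ in range(n_stages):
        W = fit_stage(feats, y, n_classes)
        stages.append(W)
        feats = np.hstack([X, stage_proba(W, feats)])
    return stages

def predict_chained(stages, X):
    """Replay the same feature chaining at prediction time."""
    feats, probs = X, None
    for W in stages:
        probs = stage_proba(W, feats)
        feats = np.hstack([X, probs])
    return probs.argmax(axis=1)
```

Note the contrast with classical AdaBoost, which passes sample weights derived from the error rate; here the probabilities themselves become extra input features for the next stage.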
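TGAdaBoost's transductive step, refining the base model with prediction probabilities on the test data, can be illustrated with confident pseudo-labeling. This is an assumption-laden sketch, not the thesis's algorithm: the confidence threshold, the number of rounds, and the nearest-class-mean base model are all our choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_ncm(X, y):
    """Nearest-class-mean base model (stand-in for a relation network)."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def proba(model, X):
    classes, protos = model
    d = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    z = np.exp(-d)
    return z / z.sum(axis=1, keepdims=True)

def fit_transductive(X_train, y_train, X_test, n_rounds=3, thresh=0.9):
    """Refit the base model after folding in confidently pseudo-labeled
    test points, so it better fits the test distribution."""
    model = fit_ncm(X_train, y_train)
    for _ in range(n_rounds):
        classes, _ = model
        p = proba(model, X_test)
        pseudo = classes[p.argmax(axis=1)]     # predicted test labels
        conf = p.max(axis=1) > thresh          # keep only confident points
        model = fit_ncm(np.vstack([X_train, X_test[conf]]),
                        np.concatenate([y_train, pseudo[conf]]))
    return model
```

The key point the abstract makes is that the test inputs (not their labels) participate in training, which is what distinguishes the transductive TGAdaBoost from the inductive GAdaBoost.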