Cotton aphid is one of the three major pests in the cotton-growing areas of Xinjiang and causes serious losses in cotton yield. Efficient, non-destructive, and accurate monitoring of cotton aphid damage is therefore of great significance for improving cotton yield. Because traditional manual monitoring is time-consuming and labor-intensive, this study uses multi-source data, including non-imaging hyperspectral data, imaging hyperspectral data, RGB images, and ground survey data, to monitor cotton aphid damage levels. The main research contents are as follows:

For the non-imaging hyperspectral data, five spectral preprocessing methods were applied to better analyze and model the data, and three feature selection methods were used to select the most representative and informative feature subsets to improve model accuracy and efficiency. Partial least squares discriminant analysis and support vector machine classifiers were then combined with these methods to construct 30 classification models, which were compared. Among the preprocessing methods, multiplicative scatter correction (MSC) performed well, and the MSC-SPA-SVC combination performed best on the test set, with an accuracy of 85.64%.

For the imaging hyperspectral data, sample-point spectra were extracted based on the ground survey information, and five spectral preprocessing methods and three feature selection methods were applied. Partial least squares discriminant analysis and support vector machine classifiers were again combined to construct 30 classification models. Comparison of the results showed that the MSC-RF-SVC combination performed best on the test set, with an accuracy of 76%.

For the RGB images, three deep learning models for monitoring cotton aphid damage levels, namely AlexNet, ResNet34, and Swin Transformer V2, were built with the Python language and the PyTorch framework, and their results were compared and analyzed. The Swin Transformer V2 model achieved the highest classification accuracy, reaching 85.16%. On this basis, to extract image texture features and further improve model performance, the study also explored a Gabor-based classification network. The experimental results showed that the improved GA-Swin Transformer V2 model achieved an accuracy 7.33% higher than the original Swin Transformer V2 model.
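Both hyperspectral pipelines combine scatter-correction preprocessing, feature selection, and a support vector classifier. The sketch below illustrates only the MSC step and an SVC fit using scikit-learn; the synthetic data, the RBF kernel, and the hyperparameter values are illustrative assumptions, not the exact implementation evaluated in the study (which also compares PLS-DA and the SPA and RF feature selectors).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum against a
    reference (here the mean spectrum) and remove the fitted additive and
    multiplicative scatter terms."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)  # s ~ slope * ref + intercept
        corrected[i] = (s - intercept) / slope
    return corrected

# X: (n_samples, n_bands) reflectance spectra, y: aphid damage-level labels.
# Synthetic placeholders here; real samples come from the field survey points.
rng = np.random.default_rng(0)
X = rng.random((200, 150))
y = rng.integers(0, 4, size=200)

X_msc = msc(X)
X_train, X_test, y_train, y_test = train_test_split(
    X_msc, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In practice, a feature selection step (such as SPA or RF) would be applied to the MSC-corrected spectra before fitting the classifier, reducing the band set to the selected subset.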
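For the RGB-image experiments, a minimal PyTorch sketch of fine-tuning a Swin Transformer V2 classifier is shown below, assuming torchvision's tiny variant as a stand-in; the number of damage-level classes, the optimizer, and the learning rate are assumptions for illustration and are not taken from the study.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # assumed number of aphid damage levels; set to the survey's grading scheme

# Load a Swin Transformer V2 backbone (torchvision's tiny variant as a stand-in)
# and replace the classification head for the damage-level task.
model = models.swin_v2_t(weights=models.Swin_V2_T_Weights.DEFAULT)
model.head = nn.Linear(model.head.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_one_epoch(loader, device="cuda" if torch.cuda.is_available() else "cpu"):
    """One training pass; loader yields batches of RGB images and damage labels."""
    model.to(device).train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```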
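The abstract does not specify how the Gabor filters are integrated into the GA-Swin Transformer V2 network, so the sketch below only shows one common way to build a Gabor filter bank with OpenCV and extract texture responses from a grayscale image; the kernel parameters and the sample file name are hypothetical.

```python
import cv2
import numpy as np

def gabor_bank(ksize=31, sigma=4.0, lambd=10.0, gamma=0.5, n_orientations=8):
    """Build a bank of Gabor kernels at evenly spaced orientations."""
    thetas = np.arange(n_orientations) * np.pi / n_orientations
    return [cv2.getGaborKernel((ksize, ksize), sigma, t, lambd, gamma,
                               psi=0, ktype=cv2.CV_32F) for t in thetas]

def gabor_responses(gray_image, kernels):
    """Filter the image with each kernel; the stacked responses encode texture."""
    return np.stack([cv2.filter2D(gray_image, cv2.CV_32F, k) for k in kernels])

img = cv2.imread("cotton_leaf.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical sample image
features = gabor_responses(img.astype(np.float32) / 255.0, gabor_bank())
print(features.shape)  # (n_orientations, H, W) texture maps
```

The resulting texture maps could, for example, be concatenated with the RGB channels or fed to a parallel feature branch; which of these the study uses is not stated in the abstract.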