Hyperspectral remote sensing image (HRSI) classification is a hot research topic in the field of remote sensing. Currently, spectral-spatial classifiers based on deep learning have been widely studied, among which convolutional neural networks usually outperform other deep learning models thanks to their local connections and shared weights. However, the design of the network structure directly affects the performance of a classifier based on a convolutional neural network. Research shows that neural network-based hyperspectral remote sensing image classification (HRSIC) methods rely heavily on extensive training time and expert knowledge to construct network models manually, which is time-consuming and error-prone. In contrast, the neural architecture search (NAS) method has attracted increasing attention because it automatically searches for a better network architecture on a given dataset. Therefore, this paper mainly investigates automatic neural architecture search methods and applies them to hyperspectral remote sensing image classification. The work and contributions of the paper are as follows:

(1) Deep learning methods for HRSIC commonly use neighborhood cubes of the image as training data, but these cubes contain a large amount of redundant information. To address this issue, we propose an Image-based Neural Architecture Search algorithm (I-NAS). Unlike classification frameworks based on image neighborhood cubes, the proposed model takes the complete image as input to reduce the redundancy among neighborhood cubes, uses an end-to-end cell structure to enrich feature information, and obtains the final model architecture by stacking cells. Experiments on the Indian Pines and Pavia University datasets show that the overall accuracy of I-NAS reaches 94.64% and 97.45%, respectively, which is 0.39% and 0.74% higher than that of the neural architecture search algorithm based on image neighborhood cubes. Moreover, its running speed during architecture search, training, and testing is greatly improved.

(2) To address the inefficient description of hyperspectral features by existing network architectures, a SimAM attention module is incorporated into the NAS-based algorithm, yielding the SimAM Attention Neural Architecture Search algorithm (SAM-NAS). SimAM uses attention weights to highlight the linear separability of each pixel in HRSIs along both the spectral and spatial dimensions. It extracts effective and discriminative spectral-spatial features from the redundant spectral features and highly similar spatial information, which are then fed into the model to improve classification performance. Experiments on the Indian Pines and Pavia University datasets show that the overall accuracy of SAM-NAS reaches 96.27% and 98.30%, which is 1.63% and 0.85% higher than that of I-NAS, respectively.
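For concreteness, the sketch below illustrates the two ideas summarized above in PyTorch: the parameter-free SimAM attention of contribution (2), and the stacking of cells on a full-image input from contribution (1). The SimAM energy-based weighting follows the published parameter-free formulation; the `SearchedCell` and `CellStackClassifier` classes are hypothetical stand-ins for the architecture actually found by I-NAS/SAM-NAS, not the thesis implementation.

```python
import torch
import torch.nn as nn


class SimAM(nn.Module):
    """Parameter-free SimAM attention: each activation is re-weighted by an
    energy term reflecting its linear separability from its neighbors."""
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width); statistics are per channel
        b, c, h, w = x.shape
        n = h * w - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # squared deviation
        v = d.sum(dim=(2, 3), keepdim=True) / n              # channel variance
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5          # inverse energy
        return x * torch.sigmoid(e_inv)                      # re-weight features


class SearchedCell(nn.Module):
    """Hypothetical stand-in for one searched cell: a 3x3 conv block
    followed by SimAM attention (the real cell topology comes from NAS)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.op = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            SimAM(),
        )

    def forward(self, x):
        return self.op(x)


class CellStackClassifier(nn.Module):
    """Illustrative full-image classifier built by stacking cells; a 1x1
    head produces a per-pixel class map for the whole scene at once."""
    def __init__(self, bands: int, num_classes: int, width: int = 32, depth: int = 3):
        super().__init__()
        cells = [SearchedCell(bands, width)]
        cells += [SearchedCell(width, width) for _ in range(depth - 1)]
        self.cells = nn.Sequential(*cells)
        self.head = nn.Conv2d(width, num_classes, kernel_size=1)

    def forward(self, x):                # x: (batch, bands, H, W), full image
        return self.head(self.cells(x))  # (batch, num_classes, H, W)


# Example: a Pavia-University-sized input (103 bands, 9 classes)
if __name__ == "__main__":
    model = CellStackClassifier(bands=103, num_classes=9)
    scene = torch.randn(1, 103, 64, 64)
    print(model(scene).shape)  # torch.Size([1, 9, 64, 64])
```

Because SimAM derives its weights from channel statistics alone, it adds no learnable parameters, which is why it can be dropped into every candidate cell during the search without enlarging the search space.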