Owing to differing technical conditions and working principles, a single sensor cannot acquire a multispectral image with both high spatial and high spectral resolution, so multi-source remote sensing image fusion technology has emerged to meet this need. The fused image retains the spectral information of the low spatial-resolution multispectral image while incorporating the spatial information of the high spatial-resolution panchromatic image, yielding a high spatial-resolution multispectral image with better interpretability and important application value. Traditional remote sensing image fusion methods can extract only shallow image features through linear relationships, and the quality of the fused image is poor. Image fusion methods based on deep learning can automatically learn high-level feature information of the input images through the network, which improves the fusion quality of the output image; at the same time, by coupling multiple deep networks, the deep features of different images can be extracted effectively.
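To make the coupled-network idea concrete, the following is a minimal PyTorch-style sketch of two autoencoders linked by a feature mapping layer, the scheme underlying the first two contributions summarized below. All layer sizes, patch shapes, and loss weights are hypothetical, and an L1 penalty stands in for the sparsity regularizer; this is an illustrative sketch, not the exact architecture or training procedure of the thesis.

    import torch
    import torch.nn as nn

    class SparseAE(nn.Module):
        # One fully connected autoencoder; an L1 penalty on the hidden code
        # (applied in the loss below) stands in for the sparsity regularizer.
        def __init__(self, dim, hid):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(dim, hid), nn.Sigmoid())
            self.decoder = nn.Sequential(nn.Linear(hid, dim), nn.Sigmoid())

        def forward(self, x):
            h = self.encoder(x)
            return h, self.decoder(h)

    class CoupledSparseAE(nn.Module):
        # Two autoencoders coupled by a feature mapping layer between their hidden codes.
        def __init__(self, in_dim, out_dim, hid=128):
            super().__init__()
            self.ae_in = SparseAE(in_dim, hid)    # models the network input (LR MS + PAN patch)
            self.ae_out = SparseAE(out_dim, hid)  # models the desired output (HR MS patch)
            self.mapping = nn.Sequential(nn.Linear(hid, hid), nn.ReLU())

        def forward(self, x):
            h_in, x_rec = self.ae_in(x)
            h_map = self.mapping(h_in)            # map input features toward output features
            y_hat = self.ae_out.decoder(h_map)    # decode mapped features into the fused patch
            return y_hat, x_rec, h_in, h_map

    def coupled_loss(model, x, y, sparse_w=1e-4):
        mse = nn.functional.mse_loss
        y_hat, x_rec, h_in, h_map = model(x)
        h_out, y_rec = model.ae_out(y)                         # intrinsic features of the target image
        recon = mse(y_hat, y) + mse(x_rec, x) + mse(y_rec, y)  # fusion and reconstruction terms
        coupling = mse(h_map, h_out)                           # tie mapped features to target features
        sparsity = h_in.abs().mean() + h_out.abs().mean()      # L1 stand-in for the sparsity term
        return recon + coupling + sparse_w * sparsity

    # Toy end-to-end step on random patches (hypothetical sizes: 8x8 patches, 4 MS bands + 1 PAN band).
    x = torch.rand(16, 5 * 8 * 8)                 # flattened input patches
    y = torch.rand(16, 4 * 8 * 8)                 # flattened high-resolution MS target patches
    model = CoupledSparseAE(x.shape[1], y.shape[1])
    loss = coupled_loss(model, x, y)
    loss.backward()                               # backpropagation for end-to-end training
    print(float(loss))

One reading of the abstract's "initialized network" is that the two autoencoders are first trained on their own image patches before the coupled network is fine-tuned end to end with a loss of this kind.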
Focusing on the fusion of multispectral and panchromatic images, this thesis adopts deep network methods and proposes Pan-sharpening approaches based on coupled deep neural networks. The main work is as follows:

1. A Pan-sharpening method based on a coupled sparse autoencoder is proposed. The method extracts the intrinsic features of the network's input and output images with two sparse autoencoders, and a sparsity regularizer is added to the autoencoders so that the hidden-layer features better represent the image information. A feature mapping layer is then established between the two sets of intrinsic features, and the initialized network is trained end to end with the backpropagation algorithm. Experimental results show that the fusion results of the proposed method improve significantly in both visual quality and objective evaluation indicators.

2. A Pan-sharpening algorithm based on a multilayer coupled convolutional network is proposed. The algorithm extracts the intrinsic features of the input and output images with two convolutional autoencoders and uses a convolutional neural network as the feature mapping layer between the intrinsic features of the two images. The initialized network is then trained end to end with the backpropagation algorithm. Experimental results on simulated and real data show that this method not only enhances the spatial resolution of multispectral images but also preserves the spectral information of ground objects.

3. A Pan-sharpening method based on coupled multi-scale networks is proposed. Because the multispectral and panchromatic images differ in spatial resolution, the spatial information they present also differs; therefore, networks of different scales are used to extract multi-scale features from the two images, and fusion is performed at the feature level. Specifically, the initial features of the two input images are first extracted with convolution layers of different kernel sizes. The intrinsic features of the multispectral and panchromatic images at different scales are then extracted separately by two multi-scale convolution blocks with kernel factorization, and the intrinsic features of the two images at the same scale are fused. The fused features of the different scales are concatenated along the spectral dimension and fed into a convolutional neural network to generate the fused image. The effectiveness and feasibility of the proposed method are verified by experiments on simulated and real data.
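For the third method, the sketch below gives one possible reading of the multi-scale design: different initial kernel sizes for the two inputs, factorized multi-scale convolution blocks, same-scale feature fusion, and concatenation of the fused scales along the channel (spectral) dimension before reconstruction. This is again a hypothetical PyTorch sketch; the branch widths, the two scales (3 and 5), and the 1x1 fusion convolutions are assumptions, not details taken from the thesis.

    import torch
    import torch.nn as nn

    class FactorizedMultiScaleBlock(nn.Module):
        # Extracts features at two scales; each kxk convolution is factorized into 1xk and kx1 convolutions.
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.scale3 = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, (1, 3), padding=(0, 1)),
                nn.Conv2d(out_ch, out_ch, (3, 1), padding=(1, 0)),
                nn.ReLU(),
            )
            self.scale5 = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, (1, 5), padding=(0, 2)),
                nn.Conv2d(out_ch, out_ch, (5, 1), padding=(2, 0)),
                nn.ReLU(),
            )

        def forward(self, x):
            return self.scale3(x), self.scale5(x)   # one feature map per scale

    class CoupledMultiScaleFusion(nn.Module):
        # Two branches (MS and PAN) with different initial kernel sizes; features of the same scale
        # are fused, then the fused scales are concatenated along the channel (spectral) dimension.
        def __init__(self, ms_bands=4, feat=32):
            super().__init__()
            self.init_ms = nn.Conv2d(ms_bands, feat, 3, padding=1)  # smaller kernel for the MS image
            self.init_pan = nn.Conv2d(1, feat, 7, padding=3)        # larger kernel for the PAN image
            self.block_ms = FactorizedMultiScaleBlock(feat, feat)
            self.block_pan = FactorizedMultiScaleBlock(feat, feat)
            self.fuse3 = nn.Conv2d(2 * feat, feat, 1)               # same-scale feature fusion
            self.fuse5 = nn.Conv2d(2 * feat, feat, 1)
            self.reconstruct = nn.Sequential(
                nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(),
                nn.Conv2d(feat, ms_bands, 3, padding=1),            # generate the fused MS image
            )

        def forward(self, ms_up, pan):
            ms3, ms5 = self.block_ms(self.init_ms(ms_up))
            pan3, pan5 = self.block_pan(self.init_pan(pan))
            f3 = self.fuse3(torch.cat([ms3, pan3], dim=1))
            f5 = self.fuse5(torch.cat([ms5, pan5], dim=1))
            return self.reconstruct(torch.cat([f3, f5], dim=1))

    # Toy usage: MS image upsampled to the PAN size (hypothetical 64x64 patch, 4 bands).
    ms_up = torch.rand(1, 4, 64, 64)
    pan = torch.rand(1, 1, 64, 64)
    print(CoupledMultiScaleFusion()(ms_up, pan).shape)  # torch.Size([1, 4, 64, 64])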