With the rapid development of modern remote sensing technology, massive quantities of remote sensing images have come into wide use. Remote sensing images usually contain very rich surface-feature information, which has important research and application value. However, due to the high cost and complex imaging process of acquiring High Resolution Multispectral remote sensing images, a single optical remote sensing satellite sensor can only obtain Low Resolution Multispectral images or High Resolution Panchromatic images. To obtain High Resolution Multispectral images, a large number of remote sensing image fusion methods have been proposed to fuse the spatial and spectral information of multi-source remote sensing images and effectively improve the spatial and spectral resolution of remote sensing images. Remote sensing image fusion methods can be divided into Component Substitution methods, Multiresolution Analysis methods, Degradation Model-based methods, and Deep Neural Network-based methods. Among them, the Deep Neural Network-based methods have attracted the most attention, but they suffer from low fused-image quality, long training time, and limited practicability. Considering the above problems and the difficulties of remote sensing image fusion, three improved deep network design methods are proposed. The main research contents of this thesis are as follows:

1. Zero-Reference GAN for Fusion of Multispectral and Panchromatic Images. Existing remote sensing image fusion methods based on deep learning need large amounts of training data and training time to obtain ideal results. To overcome these two problems, this method establishes an adversarial framework between multi-scale generators and discriminators and adopts a "training while generating" strategy. Through the multi-scale generators, the High Resolution Multispectral image is gradually generated from the Low Resolution Multispectral image, and the spatial information of the Multispectral image is gradually enriched while its spectral information is preserved. The discriminators distinguish the spatial-information difference between the fused image and the Panchromatic image through a Spectral Response Filtering module. At the same time, to ensure the quality of the fused image, the method designs a zero-reference loss function, comprising an adversarial loss, spatial and spectral reconstruction losses, a spatial enhancement loss, and an average constancy loss, which effectively enhances the spectral and spatial details of the fused image. The method adopts a cascade network structure and uses only a single pair of images as test images, which overcomes the existing methods' need for large training datasets and long training times.

2. Pansharpening via Triplet Attention Network with Information Interaction. Most deep-learning-based remote sensing image fusion methods use the same network structure or the same processing to learn the spatial and spectral features of Panchromatic images and Low Resolution Multispectral images, ignoring the differences and complementarities between multi-source remote sensing images. To solve this problem, this method first designs two subnetworks with different attention mechanisms to extract spectral and spatial features from Low Resolution Multispectral images and Panchromatic images, respectively. Second, it enhances the complementarity between Low Resolution Multispectral images and Panchromatic images through an Information Interaction Module. Finally, it stacks the subnetwork feature maps containing spectral and spatial features, treats them as the nodes of a constructed graph, and computes the correlation between the feature maps through a Graph Attention Module. The feature maps represented by highly correlated nodes are selected as the features to be highlighted, which provides more spatial and spectral information for the reconstruction of the fused image.

3. HLF-Net: Pansharpening Based on High and Low Frequency Fusion Networks. High-frequency and low-frequency components affect the spatial details and the spectral information of the fused image differently. However, existing networks do not consider the difference between high and low frequencies in remote sensing images. In this method, two different networks are designed to process the high-frequency and low-frequency components of the source images, respectively. The High Frequency Fusion Network fuses the high-frequency components of the Low Resolution Multispectral and Panchromatic images and uses a U-Net with a Skip Convolution Block to preserve the spatial information in the feature maps. The Low Frequency Fusion Network preserves the spectral features of the image through a self-attention mechanism. Finally, the fused high-frequency and low-frequency components are added to obtain the fused image.

To sum up, this thesis studies remote sensing image fusion based on deep spatial and spectral networks and tests the above three methods on the GeoEye-1, QuickBird, WorldView-2, and WorldView-4 datasets. Experimental results show the effectiveness of the proposed methods.
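The high/low-frequency decomposition and additive recombination described for HLF-Net can be illustrated with a minimal NumPy sketch. This is not the thesis method itself: a simple box blur stands in for the learned low-pass branch, and the function names are illustrative. In HLF-Net the high-frequency path is a U-Net with Skip Convolution Blocks and the low-frequency path uses self-attention; here both are replaced by plain filtering to show only the split-and-add structure.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple low-pass filter: mean over a k x k neighborhood (edge-padded)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def split_frequencies(img, k=5):
    """Decompose an image into a low-frequency part (blur) and a
    high-frequency residual, so that low + high reconstructs the input."""
    low = box_blur(img, k)
    high = img - low
    return low, high

def naive_high_low_fusion(ms_band, pan, k=5):
    """Toy pansharpening in the HLF-Net spirit: take the low-frequency
    (spectral) content from the Multispectral band and the high-frequency
    (spatial detail) content from the Panchromatic image, then add them."""
    ms_low, _ = split_frequencies(ms_band, k)
    _, pan_high = split_frequencies(pan, k)
    return ms_low + pan_high
```

Because the high-frequency part is defined as the residual of the low-pass filter, the decomposition is exactly invertible (`low + high == img`), which is what makes the final fusion-by-addition step in the text well defined.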