
Convolution Neural Networks For Remote Sensing Pansharpening

Posted on: 2024-09-16
Degree: Master
Type: Thesis
Country: China
Candidate: X Wu
Full Text: PDF
GTID: 2542307079976909
Subject: Electronic information
Abstract/Summary:
With the development of computer technology, remote sensing images have been widely used in many fields, including politics, the economy, and society. However, limited by the optical systems carried on satellites, remote sensing images with both high spatial resolution and high spectral resolution cannot be acquired directly. To address this problem, researchers proposed the task of pansharpening. Pansharpening fuses a panchromatic (PAN) image with high spatial resolution and a multispectral (MS) image with low spatial resolution to generate a high spatial resolution image with the same spectral resolution as the MS image.

At present, convolutional neural network (CNN) methods have made great progress in remote sensing pansharpening. However, existing methods adopt either single-scale or multi-scale CNN frameworks, neither of which fully satisfies the requirements on spatial and spectral quality. Single-scale frameworks rely on a sufficiently deep network to obtain a large receptive field over the input image so that useful feature information can be extracted; however, the deep network introduces information distortion and has limited capacity to reconstruct the final result. Multi-scale frameworks capture contextual relationships through their multi-scale structure, which alleviates the information distortion to some extent, but they usually reduce the spatial resolution of the feature maps during feature extraction and do not fully account for the scale gap between feature maps of different resolutions; as a result, the relationship between details and semantic features is poorly coordinated and the spatial information is severely distorted.

This thesis proposes a dynamic cross feature fusion convolutional neural network for remote sensing pansharpening, dubbed DCFNet. DCFNet contains three parallel branches: the main branch maintains an end-to-end high-resolution feature representation, while a medium-resolution branch and a low-resolution branch are gradually injected into the main branch. Furthermore, to enhance the fusion effect and the representation ability, DCFNet adopts pre-fusion units and a pyramid cross-scale feature transfer module to improve the exchange of information among the three branches, thereby recovering the desired pansharpened images. Experimental results show that DCFNet significantly outperforms the state of the art in both quantitative metrics and visual quality.
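The abstract does not include an implementation, so the following is a minimal PyTorch sketch of the kind of three-branch, cross-scale design described above, under the assumptions that all branches share a PAN + upsampled-MS input and that lower-resolution features are upsampled and injected into the main branch. All names and layer choices (ThreeBranchPansharpener, conv_block, feats, and so on) are illustrative, not DCFNet's actual code.

```python
# Minimal sketch of a three-branch, cross-scale pansharpening network.
# Illustrative assumption only; not the thesis's actual DCFNet implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Plain 3x3 convolution + ReLU used by every branch in this sketch."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class ThreeBranchPansharpener(nn.Module):
    """High-, medium-, and low-resolution branches; lower-resolution features
    are upsampled and gradually injected into the high-resolution main branch."""

    def __init__(self, ms_bands=4, feats=32):
        super().__init__()
        # Main branch works at PAN resolution on the PAN + upsampled-MS stack.
        self.high_head = conv_block(ms_bands + 1, feats)
        # Medium/low branches work on downsampled copies of the same stack.
        self.mid_head = conv_block(ms_bands + 1, feats)
        self.low_head = conv_block(ms_bands + 1, feats)
        self.fuse_mid = conv_block(2 * feats, feats)  # inject medium-res features
        self.fuse_low = conv_block(2 * feats, feats)  # inject low-res features
        self.recon = nn.Conv2d(feats, ms_bands, 3, padding=1)

    def forward(self, pan, ms):
        # pan: (B, 1, H, W); ms: (B, C, H/4, W/4), as in 4x pansharpening.
        ms_up = F.interpolate(ms, size=pan.shape[-2:], mode="bicubic", align_corners=False)
        x = torch.cat([pan, ms_up], dim=1)

        # Parallel branches at three spatial scales.
        h = self.high_head(x)
        m = self.mid_head(F.avg_pool2d(x, 2))
        l = self.low_head(F.avg_pool2d(x, 4))

        # Cross-scale transfer: upsample and inject into the main branch.
        m_up = F.interpolate(m, scale_factor=2, mode="bilinear", align_corners=False)
        l_up = F.interpolate(l, scale_factor=4, mode="bilinear", align_corners=False)
        h = self.fuse_mid(torch.cat([h, m_up], dim=1))
        h = self.fuse_low(torch.cat([h, l_up], dim=1))

        # Residual reconstruction on top of the upsampled MS image.
        return ms_up + self.recon(h)


if __name__ == "__main__":
    net = ThreeBranchPansharpener()
    out = net(torch.randn(1, 1, 256, 256), torch.randn(1, 4, 64, 64))
    print(out.shape)  # torch.Size([1, 4, 256, 256])
```

The residual connection onto the upsampled MS image is a common choice in CNN pansharpening that keeps the output spectrally close to the input; the thesis's pre-fusion units and pyramid cross-scale feature transfer module are presumably richer than the two concatenation-plus-convolution steps used here.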
The contributions of this thesis are as follows:

1) A novel architecture, DCFNet, with cross-scale parallel branches and feature fusion modules specially designed for pansharpening. Benefiting from the information fidelity of the high-resolution branch and the multi-scale feature extraction ability of the auxiliary branches, the proposed method preserves spectral information more accurately while performing fusion without spatial reduction, and can reconstruct high-quality pansharpened images.

2) Novel feature fusion methods that help the multi-resolution branches capture inter-branch features. Each branch uses a multi-input representation as its head structure, and multispectral images of different resolutions are gradually injected into the feature branches through the pyramidal cross-scale feature transfer layer, which helps the network extract rich feature information. In addition, a dynamic branch fusion module is adopted to alleviate the redundancy and conflict among features from branches of different resolutions during the feature fusion process (a sketch of one possible realization is given after this list).

3) Extensive experiments on datasets acquired by different satellite sensors demonstrate that the proposed method significantly outperforms state-of-the-art methods and generalizes well, verifying its effectiveness. In addition, the discussion and analysis show that DCFNet is superior to classic multi-scale convolutional neural networks in feature extraction and in the recovery of spatial detail, and can be further applied to other vision tasks.
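The dynamic branch fusion module in contribution 2 is described only at a high level. One common way to realize such input-dependent weighting is a learned per-pixel gate over the branch features; the sketch below assumes that reading, and the class name DynamicBranchFusion and its parameters are hypothetical.

```python
# Hypothetical sketch of a dynamic branch-fusion step: the network predicts
# per-pixel weights over the branch features instead of simply adding them.
# This is one plausible reading of the abstract, not the thesis's exact module.
import torch
import torch.nn as nn


class DynamicBranchFusion(nn.Module):
    """Fuse features from several branches with learned, input-dependent weights."""

    def __init__(self, feats=32, num_branches=3):
        super().__init__()
        # Predict one weight map per branch from the concatenated features.
        self.gate = nn.Conv2d(num_branches * feats, num_branches, kernel_size=1)

    def forward(self, branch_feats):
        # branch_feats: list of (B, feats, H, W) tensors already at the same scale.
        stacked = torch.stack(branch_feats, dim=1)            # (B, K, feats, H, W)
        weights = self.gate(torch.cat(branch_feats, dim=1))   # (B, K, H, W)
        weights = torch.softmax(weights, dim=1).unsqueeze(2)  # (B, K, 1, H, W)
        # Weighted sum suppresses redundant or conflicting branch responses.
        return (weights * stacked).sum(dim=1)                 # (B, feats, H, W)


if __name__ == "__main__":
    fusion = DynamicBranchFusion(feats=32, num_branches=3)
    feats = [torch.randn(1, 32, 64, 64) for _ in range(3)]
    print(fusion(feats).shape)  # torch.Size([1, 32, 64, 64])
```

The softmax over the branch dimension makes the per-pixel weights compete, which is one simple way to suppress redundant or conflicting responses coming from branches at different resolutions.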
Keywords/Search Tags:Remote Sensing Pansharpening, Image Fusion, Convolution Neural Network, Multi-Scale Feature Representation