
Remote Sensing Image Fusion Algorithm Based On Convolutional Neural Networks

Posted on: 2020-12-07 | Degree: Master | Type: Thesis
Country: China | Candidate: F J Ye | Full Text: PDF
GTID: 2392330575978892 | Subject: Computer software and theory
Abstract/Summary:
In recent years, with the development of science and technology, multi-sensor remote sensing images have played an increasingly important role in applications such as environmental monitoring, geological disaster prevention, precision agriculture, and national defense. Many remote sensing satellites on various Earth observation platforms provide images of different spatial, temporal, and spectral resolutions. These images are recorded in digital form and processed by computer to produce image products suited to each application. Due to the limitations of satellite sensors, remote sensing satellites cannot capture images with both high spectral and high spatial resolution; instead, they acquire multispectral (MS) images with high spectral resolution and panchromatic (Pan) images with high spatial resolution, whereas practical applications often require both kinds of information at once.

According to the fusion level, remote sensing image fusion methods can be divided into pixel-level, feature-level, and decision-level approaches, with the fused images serving different application purposes. Pan-sharpening is a pixel-level fusion technique that combines the spectral information of MS images with the spatial information of Pan images, so that the fused image has both high spectral and high spatial resolution. Traditional algorithms rely on hand-crafted fusion rules, and the quality of these rules severely restricts the quality of the final fused images. Convolutional neural networks (CNNs) have been widely used in computer vision since 2012, owing to their powerful representation capabilities and natural suitability for images, and have achieved breakthroughs in recent years.

In this paper, we propose a multi-scale fusion model built with CNNs. The model implicitly represents an end-to-end fusion function whose inputs are a pair of source images and whose output is a fused image. Specifically, we use an N×N convolution to integrate the pixels of each N×N region into one pixel to be fused, and then use a 1×1 convolution to combine multiple pixels to be fused into one fused pixel. As the number of network layers increases, we obtain the multi-scale fusion model proposed in this paper. In addition, due to limitations of the training data set, the inputs of the fusion model are MS images and approximately panchromatic (APan) images. When fusing remote sensing images, we use the Nonsubsampled Contourlet Transform (NSCT) to decompose the Pan image into an APan image; the APan image and the MS image are then input into the fusion model to obtain the final fused image. The proposed method overcomes the shortcoming of traditional fusion methods, in which the fusion rules are formulated by hand, and instead learns an adaptive, robust fusion function from a large amount of training data.

Landsat and QuickBird satellite data are used to verify the effectiveness of the proposed method. The proposed fusion algorithm outperforms the comparison algorithms in both subjective and objective evaluation, and the fused image preserves the spectral information of the MS image and the spatial information of the Pan image well.
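The two-stage convolution described above (an N×N convolution that integrates each N×N region into one "pixel to be fused" per band, followed by a 1×1 convolution that combines the per-band pixels into a single fused pixel) can be illustrated with a minimal NumPy sketch. This is a hypothetical toy illustration, not the thesis's trained model: the input patches, kernels, and combination weights are random stand-ins, and a real model would learn them from training data.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 2-D 'valid' convolution over a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical toy inputs: a 2-band MS patch and an APan patch, all 8x8.
rng = np.random.default_rng(0)
ms = rng.random((2, 8, 8))    # multispectral bands (spectral information)
apan = rng.random((8, 8))     # approximately panchromatic band (spatial information)

# Stage 1: an N x N convolution (here N = 3) integrates each 3x3 region
# of every input band into one "pixel to be fused" per band.
k3 = rng.random((3, 3))
pixels_to_fuse = [conv2d_valid(band, k3) for band in list(ms) + [apan]]

# Stage 2: a 1x1 convolution combines the per-band pixels at each spatial
# location into a single fused pixel, i.e. a weighted sum across bands.
weights = rng.random(len(pixels_to_fuse))
fused = sum(w * p for w, p in zip(weights, pixels_to_fuse))

print(fused.shape)  # (6, 6): the 3x3 valid convolution shrinks each side by 2
```

Stacking more such layers, as the abstract notes, yields the multi-scale version of the model: each additional N×N stage aggregates information over a larger effective region of the source images.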
Keywords/Search Tags:Remote sensing image fusion, Convolutional neural networks, Deep Learning, Image enhancement