With the popularization of remote sensing technology, remote sensing images have been widely used in water quality monitoring, military reconnaissance, and disaster early warning. Due to the technical limitations of satellite sensors, it is difficult to obtain multispectral images with high spatial resolution from a single sensor. To alleviate this problem, many optical Earth-observation satellites, such as GeoEye, IKONOS, and QuickBird, carry two optical sensors that simultaneously capture two types of images of the same geographic area with different but complementary properties: panchromatic images with high spatial resolution and multispectral images with high spectral resolution. In practical applications, however, both types of information are often needed, so remote sensing image fusion has become a research hotspot. In recent years, convolutional neural networks have shown particularly attractive performance in this field. Unlike traditional methods, a convolutional neural network can learn the upsampling of different bands end to end and retain richer spectral information. To obtain high-quality fused images, two remote sensing image fusion algorithms based on multi-scale dilated convolution are proposed in this thesis. The research contents are as follows:

1. To avoid the limitations of feature extraction in shallow networks, this thesis uses a deep network to reconstruct low-resolution multispectral images, which improves the convergence speed and the ability to recover details. The multi-scale residual hybrid dilated convolution module in the network enlarges the receptive field without increasing the number of parameters and avoids the gridding effect caused by dilated convolution. Residual learning is used to effectively alleviate the overfitting caused by the deep network. Experimental results show that this network is superior to other state-of-the-art algorithms in both subjective visual quality and objective evaluation.

2. Directly upsampling the multispectral image to the size of the panchromatic image does not make full use of the feature information of both. This thesis therefore proposes a dual-stream feature-extraction fusion network that extracts features from the multispectral and panchromatic images separately; the large scale gap between the two is relieved by the design of the network architecture. The network uses multi-scale dilated convolutional dense modules to reduce gradient vanishing during training and to avoid severe loss of local detail. Through spectral mapping, the original spectral information is injected into the fused image without any processing, which yields a smaller training error and greatly reduces the difficulty of reconstruction. Experimental results show that the network preserves spectral information and improves detail recovery.
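The multi-scale residual hybrid dilated convolution module described in part 1 can be sketched as follows. This is a minimal illustration in PyTorch, not the thesis's actual implementation: the class name, channel counts, and the specific dilation schedule (1, 2, 3, a common "hybrid" choice whose rates share no common factor, which avoids the gridding effect) are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MultiScaleHDCBlock(nn.Module):
    """Hypothetical sketch of a multi-scale residual hybrid dilated
    convolution block: parallel 3x3 convolutions with dilation rates
    1, 2, and 3 enlarge the receptive field without adding parameters
    per scale level; a 1x1 convolution fuses the branches, and a
    residual connection adds the input back."""

    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList([
            # padding = dilation keeps the spatial size unchanged for a 3x3 kernel
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 3)  # hybrid rates with no common factor > 1
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, 1)  # 1x1 channel fusion
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = torch.cat([self.relu(b(x)) for b in self.branches], dim=1)
        return x + self.fuse(feats)  # residual learning

block = MultiScaleHDCBlock(32)
y = block(torch.randn(1, 32, 64, 64))
print(tuple(y.shape))  # spatial size preserved: (1, 32, 64, 64)
```

Because each branch uses `padding = dilation`, all branches keep the input's spatial size, so their outputs can be concatenated and the residual added directly.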
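The dual-stream design with spectral mapping described in part 2 can likewise be sketched. Again a hedged, simplified illustration in PyTorch, assuming a 4-band multispectral input and a 4x resolution gap to the panchromatic image; all names, layer widths, and the bicubic upsampling choice are hypothetical, and the thesis's dense modules are abbreviated to plain convolutions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualStreamFusionNet(nn.Module):
    """Hypothetical sketch of a dual-stream fusion network: separate
    shallow encoders for the multispectral (MS) and panchromatic (PAN)
    inputs, feature-level fusion, and a spectral-mapping skip path that
    injects the upsampled MS image directly into the output."""

    def __init__(self, ms_bands=4):
        super().__init__()
        self.ms_enc = nn.Sequential(nn.Conv2d(ms_bands, 32, 3, padding=1), nn.ReLU())
        self.pan_enc = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU())
        self.fuse = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, ms_bands, 3, padding=1),
        )

    def forward(self, ms, pan):
        # Bridge the scale gap: bring MS up to the PAN resolution first.
        ms_up = F.interpolate(ms, size=pan.shape[-2:], mode="bicubic",
                              align_corners=False)
        feats = torch.cat([self.ms_enc(ms_up), self.pan_enc(pan)], dim=1)
        # Spectral mapping: the network only predicts a detail residual,
        # so the original spectral content passes through unaltered.
        return ms_up + self.fuse(feats)

net = DualStreamFusionNet(ms_bands=4)
out = net(torch.randn(1, 4, 16, 16), torch.randn(1, 1, 64, 64))
print(tuple(out.shape))  # fused image at PAN resolution: (1, 4, 64, 64)
```

The additive skip path is one way to realize the "inject the original spectral information without any processing" idea: the trainable layers only have to learn the residual detail, which keeps the initial training error small.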