
Multi-source Remote Sensing Image Fusion And Classification Technology Based On End-to-end Deep Learning

Posted on: 2021-01-16    Degree: Master    Type: Thesis
Country: China    Candidate: Y Chen    Full Text: PDF
GTID: 2392330605976000    Subject: Computer technology
Abstract/Summary:
With the steady advancement of China's civil space infrastructure program, the variety of satellites in orbit keeps growing, and large volumes of multi-platform, multi-modal data have become available. Research built on such data, for example target recognition and classification with hyperspectral imagery, has been applied successfully to agricultural monitoring, disaster warning, and urban planning. When only single-source data are used, however, such applications are constrained by the imaging characteristics of that single source. This thesis therefore studies a deep-learning-based multi-source remote sensing data fusion framework and a deep-learning-based multi-source remote sensing image classification model. Both the fusion framework and the collaborative classification model follow the end-to-end idea, but they differ in design. Beyond differences in the depth of the encoder-decoder convolutional layers, the collaborative classification model uses the end-to-end network to establish a mapping between the multi-source data, extract joint features from them, and provide a basis for subsequent feature classification, whereas the fusion framework adds dense blocks and a fusion module to the end-to-end model, so that multi-source feature information is exploited effectively and the fused features are reconstructed by the decoder into the desired fusion image. The main research contents of this thesis are as follows:

1. Hyperspectral images identify the type and surface state of targets through fine spectral analysis, so many materials that are otherwise hard to detect can be recognized in them. Under a fixed signal-to-noise ratio, however, an increase in spectral resolution necessarily comes at the expense of spatial resolution, and no current technology directly acquires remote sensing images with both high spatial and high spectral resolution. In addition, hyperspectral data are high-dimensional, labeled samples are difficult to collect, and paired data for fusion are scarce; research on hyperspectral image fusion is therefore relatively limited, and existing algorithms struggle to produce satisfactory fusion results. To obtain remote sensing images that combine high spatial and high spectral resolution, this thesis uses deep learning to design a multi-source remote sensing data fusion framework that merges high-spectral-resolution images with high-spatial-resolution images. The fused image preserves the spectral physical characteristics of the original data while improving its spatial resolution. Based on this analysis, the thesis realizes multi-source data fusion with an end-to-end network: the encoding network extracts deep features of the input images and performs feature mapping through dense blocks, whose cascaded inter-layer connections make effective use of intermediate feature maps; a fusion layer produces the fused feature map, and the decoding network reconstructs the final image.
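The abstract does not give the exact network configuration; purely as an illustration of the encoder / dense-block / fusion-layer / decoder pipeline just described, a minimal PyTorch-style sketch might look as follows. All channel widths, layer counts, and the simple additive fusion rule are assumptions for illustration, not the settings used in the thesis.

```python
# Illustrative sketch only: layer widths, kernel sizes, and the additive fusion
# rule are assumptions, not the configuration reported in the thesis.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Cascaded conv layers whose outputs are concatenated (dense connections)."""
    def __init__(self, in_ch, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1), nn.ReLU(inplace=True)))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)  # intermediate feature maps are reused

class FusionNet(nn.Module):
    """Shared encoder -> feature-level fusion -> decoder reconstruction."""
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True),
            DenseBlock(base))
        enc_ch = self.encode[-1].out_ch
        self.decode = nn.Sequential(
            nn.Conv2d(enc_ch, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, in_ch, 3, padding=1))

    def forward(self, x_spectral, x_spatial):
        # Encode both sources, fuse their feature maps, then reconstruct.
        fused = self.encode(x_spectral) + self.encode(x_spatial)  # additive fusion
        return self.decode(fused)

if __name__ == "__main__":
    net = FusionNet(in_ch=1)
    band = torch.randn(1, 1, 64, 64)   # one upsampled hyperspectral band
    pan = torch.randn(1, 1, 64, 64)    # high-spatial-resolution image
    print(net(band, pan).shape)        # torch.Size([1, 1, 64, 64])
```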
2. For multi-source remote sensing data classification, traditional algorithms suffer from low utilization of multi-source data, reliance on a single type of extracted feature, and easy destruction of the feature space structure during classification. Taking deep learning as the main method, this thesis designs a multi-source remote sensing collaborative classification model based on an I-To-I CNN (Image-To-Image Convolutional Neural Network) tailored to the characteristics of multispectral and panchromatic images. Built on the end-to-end idea, the model establishes a mapping between the multi-source data and mines their internal correlations in depth. It uses the end-to-end network to extract joint features from the multispectral and panchromatic images, concatenates the joint features extracted by the I-To-I CNN, and finally classifies them with a classifier, thereby improving feature utilization. By extracting and merging the deep features of the multi-source data, the model preserves the image-spectrum integration property of multispectral imagery while completing the fused extraction of multi-source features. Experimental results show that, on the basis of end-to-end feature-level fusion of the multi-source data, the method achieves an effective improvement in classification accuracy.
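Again as illustration only, the joint-feature / concatenate / classify pattern described for the collaborative classification model can be sketched as two image-to-image branches whose feature maps are concatenated and passed to a pixel-wise classifier. Branch depths, channel counts, band numbers, and the 1x1 classification head below are assumptions, not the thesis's actual architecture.

```python
# Illustrative sketch: branch depths, channel counts, and the classifier head
# are assumptions used only to show the joint-feature / concatenate / classify idea.
import torch
import torch.nn as nn

def conv_branch(in_ch, feat_ch=32):
    """Small image-to-image branch that keeps spatial size (feature extractor)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True))

class CollaborativeClassifier(nn.Module):
    def __init__(self, ms_bands=4, pan_bands=1, feat_ch=32, n_classes=10):
        super().__init__()
        self.ms_branch = conv_branch(ms_bands, feat_ch)    # multispectral branch
        self.pan_branch = conv_branch(pan_bands, feat_ch)  # panchromatic branch
        # Pixel-wise classifier applied to the concatenated joint features.
        self.classifier = nn.Conv2d(2 * feat_ch, n_classes, 1)

    def forward(self, ms, pan):
        joint = torch.cat([self.ms_branch(ms), self.pan_branch(pan)], dim=1)
        return self.classifier(joint)   # per-pixel class scores (logits)

if __name__ == "__main__":
    model = CollaborativeClassifier()
    ms = torch.randn(1, 4, 64, 64)   # multispectral patch (upsampled to pan size)
    pan = torch.randn(1, 1, 64, 64)  # panchromatic patch
    print(model(ms, pan).shape)      # torch.Size([1, 10, 64, 64])
```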
Keywords/Search Tags: Remote sensing image interpretation, Feature extraction, Remote sensing data fusion, Collaborative classification, Deep learning