
Researches Of Fusion And Classification Techniques For Hyperspectral Image Based On Deep Learning

Posted on: 2020-09-05
Degree: Doctor
Type: Dissertation
Country: China
Candidate: M M Zhang
Full Text: PDF
GTID: 1362330605972475
Subject: Control Science and Engineering
Abstract/Summary:
With the rapid development of sensor technology, a wealth of multi-platform, multi-modal data can now be obtained. Hyperspectral imagery, a representative of such high-volume data, captures the spatial, spectral, and radiometric information of an observation target simultaneously, so that descriptions of the objective world take on multi-scale, multi-angle, and multi-dimensional characteristics. At present, hyperspectral data have been successfully applied to substance classification and target detection in agricultural monitoring, disaster warning, and medical detection. How to extract valuable characteristics from hyperspectral imagery and thereby improve the interpretation and classification accuracy of observed areas remains a challenging subject. In addition, for hyperspectral data, gains in spectral resolution often come at the expense of spatial resolution, and single-source hyperspectral data therefore limit many applications that are sensitive to spatial resolution and radiometric information. Given the cooperation and complementarity between hyperspectral imagery and other data sources, accurate interpretation and classification based on intelligent information fusion is a research topic of great significance and broad application prospects.

Building on hyperspectral data, this dissertation focuses on fusion and classification techniques, including spatial-spectral feature based hyperspectral image classification, fusion and classification of hyperspectral and high-spatial-resolution visible images, and fusion and classification of hyperspectral and light detection and ranging (LiDAR) data. In view of the current problems in hyperspectral and multi-source remote sensing fusion classification, the feasibility of different fusion classification techniques is analysed in depth. Remote sensing images collected by the AVIRIS, ROSIS, and AISA Eagle sensors serve as experimental data, and multi-level fusion classification frameworks are realized following the overall pipeline of data pre-processing, typical feature extraction, and classifier design. The main research contents can be summarized as follows:

1. To address the limited spatial perception and skewed edge description of existing hyperspectral image classification methods, a diverse region-based convolutional neural network (DR-CNN) is proposed for hyperspectral image classification. The network consists of four direction-based spatial input blocks, one local spatial block, and one global spatial block. Each direction-based branch serves as a feature extractor that learns the orientation characteristics of a sample, ensuring reliable fine-grained classification of edge pixels; the local spatial branch is guided to extract and deepen the relatively pure spectral features of the hyperspectral data; and the global spatial branch is guided to capture the global contextual information of the input sample, including spatial textures and context interactions between different categories. Experiments on hyperspectral data sets collected by the AVIRIS and ROSIS sensors in different regions robustly validate the superiority of the proposed method for hyperspectral image classification, and further demonstrate the feasibility and potential of diverse inputs for spatial-spectral information extraction.

2. Traditional machine learning methods for multi-source data classification cannot take full advantage of the available information, and their feature extraction lacks diversity and flexibility, which leads to the curse of dimensionality. To mitigate these problems, with multi-source remote sensing collaborative classification as the research objective, the collaborative classification of hyperspectral imagery (HSI) with high-spatial-resolution visible images (VIS), and of HSI with LiDAR data, is studied, and two collaborative classification frameworks based on multiple-feature extraction and collaborative utilization are proposed. The first framework performs HSI and VIS collaborative classification based on SLIC super-pixel segmentation: the visible image is segmented into super-pixels, which then serve as spatial guidance for a decision-level fusion of the initial HSI and VIS classification results. The second framework targets HSI and LiDAR collaborative classification based on multiple-feature integration: multiple features are integrated and then classified with a composite-kernel support vector machine. Detailed experiments show that the proposed methods are advantageous for multi-source remote sensing collaborative classification and improve classification accuracy by exploiting the multi-type information contained in multi-source data. This work is also of significance for promoting the fusion of both multi-source data and multi-type features.

3. To effectively correlate multi-sensor data and construct a deep collaboration framework for multi-source data under small-sample conditions, a Patch-to-Patch (PToP) cross-domain learning model is systematically designed in this dissertation. The model establishes information mappings between different domains and extracts deep joint features of the different sources through a cross-source reconstruction process, thereby solving the multi-source feature extraction and deep collaborative classification problem restricted by small samples. Focusing on joint feature extraction and collaborative classification of hyperspectral and LiDAR data, a three-channel PToP cross-domain mapping network is first constructed to achieve seamless integration of the two sources. Second, a hierarchical fusion module is constructed for the collaborative representation of multiple features, including multi-channel, multi-source, and multi-hidden-layer features. Finally, the classification result is obtained through a three-layer fully connected network. The goal of this method is to improve classification accuracy by taking full advantage of the integrity and reliability of multi-source data, thus achieving a comprehensive description of observation scenes. To evaluate the validity and reliability of the method, the Houston and Trento data sets were used as standard benchmarks, and a comprehensive comparison was made with various research schemes. The experimental results show that the proposed framework contains a reliable unsupervised multi-source feature extraction scheme that is robust to small samples, and that the discriminative features acquired through the extractor facilitate subsequent classification and interpretation tasks. Visual results further indicate the reliability of the proposed method for hyperspectral and LiDAR data classification, a representative task of multi-source remote sensing classification.
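The diverse inputs of the DR-CNN described in research content 1 can be illustrated with a small patch-extraction sketch. This is not the dissertation's network itself, only a minimal numpy rendering of the six input blocks (four direction-based, one local, one global); the window sizes `half` and `local` are illustrative assumptions.

```python
import numpy as np

def diverse_regions(cube, row, col, half=4, local=1):
    """Extract six DR-CNN-style inputs around pixel (row, col).

    cube : (H, W, B) hyperspectral array, assumed padded so all windows fit.
    Returns four direction-based blocks (up/down/left/right), one small
    local block of relatively pure spectra, and one global context block.
    """
    r0, r1 = row - half, row + half + 1
    c0, c1 = col - half, col + half + 1
    up    = cube[r0:row + 1, c0:c1, :]   # rows above and including the pixel
    down  = cube[row:r1, c0:c1, :]       # rows below and including the pixel
    left  = cube[r0:r1, c0:col + 1, :]   # columns left of and incl. the pixel
    right = cube[r0:r1, col:c1, :]       # columns right of and incl. the pixel
    loc   = cube[row - local:row + local + 1,
                 col - local:col + local + 1, :]   # local, near-pure spectra
    glob  = cube[r0:r1, c0:c1, :]        # full window: global spatial context
    return up, down, left, right, loc, glob
```

In the dissertation's design, each of these blocks would feed its own network branch, and the branch outputs would be combined before classification.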
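The super-pixel-guided decision fusion of research content 2 admits a simple reading: within each super-pixel, the pixel-wise labels of both sources vote and the majority class wins. The sketch below assumes the super-pixel index map is already available (in practice it would come from SLIC segmentation of the visible image); the majority-vote rule is one plausible fusion rule, not necessarily the exact rule used in the dissertation.

```python
import numpy as np

def superpixel_decision_fusion(hsi_labels, vis_labels, segments):
    """Fuse two pixel-wise classification maps with super-pixel guidance.

    hsi_labels, vis_labels : (H, W) integer class maps from the two sources.
    segments : (H, W) super-pixel index map (e.g. SLIC on the VIS image).
    Inside each super-pixel, both maps vote; the majority class is assigned
    to every pixel of that super-pixel.
    """
    fused = np.empty_like(hsi_labels)
    for sp in np.unique(segments):
        mask = segments == sp
        votes = np.concatenate([hsi_labels[mask], vis_labels[mask]])
        fused[mask] = np.bincount(votes).argmax()  # majority class wins
    return fused
```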
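The composite-kernel support vector machine of research content 2 rests on the fact that a convex combination of valid kernels is itself a valid kernel. A minimal sketch, assuming RBF kernels over separately extracted spectral and spatial features and an illustrative weight `mu`:

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gram matrix of the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def composite_kernel(X_spec, X_spat, mu=0.5, gamma=1.0):
    """Weighted sum of a spectral and a spatial RBF Gram matrix.

    The result is a valid kernel and can be handed to any kernel classifier
    that accepts a precomputed Gram matrix (e.g. an SVM); mu balances the
    contribution of the two feature families.
    """
    return mu * rbf_kernel(X_spec, gamma) + (1.0 - mu) * rbf_kernel(X_spat, gamma)
```

In practice the composite Gram matrix would be passed to an SVM trained with a precomputed kernel, with `mu` tuned by cross-validation.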
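The cross-source reconstruction objective behind the PToP model of research content 3 can be sketched in its smallest form: learn a mapping from one source's features to the other's, then use the mapping's output alongside the originals as a joint representation. The dissertation learns this mapping with a deep three-channel network and a hierarchical fusion module; the least-squares linear map and flat concatenation below are deliberately simplified stand-ins.

```python
import numpy as np

def fit_cross_domain_map(hsi_feats, lidar_feats):
    """Fit a linear map W so that hsi_feats @ W approximates lidar_feats.

    A single least-squares layer standing in for the PToP cross-domain
    mapping network: it is trained only by reconstruction, i.e. without
    class labels, mirroring the unsupervised character of the extractor.
    """
    W, *_ = np.linalg.lstsq(hsi_feats, lidar_feats, rcond=None)
    return W

def joint_features(hsi_feats, lidar_feats, W):
    """Concatenate both sources with the cross-domain reconstruction,
    a flat analogue of the hierarchical fusion module in the text."""
    recon = hsi_feats @ W
    return np.concatenate([hsi_feats, lidar_feats, recon], axis=1)
```

The joint features would then feed a small fully connected classifier, as the three-layer network does in the dissertation's framework.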
Keywords/Search Tags: hyperspectral image, collaborative classification, feature extraction, data fusion, deep learning, convolutional neural network, remote sensing data