
Feature Extraction And Fusion For Classification Of Remote Sensing Imagery

Posted on: 2018-05-09    Degree: Doctor    Type: Dissertation
Country: China    Candidate: R B Luo    Full Text: PDF
GTID: 1318330533967090    Subject: Control theory and control engineering
Abstract/Summary:
Recent advances in remote sensing technology allow us to measure different aspects of objects on the Earth, from the spectral characteristics captured in multispectral and hyperspectral images to the height information contained in Light Detection and Ranging (LiDAR) data. Despite the richness of the available information, automatic interpretation of remote sensing data remains challenging. Huge data volumes and increasing dimensionality hamper processing, causing problems in both computational complexity and storage. Moreover, classification techniques in pattern recognition typically assume that enough training samples are available to obtain accurate quantitative class descriptions, yet in many real applications collecting ground truth is expensive and time consuming. With high-dimensional remote sensing data (e.g. hyperspectral imagery), limited training samples may lead to the Hughes phenomenon during classification. Last but not least, different data sources have different advantages and shortcomings: hyperspectral imagery provides plentiful and valuable spectral information about the objects of interest, but cannot distinguish different objects made of the same material and is easily affected by weather conditions such as clouds; LiDAR data provide useful information about the size, structure and elevation of objects, but have difficulty discriminating objects that are similar in altitude yet quite different in nature. Extracting complementary information from multi-source data to improve recognition accuracy therefore remains very difficult.

To address these challenges, this dissertation focuses on developing new feature extraction and data fusion techniques that improve the classification accuracy of remote sensing imagery. In general, the proposed methods extract more effective features for higher classification accuracy and reduce computational complexity, leading to potential improvements in the processing of huge datasets. The contributions can be summarized as follows:

· The first contribution of this thesis is the exploration of supervised feature extraction algorithms for classification of hyperspectral remote sensing imagery that combine local geometrical structure with label information. Specifically, discriminative supervised neighborhood preserving embedding (DSNPE) and principal component analysis (PCA)-based supervised locality preserving projection (PSLPP) are presented. DSNPE incorporates label information into a linear neighborhood preserving extraction method: when projecting samples from the high-dimensional feature space into a low-dimensional space, it pulls neighboring points with the same class label closer together while simultaneously pushing neighboring points with different labels apart. PSLPP first uses PCA to remove noise and redundancy, and then combines label information with locality preserving projection to construct similarities between samples (an illustrative sketch is given at the end of this abstract).

· Because the number of labelled training samples is usually insufficient for supervised feature learning, the second contribution is a set of novel semi-supervised feature extraction methods that combine a limited number of labelled samples with a large number of unlabelled samples. First, this thesis improves an existing semi-supervised method by taking into account the correlations between labelled and unlabelled samples. Secondly, a semi-supervised graph learning (SEGL) method is proposed. The main contribution of SEGL is the construction of a semi-supervised graph that models the similarities between samples: labelled samples are connected according to their label information, unlabelled samples are connected via their nearest-neighbor information, and connections between labelled and unlabelled samples are based on the distance between class centers and the unlabelled samples. Every connection is assigned a weighted edge to better model the actual differences and similarities between samples (a sketch of this graph construction is given at the end of this abstract). Lastly, SEGL is extended to both the spectral and spatial domains, building a semi-supervised fusion graph to model the correlations between samples.

· In order to combine the complementary information from multi-sensor data and improve classification performance, a novel framework is proposed that fuses hyperspectral and LiDAR images for classification of remote sensing scenes partly covered by cloud shadow. In the proposed framework, the cloud-shadow and shadow-free regions are processed separately. First, a cloud-shadow mask is extracted to divide the remotely sensed scene into two parts (cloud shadow and shadow free). The shadow-free region is then classified with the available training samples by integrating multiple features (e.g. spectral features from the raw HS image, spatial features generated from the HS image, and elevation from the LiDAR data). For classification of the cloud-shadow region, the proposed method generates reliable training sample sets from that region by searching for the nearest neighbors of each class center (obtained from the LiDAR data) based on both spectral and spatial features. The pixels in the cloud-shadow area are classified with a strategy similar to that used for the shadow-free region, but the classifier is trained with the newly generated reliable training samples. The final classification map is produced by decision fusion of the classification results from the shadow-free and cloud-shadow regions (a sketch of this two-branch pipeline is given at the end of this abstract). The proposed framework makes full use of the advantages of the different data sources.

· The last contribution is to speed up non-linear feature extraction methods by exploiting the advantages of the GPU. Non-linear feature extraction methods such as kernel principal component analysis (KPCA) are better suited to describing non-linear, higher-order distributions of the data, but they have relatively high computational complexity and need long execution times. An efficient parallel implementation of the KPCA feature extraction algorithm on a graphics processing unit (GPU), based on the Jacket MATLAB Toolbox, is developed in this thesis. With the proposed parallel strategy, non-linear feature extraction methods such as KPCA can be sped up significantly (by more than a factor of 100) without losing classification accuracy (a KPCA sketch is given at the end of this abstract).

Experiments on several real data sets show that, compared with state-of-the-art methods, the techniques developed in this thesis improve classification accuracy and are computationally efficient.
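The following is a minimal sketch of the PSLPP idea from the first contribution: PCA is applied to remove noise and redundancy, and a label-aware locality preserving projection is then computed from a neighborhood graph that only links samples of the same class. The function name, the heat-kernel weighting and the specific graph rule are illustrative assumptions; the exact formulation in the thesis may differ.

```python
import numpy as np

def pslpp(X, y, n_pca=30, n_out=10, k=7, t=1.0):
    """Illustrative PCA-based supervised LPP: PCA for denoising, then a
    locality-preserving projection whose graph only links neighbours that
    share a class label (a common supervised variant, assumed here)."""
    # --- PCA step: remove noise and redundancy ---
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    P_pca = Vt[:n_pca].T                      # d x n_pca projection
    Z = Xc @ P_pca                            # samples in PCA space

    n = Z.shape[0]
    # --- supervised locality graph: attract same-class neighbours only ---
    d2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]     # k nearest neighbours (skip self)
        for j in nbrs:
            if y[i] == y[j]:
                W[i, j] = W[j, i] = np.exp(-d2[i, j] / t)

    D = np.diag(W.sum(axis=1))
    L = D - W                                 # graph Laplacian
    # --- LPP generalized eigenproblem: (Z^T L Z) a = lambda (Z^T D Z) a ---
    Sl = Z.T @ L @ Z
    Sd = Z.T @ D @ Z + 1e-6 * np.eye(n_pca)   # small ridge for stability
    evals, evecs = np.linalg.eig(np.linalg.solve(Sd, Sl))
    order = np.argsort(evals.real)            # smallest eigenvalues preserve locality
    A = evecs[:, order[:n_out]].real
    return P_pca @ A                          # overall d x n_out projection

# usage (illustrative): W = pslpp(X_train, y_train)
# features = (X_test - X_train.mean(axis=0)) @ W
```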
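Below is a minimal sketch of the semi-supervised graph construction described for SEGL: labelled samples are connected when they share a class label, unlabelled samples are connected to their nearest neighbours, and labelled-unlabelled connections are added via the closest class center, all with distance-based weights. The function name, the heat-kernel weights and the "closest-center" linking rule are illustrative assumptions rather than the thesis formulation.

```python
import numpy as np

def build_segl_graph(X_l, y_l, X_u, k=5, t=1.0):
    """Illustrative semi-supervised graph in the spirit of SEGL.
    X_l, X_u: labelled / unlabelled samples; y_l: integer label array.
    Returns a weighted adjacency matrix over [labelled; unlabelled]."""
    n_l, n_u = len(X_l), len(X_u)
    W = np.zeros((n_l + n_u, n_l + n_u))

    def heat(a, b):
        return np.exp(-np.sum((a - b) ** 2) / t)

    # labelled-labelled edges: connect samples sharing a class label
    for i in range(n_l):
        for j in range(i + 1, n_l):
            if y_l[i] == y_l[j]:
                W[i, j] = W[j, i] = heat(X_l[i], X_l[j])

    # unlabelled-unlabelled edges: connect each sample to its k nearest neighbours
    for i in range(n_u):
        d2 = np.sum((X_u - X_u[i]) ** 2, axis=1)
        for j in np.argsort(d2)[1:k + 1]:
            W[n_l + i, n_l + j] = W[n_l + j, n_l + i] = heat(X_u[i], X_u[j])

    # labelled-unlabelled edges: link each unlabelled sample to the labelled
    # samples of the class whose center lies closest to it
    centers = {c: X_l[y_l == c].mean(axis=0) for c in np.unique(y_l)}
    for i in range(n_u):
        c_best = min(centers, key=lambda c: np.sum((X_u[i] - centers[c]) ** 2))
        for j in np.where(y_l == c_best)[0]:
            W[j, n_l + i] = W[n_l + i, j] = heat(X_l[j], X_u[i])
    return W
```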
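The cloud-shadow fusion framework can be summarized as the two-branch pipeline sketched below: one classifier is trained on the available ground truth for the shadow-free region, pseudo training samples are generated inside the shadow region from the pixels closest to each class center, a second classifier handles the shadow region, and the two decisions are merged into the final map. The random-forest classifier, the feature-space class centers and the parameter n_pseudo are illustrative assumptions; the thesis derives the class centers with LiDAR support and does not fix a particular classifier in this abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_cloud_shadow_scene(feat, shadow_mask, train_idx, train_labels,
                                n_pseudo=50):
    """Illustrative two-branch fusion: 'feat' stacks spectral, spatial and
    LiDAR-derived features per pixel (N x D); 'shadow_mask' is a boolean
    vector marking cloud-shadow pixels; train_labels is an integer array."""
    labels = np.zeros(feat.shape[0], dtype=int)

    # --- branch 1: shadow-free region, trained on the available ground truth ---
    clf_free = RandomForestClassifier(n_estimators=200)
    clf_free.fit(feat[train_idx], train_labels)
    free = ~shadow_mask
    labels[free] = clf_free.predict(feat[free])

    # --- branch 2: cloud-shadow region, trained on pseudo-samples ---
    # class centers here are plain feature-space means of the labelled data
    pseudo_X, pseudo_y = [], []
    shadow_feat = feat[shadow_mask]
    for c in np.unique(train_labels):
        center = feat[train_idx][train_labels == c].mean(axis=0)
        d2 = np.sum((shadow_feat - center) ** 2, axis=1)
        nearest = np.argsort(d2)[:n_pseudo]       # most reliable shadow pixels
        pseudo_X.append(shadow_feat[nearest])
        pseudo_y.append(np.full(len(nearest), c))
    clf_shadow = RandomForestClassifier(n_estimators=200)
    clf_shadow.fit(np.vstack(pseudo_X), np.concatenate(pseudo_y))
    labels[shadow_mask] = clf_shadow.predict(shadow_feat)

    # the final map is the union of the two branch-wise decisions
    return labels
```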
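Finally, a plain KPCA sketch. The thesis accelerates KPCA on the GPU with the Jacket MATLAB Toolbox, which is not reproduced here; the NumPy version below only illustrates which operations (the N x N kernel matrix and its eigendecomposition) dominate the cost and therefore benefit from parallelization. Running the same code through a GPU-backed, NumPy-compatible array library (e.g. CuPy) is an assumption about a possible port, not the thesis implementation.

```python
import numpy as np   # a NumPy-compatible GPU library could be substituted here

def kpca(X, n_components=20, gamma=1.0):
    """Illustrative RBF-kernel PCA; the O(N^2) kernel matrix and its
    eigendecomposition are the GPU-friendly hot spots."""
    # pairwise squared distances and RBF kernel matrix
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

    # center the kernel matrix in feature space
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one

    # eigendecomposition; keep the leading components
    evals, evecs = np.linalg.eigh(Kc)
    idx = np.argsort(evals)[::-1][:n_components]
    alphas = evecs[:, idx] / np.sqrt(np.maximum(evals[idx], 1e-12))
    return Kc @ alphas        # projected training samples (N x n_components)
```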
Keywords/Search Tags: Remote sensing, Hyperspectral image, LiDAR, Feature extraction, Data fusion, Classification, Semi-supervised graph learning