
A neural relevance model for feature extraction from hyperspectral images, and its application in the wavelet domain

Posted on: 2007-08-03  Degree: Ph.D  Type: Dissertation
University: Rice University  Candidate: Mendenhall, Michael J  Full Text: PDF
GTID: 1448390005474632  Subject: Engineering
Abstract/Summary:
Our research is motivated by military applications related to contingency planning. Of recent interest is the identification of landmasses that can support the landing and takeoff of fixed-wing and rotary aircraft, where accurate classification of the surface cover is of utmost importance.

In a supervised classification scenario, a natural question is whether a subset of the input features (spectral bands) could be used without degrading classification accuracy. Our interest in feature extraction is twofold. First, we desire a significantly reduced set of features by which we can compress the signal. Second, we desire to enhance classification performance by eliminating superfluous signal content. Feature extraction models based on PCA or wavelets judge feature importance by the magnitude of the transform coefficients, which rarely leads to an appropriate set of features for classification.

We analyze a recent neural paradigm, Generalized Relevance Learning Vector Quantization (GRLVQ) [1], to discover input dimensions relevant for classification. GRLVQ is based on, and substantially extends, Learning Vector Quantization (LVQ) [2] by learning relevant input dimensions while incorporating classification accuracy in the cost function. LVQ is the supervised counterpart of Kohonen's unsupervised Self-Organizing Map [2]; it iteratively adjusts prototype vectors to define class boundaries while minimizing the Bayes risk. Our analysis reveals two major algorithmic deficiencies of GRLVQ. Fixing these deficiencies leads to improved convergence and classification accuracy; we call our improved version GRLVQ-Improved (GRLVQI). Using only the relevant spectral channels discovered by GRLVQ, we show that one can achieve classification accuracy as good as or better than that obtained with all spectral channels. We support this claim by running an independent classifier on the reduced feature set, using 23 classes of a real 194-band remotely sensed hyperspectral image.
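To make the GRLVQ mechanics concrete, the following is a minimal sketch of one GRLVQ training step, assuming the standard GLVQ-style cost mu = (d_j - d_k)/(d_j + d_k), where d_j and d_k are relevance-weighted squared distances to the closest correct and closest incorrect prototypes. The function name, learning rates, and the clip-and-renormalize step for the relevance weights are illustrative choices, not the dissertation's exact formulation.

```python
import numpy as np

def grlvq_step(x, y, protos, proto_labels, lam, lr_w=0.05, lr_l=0.005):
    """One GRLVQ update (illustrative sketch): move the closest correct and
    closest incorrect prototypes, then adapt the per-dimension relevance
    weights lam via the gradient of mu = (d_j - d_k) / (d_j + d_k)."""
    # relevance-weighted squared distances to all prototypes
    d = ((protos - x) ** 2 * lam).sum(axis=1)
    correct = proto_labels == y
    j = np.where(correct)[0][np.argmin(d[correct])]    # closest correct prototype
    k = np.where(~correct)[0][np.argmin(d[~correct])]  # closest incorrect prototype
    dj, dk = d[j], d[k]
    denom = (dj + dk) ** 2
    gj = 2.0 * dk / denom   # |d mu / d d_j|
    gk = 2.0 * dj / denom   # |d mu / d d_k|
    dxj = x - protos[j]
    dxk = x - protos[k]
    # attract the correct prototype, repel the incorrect one (relevance-weighted)
    protos[j] += lr_w * gj * lam * dxj
    protos[k] -= lr_w * gk * lam * dxk
    # relevance update: dimensions that help separate the classes gain weight
    lam -= lr_l * (gj * dxj ** 2 - gk * dxk ** 2)
    lam = np.clip(lam, 0.0, None)
    lam /= lam.sum()        # keep relevances non-negative and normalized
    return protos, lam
```

On toy data where only one spectral band carries class information, repeated calls to this step drive the relevance weight of that band upward, which is exactly the property exploited above to discard irrelevant channels.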
The higher the data dimension and/or the larger the number of classes, the greater the advantage GRLVQI shows over GRLVQ. The improved performance of GRLVQI over GRLVQ is substantiated using several different methods discussed in the literature, and we come to the important conclusion that the improvements obtained by GRLVQI are statistically significant.

A new feature extraction model is presented by applying GRLVQI in the wavelet domain. Our model is driven by classification requirements rather than signal reconstruction: it does not follow the largest-magnitude coefficient selection typical of wavelet analysis, and the most relevant wavelet features turn out to differ from the largest-magnitude ones. Further, it allows for a linear selection of wavelet coefficients based on their computed relevances. We extend this work to complex wavelets in order to mitigate the effects of discontinuities introduced into the spectra by the deletion of spectral bands containing irrecoverably corrupted data. The Dual-Tree Complex Wavelet Transform shows improved classification results with feature extraction capabilities similar to those of the Critically Sampled Discrete Wavelet Transform. Our results demonstrate the superior classification and feature reduction performance of our relevance-wavelet model.
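The contrast between magnitude-based and relevance-based coefficient selection can be sketched as follows, using a one-level Haar transform as a stand-in for the critically sampled DWT applied to each spectrum. The function names and the single decomposition level are illustrative assumptions; the dissertation's actual wavelet and depth may differ.

```python
import numpy as np

def haar_dwt(signal):
    """One-level Haar DWT of an even-length signal: returns
    (approximation, detail) coefficients."""
    s = np.asarray(signal, dtype=float)
    a = (s[0::2] + s[1::2]) / np.sqrt(2)  # local averages
    d = (s[0::2] - s[1::2]) / np.sqrt(2)  # local differences
    return a, d

def select_by_magnitude(coeffs, k):
    """Conventional wavelet feature selection: keep the k
    largest-magnitude coefficients."""
    return np.argsort(np.abs(coeffs))[::-1][:k]

def select_by_relevance(coeffs, relevances, k):
    """Relevance-based selection: keep the k coefficients with the highest
    learned relevance (e.g., from GRLVQI), regardless of magnitude."""
    return np.argsort(relevances)[::-1][:k]
```

A large-magnitude coefficient that is common to all classes is useful for reconstruction but useless for discrimination; the two selectors can therefore return disjoint index sets, which is the point of moving the selection criterion from magnitude to learned relevance.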
Keywords/Search Tags: Feature, Wavelet, Classification, Model, Spectral, GRLVQ, Performance