
Research On Deep Feature-based Methods For Optical Remote Sensing Image Classification And Retrieval

Posted on: 2022-04-25    Degree: Doctor    Type: Dissertation
Country: China    Candidate: W W Song    Full Text: PDF
GTID: 1482306731966669    Subject: Control Science and Engineering
Abstract/Summary:
With the continuous development of earth observation technology and imaging equipment, remote sensing is entering a new era, and the acquired optical remote sensing images have improved dramatically in both resolution and volume. Because of their rich spectral and spatial information, remote sensing data are widely used in military reconnaissance, disaster detection, environmental monitoring, precision agriculture, and land planning. Ground object classification and scene retrieval are two key tasks in extracting information from remote sensing data and are important research topics in optical remote sensing image interpretation. However, designing effective feature extraction methods to represent complex image content remains a research hotspot and a difficult problem in the remote sensing field. In recent years, deep learning has attracted widespread attention in computer vision for its powerful feature extraction capability. Unlike traditional feature extraction methods, deep learning uses deep neural networks to extract discriminative semantic features that represent image content. Building on its success in natural image processing, deep learning has also been introduced into remote sensing, greatly advancing related tasks such as remote sensing image classification and retrieval. In this thesis, convolutional neural networks are combined with the spectral and spatial characteristics of remote sensing images to study hyperspectral image classification and high-spatial-resolution remote sensing image retrieval. The specific research contents are as follows:

(1) Deep learning-based classification methods often use the semantic labels of images as supervision to train neural networks. However, because labeled samples are scarce in hyperspectral images, the features extracted by such methods have difficulty distinguishing complex classes. To address this problem, a hyperspectral image classification method based on deep metric learning is proposed. First, a two-branch convolutional neural network is constructed to extract the deep features of a pair of samples while preserving the similarity structure of the original sample space: the feature distance between samples of the same category should be as small as possible, while the feature distance between samples of different categories should be sufficiently large. To compute feature distances quickly, a hash learning mechanism maps the high-dimensional real-valued features into low-dimensional binary codes. Finally, the learned similarity-preserving deep features are fed into a support vector machine (SVM) classifier to complete the pixel-wise classification of hyperspectral remote sensing images. Experimental results on the Indian Pines, University of Pavia, and Salinas data sets show that this method achieves better classification results.

(2) The performance of traditional neural networks is restricted by the number of network layers, so they cannot extract deep features that characterize image content effectively. This thesis proposes a deep feature fusion network (DFFN) for hyperspectral remote sensing image classification. First, a deep residual network is used as the backbone to extract the spectral-spatial features of hyperspectral remote sensing images; its skip connections ease the performance degradation caused by a sharp increase in the number of network layers. In addition, features from different levels are strongly correlated and complementary: the shallow layers mainly extract color and edge features, the middle layers focus on structural information, and the high-level features capture semantic information. A deep feature fusion mechanism is therefore designed to make full use of the information across levels and improve the representation ability of the features. Classification results on three public hyperspectral remote sensing data sets, Indian Pines, University of Pavia, and Salinas, show that this method outperforms the compared methods in both visual and quantitative comparisons, which validates its effectiveness.
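To make the fusion idea in (2) concrete, the following is a minimal PyTorch sketch of combining shallow, middle, and deep residual features before classification. It is not the thesis implementation: the block sizes, the 1x1-convolution alignment, fusion by element-wise summation, and the patch and class dimensions (200 bands, 16 classes) are illustrative assumptions.

# Minimal sketch of multi-level feature fusion for hyperspectral patches.
# NOT the thesis implementation: block sizes, 1x1-conv alignment, summation
# fusion, and the 200-band / 16-class dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Skip connection eases degradation as the network gets deeper.
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))


class FusionNet(nn.Module):
    def __init__(self, in_bands=200, channels=64, num_classes=16):
        super().__init__()
        self.stem = nn.Conv2d(in_bands, channels, 3, padding=1)
        self.stage1 = ResidualBlock(channels)   # shallow: edges / local texture
        self.stage2 = ResidualBlock(channels)   # middle: structural information
        self.stage3 = ResidualBlock(channels)   # deep: semantic information
        self.align = nn.ModuleList([nn.Conv2d(channels, channels, 1) for _ in range(3)])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, x):
        x = self.stem(x)
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        # Fuse features from different levels so complementary information is kept.
        fused = sum(align(f) for align, f in zip(self.align, (f1, f2, f3)))
        return self.fc(self.pool(fused).flatten(1))


logits = FusionNet()(torch.randn(4, 200, 7, 7))  # 4 patches, 200 bands, 7x7 window

The key design point illustrated here is that each level's feature map is kept and re-combined before the classifier, rather than using only the final layer's output.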
(3) Current retrieval methods cannot effectively obtain the semantic labels of retrieved images, which limits further analysis and processing of the images. This thesis proposes a unified framework based on a deep hash convolutional neural network (DHCNN) for high-spatial-resolution remote sensing image retrieval and classification, which exploits the respective advantages of deep neural networks and hash learning. On the one hand, a pre-trained convolutional neural network extracts high-dimensional deep features to represent image content; on the other hand, a hash layer converts the high-dimensional real-valued features into low-dimensional hash codes so that feature distances can be computed quickly. The method also combines the similarity between samples with the semantic information of each sample to jointly guide network training, so that samples that are similar in the original space are mapped to nearby positions in Hamming space, while dissimilar samples are mapped as far apart as possible. Experimental results on three high-spatial-resolution remote sensing data sets, the University of California, Merced dataset (UCMD), WHU-RS, and the Aerial Image Dataset (AID), demonstrate that the method obtains satisfactory retrieval and classification results simultaneously.

(4) Current symmetric deep hash networks do not use supervised information effectively for large-scale image retrieval. This thesis therefore proposes an asymmetric hash code learning method for high-spatial-resolution remote sensing image scene retrieval, which generates the hash codes of query images and database images in an asymmetric manner. Specifically, the hash codes of query images are generated by a trained deep network, while the hash codes of database images are obtained directly by solving a designed objective function. Database images therefore do not need to pass through the feed-forward operation of the deep network, which significantly improves the efficiency of hash code generation. The method also integrates the semantic and similarity information of the samples to guide network training, further improving the representation ability of the features. Experimental results on the UCMD, WHU-RS, and AID data sets show that the method is superior to current symmetric deep hash network approaches in both retrieval performance and efficiency.
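As an illustration of the asymmetric scheme in (4), the sketch below shows only the retrieval step: the query image passes through a deep hash network, while the database codes are assumed to have been computed beforehand (in the thesis, by directly solving an objective function). The ResNet-18 backbone, the 64-bit code length, and all variable names are assumptions made for illustration, not the thesis design.

# Minimal sketch of the asymmetric retrieval step: only the query passes
# through the deep network; database codes are assumed to be precomputed.
# Backbone, code length, and names are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models


class QueryHashNet(nn.Module):
    def __init__(self, code_bits=64):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()            # keep the 512-d deep features
        self.backbone = backbone
        self.hash_layer = nn.Linear(512, code_bits)

    def forward(self, x):
        # tanh keeps activations in (-1, 1) during training; sign() binarizes.
        return torch.tanh(self.hash_layer(self.backbone(x)))


def retrieve(query_img, db_codes, net, top_k=10):
    """Rank database items by Hamming distance to the query code."""
    with torch.no_grad():
        q = torch.sign(net(query_img.unsqueeze(0)))       # (1, bits), values in {-1, +1}
    # For {-1, +1} codes: hamming = (bits - q . b) / 2, computed as a matrix product.
    hamming = (db_codes.shape[1] - q @ db_codes.t()) / 2  # (1, N)
    return hamming.argsort(dim=1)[0, :top_k]


net = QueryHashNet()
db_codes = torch.sign(torch.randn(1000, 64))   # stand-in for precomputed database codes
query = torch.randn(3, 224, 224)               # a single RGB scene image
top_indices = retrieve(query, db_codes, net)

Because the database side never runs the network forward pass at query time, the cost of updating or extending the database reduces to recomputing its binary codes, which matches the efficiency argument made for the asymmetric design.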
Keywords/Search Tags: Hyperspectral Remote Sensing Images, High-Spatial-Resolution Remote Sensing Images, Ground Target Classification, Scene Retrieval, Deep Learning, Convolutional Neural Networks, Hash Learning, Feature Extraction, Support Vector Machine