
Research On Multi-Label Remote Sensing Image Classification

Posted on: 2023-01-05    Degree: Doctor    Type: Dissertation
Country: China    Candidate: D Lin    Full Text: PDF
GTID: 1522307031978209    Subject: Software engineering
Abstract/Summary:
Remote sensing technology uses long-distance, non-contact detection to collect the electromagnetic radiation information of ground objects, and it has been widely applied in practical tasks such as meteorological observation, resource surveys, and urban planning. Remote sensing images are images that record the electromagnetic signals of ground objects; they contain rich, concrete information about the target area and intuitively reflect the distribution of ground objects. With the rapid development of terminal detection equipment and remote transmission technology, the volume of remote sensing imagery has grown rapidly and its spatial resolution has steadily improved. Automatically parsing such information-rich remote sensing images therefore has substantial research and practical value.

Multi-label remote sensing image classification is a fundamental research problem in remote sensing image analysis. Through fine-grained analysis of an image, it produces multiple labels that supply basic semantic information for subsequent remote sensing analysis tasks, and it has therefore received extensive attention from both academia and industry. However, the characteristics of remote sensing images pose serious challenges to the design of multi-label classification algorithms. The main problems are: label correlation information is limited, image scales vary widely, target features are sparse, and image annotation is costly. Addressing these four problems, this dissertation studies multi-label remote sensing image classification. The main contributions are as follows:

1. To address the problem of limited label correlation information, a knowledge-enhanced label semantic representation method is proposed. Existing methods represent label correlations through numerical statistics within the dataset, which yields insufficient label semantic correlation features. The proposed method extracts the explicit and implicit semantic correlations between labels from a common-sense knowledge graph and constructs a label concept graph to express these correlations. In addition, a graph convolutional network encoder is proposed to generate label features: label semantic features are extracted from the label concept graph by the designed semantic attention and label attention mechanisms. Experiments show that the algorithm enhances the label feature representation and improves the accuracy of multi-label remote sensing image classification.

2. To address the problem of scale differences among remote sensing images, an image feature extraction method based on multi-layer fusion is proposed. Existing methods use pre-trained convolutional neural networks to generate image feature vectors and cannot extract multi-scale image features. In this method, a spatial pyramid convolution model is designed to generate multi-scale feature representations of remote sensing images, and three feature fusion mechanisms are designed to map the multi-scale features into the same vector space and construct the final image feature (a minimal sketch of this idea follows below). The experimental results show that the multi-layer fusion method effectively extracts multi-scale image features and improves model accuracy.
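The abstract does not give implementation details for the multi-scale fusion; the following is a minimal PyTorch sketch of one plausible pooling-and-fusion scheme, offered only as an illustration. The class name PyramidPoolFusion, the pooling scales (1, 2, 4), and the 512-dimensional output are assumptions, not the author's actual design.

```python
import torch
import torch.nn as nn

class PyramidPoolFusion(nn.Module):
    """Illustrative sketch (not the dissertation's model): pool a backbone
    feature map at several spatial scales and fuse the pooled vectors into
    a single image representation by concatenation and projection."""

    def __init__(self, channels: int, scales=(1, 2, 4), out_dim: int = 512):
        super().__init__()
        # one adaptive average-pooling branch per spatial scale
        self.pools = nn.ModuleList([nn.AdaptiveAvgPool2d(s) for s in scales])
        fused_dim = channels * sum(s * s for s in scales)
        # concatenation followed by a linear projection as the fusion step
        self.proj = nn.Linear(fused_dim, out_dim)

    def forward(self, fmap: torch.Tensor) -> torch.Tensor:
        # fmap: (B, C, H, W) feature map from a pre-trained CNN backbone
        parts = [pool(fmap).flatten(1) for pool in self.pools]
        return self.proj(torch.cat(parts, dim=1))

# usage: fuse a ResNet-style 2048-channel feature map into one 512-d vector
image_feature = PyramidPoolFusion(channels=2048)(torch.randn(8, 2048, 14, 14))
```

The dissertation's three fusion mechanisms are presumably more elaborate; the sketch only shows the basic idea of mapping multi-scale features into a common vector space.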
3. To address the problem of sparse targets in remote sensing images, a semantic concept decoupling method based on contrastive learning is proposed. Existing methods predict labels by computing the overall similarity between label and image feature vectors, which makes it difficult to extract the sparse and scattered multi-target features in an image. The proposed method decouples image features so that fine-grained image features corresponding to each semantic concept are extracted from the remote sensing image. At the same time, a training objective based on contrastive learning is defined: augmented image samples are constructed for self-supervised learning, which guides the model to extract effective and distinguishable image features for each label. The experimental results verify the effectiveness of the image feature decoupling module and the contrastive learning module from different perspectives.

4. To address the high cost of annotating remote sensing data, a model transfer training method based on domain adaptation is proposed. Existing remote sensing tasks lack sufficient annotated images to meet the training needs of deep learning models. This method introduces a domain adaptation strategy into multi-label remote sensing image classification to learn domain knowledge from a label-rich dataset (the source domain) and transfer it to a label-scarce dataset (the target domain). A domain classifier based on gradient reversal is designed to reduce the difference between the domain feature distributions (see the sketch after this paragraph). Experimental results on three datasets show that the method transfers knowledge from the source domain to new scenarios and significantly improves classification performance on the target domain tasks.
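The gradient reversal layer named in contribution 4 is a standard construction from adversarial domain adaptation (DANN-style training); a minimal PyTorch sketch is shown below. The 512-dimensional feature size, the two-layer domain classifier, and the lambda value are illustrative assumptions rather than the dissertation's exact configuration.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity in the forward pass, gradient
    multiplied by -lambda in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # reverse (and scale) the gradient flowing back to the feature extractor
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Hypothetical domain classifier head; layer sizes are assumptions.
domain_classifier = nn.Sequential(
    nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 2)
)

# Image features from both source and target domains pass through the
# reversal layer before domain classification, so minimizing the domain
# loss drives the feature extractor toward domain-invariant features.
features = torch.randn(16, 512, requires_grad=True)  # placeholder features
domain_logits = domain_classifier(grad_reverse(features, lambd=0.5))
```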
Keywords/Search Tags:Remote Sensing Image Classification, Multi-label Classification, Graph Convolutional Neural Networks, Convolutional Neural Networks