
Research On The Methods Of Automatic Semantic Annotation For Remote Sensing Images

Posted on: 2018-04-14
Degree: Master
Type: Thesis
Country: China
Candidate: Q Q Xu
Full Text: PDF
GTID: 2348330536477346
Subject: Computer Science and Technology
Abstract/Summary:
With remote sensing technology advancing toward higher resolution and wider coverage, the volume of remote sensing image data is growing steadily, and management and understanding capabilities must keep pace with the speed of acquisition. Automatic semantic annotation of remote sensing images is the key to managing and understanding large-scale remote sensing image data: describing the semantic content of the images with information technology helps users grasp image content intuitively and enables efficient management of massive image collections.

Existing methods face the following challenges in the annotation process. First, remote sensing images contain complex spatial structures and rich geographic features. Many studies rely on a single feature, which yields poor precision. Fusing multiple image features can represent the content of a remote sensing image more accurately, but not every feature dimension is strongly correlated with annotation accuracy, and weakly related dimensions degrade the result. Second, the more features are fused, the higher the feature dimensionality. As remote sensing image data grows rapidly, traditional semantic annotation cannot mine the regularities hidden in massive high-dimensional features, so the low-level features cannot accurately reflect high-level semantic concepts and accuracy is limited. Third, marine remote sensing images, a typical kind of remote sensing image, exhibit pronounced target sparseness: in a large-scale marine image, the key information often occupies only a small part of the whole scene. Moreover, an object and its corresponding semantic concept may differ under different observation scales. Traditional annotation methods therefore cannot accurately express the content of marine remote sensing images and annotate them inefficiently.

This thesis addresses these problems in automatic annotation of remote sensing images; the work comprises three parts.

First, because a single feature cannot accurately describe image content, a multi-feature fusion method is adopted. Since different feature dimensions of a remote sensing image contribute differently to annotation, a remote sensing image annotation method based on weighted feature fusion is proposed. Without segmenting the image, color features are extracted with color moments in HSV space, texture features with the gray-level co-occurrence matrix, and shape features with SIFT. The standard deviation of each dimension within each class is computed to judge the stability of that dimension; from it a weight coefficient is derived, a weight matrix is assembled, and the extracted visual features are weighted accordingly. The method combines the color, texture, and shape features of the image to improve annotation accuracy. Automatic annotation experiments with a support vector machine on public remote sensing image datasets show that the proposed method is more accurate than annotation using any single feature alone.
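As a rough illustration of this pipeline, the sketch below (Python) computes HSV color moments, weights each fused feature dimension by the inverse of its mean within-class standard deviation, and trains an SVM on the weighted features. The inverse-standard-deviation weighting, the SVM settings, and the helper names are assumptions for illustration; the thesis does not give the exact formulas, and the GLCM texture and SIFT shape extraction are assumed to be done elsewhere.

```python
# Hedged sketch of weighted multi-feature fusion + SVM annotation.
# Assumptions (not specified in the abstract): weights are the inverse of the
# mean per-class standard deviation of each dimension; GLCM and SIFT features
# are extracted separately and concatenated into X together with the color moments.
import numpy as np
from sklearn.svm import SVC

def hsv_color_moments(img_hsv):
    """First three color moments (mean, std, third central moment) per HSV channel -> 9-D."""
    feats = []
    for c in range(3):
        ch = img_hsv[:, :, c].astype(np.float64).ravel()
        mean, std = ch.mean(), ch.std()
        skew = np.cbrt(((ch - mean) ** 3).mean())
        feats.extend([mean, std, skew])
    return np.array(feats)

def stability_weights(X, y):
    """Weight each feature dimension by the inverse of its mean within-class
    standard deviation, so that stable dimensions get larger weights."""
    stds = np.stack([X[y == c].std(axis=0) for c in np.unique(y)])
    mean_std = stds.mean(axis=0) + 1e-8
    w = 1.0 / mean_std
    return w / w.sum()          # normalized weights (diagonal of the weight matrix)

def train_weighted_svm(X, y):
    """Fit an SVM on the weighted fusion features."""
    w = stability_weights(X, y)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(X * w, y)
    return clf, w

# Usage: X is an (n_images, d) matrix of concatenated color/texture/shape
# features, y the semantic class labels of the training images.
```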
Second, the more features are fused, the higher the dimensionality of the feature data and the lower the precision of traditional annotation. An automatic annotation model based on deep learning is therefore built, taking the fused features as input, to improve the accuracy of large-scale remote sensing image annotation. The first layer of the model uses an improved restricted Boltzmann machine (RBM) adapted to the optimally weighted high-dimensional fused visual features, which serve as the bottom input of the model; the remaining layers are standard restricted Boltzmann machines. The fused high-dimensional visual features are transformed layer by layer, extracting features from low level to high level and improving large-scale annotation accuracy. Compared with a traditional neural network and with the weighted fusion method alone, the experiments show that multi-feature fusion based on a deep belief network achieves better precision.

Third, the sparseness of target information in marine remote sensing images limits annotation precision, so a multi-instance method based on deep belief networks is proposed. Exploiting the multi-scale character of marine remote sensing images, the wavelet transform is used to generate representations of an image at different resolutions. Coarse-grained segmentation separates the background region from the object region, and the different parts of the image are represented as multiple instances. The similarity between instances at the same scale is computed, and adaptive segmentation is completed with a given threshold. The instances at each scale are fed into the learning model to produce the semantic annotation, and relationships between annotation words, including co-occurrence and opposition, are exploited to further improve precision. The proposed method improves the accuracy of automatic annotation for marine remote sensing images.
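A minimal sketch of a DBN-style annotator over the fused features is shown below, using scikit-learn. The thesis's "improved RBM" for real-valued fused features is not reproduced here; a standard BernoulliRBM applied to min-max scaled features is used as a stand-in, and the layer sizes and training settings are illustrative rather than those of the thesis.

```python
# Hedged sketch of a DBN-style annotator: stacked RBMs pretrained layer by
# layer on the weighted fusion features, with a supervised classifier on top.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

def build_dbn_annotator(hidden_sizes=(512, 256, 128)):
    steps = [("scale", MinMaxScaler())]       # map fused features into [0, 1]
    for i, h in enumerate(hidden_sizes):
        steps.append((f"rbm{i}", BernoulliRBM(n_components=h,
                                              learning_rate=0.05,
                                              n_iter=20)))
    steps.append(("clf", LogisticRegression(max_iter=1000)))
    return Pipeline(steps)

# Usage: fit on the weighted fusion features X_w and semantic labels y.
# model = build_dbn_annotator()
# model.fit(X_w, y)
# predicted_labels = model.predict(X_w_test)
```

For the multi-scale, multi-instance step on marine images, the following sketch builds a wavelet pyramid with PyWavelets, cuts each scale into patch instances, and drops instances that are nearly identical to ones already kept, as a crude stand-in for the threshold-based adaptive segmentation. The patch size, the normalized-correlation similarity, and the threshold value are assumptions, not values taken from the thesis.

```python
# Hedged sketch of the multi-scale, multi-instance step for marine images.
import numpy as np
import pywt

def wavelet_scales(img, levels=3, wavelet="haar"):
    """Approximation image at each decomposition level (coarse-grained views)."""
    scales, current = [], img.astype(np.float64)
    for _ in range(levels):
        current, _ = pywt.dwt2(current, wavelet)   # keep only the LL band
        scales.append(current)
    return scales

def to_instances(scale_img, patch=32):
    """Split one scale into non-overlapping patch instances."""
    h, w = scale_img.shape
    return [scale_img[r:r + patch, c:c + patch]
            for r in range(0, h - patch + 1, patch)
            for c in range(0, w - patch + 1, patch)]

def merge_similar(instances, threshold=0.9):
    """Drop instances whose normalized correlation with an already kept
    instance exceeds the threshold (a crude adaptive-segmentation stand-in)."""
    kept = []
    for inst in instances:
        v = (inst - inst.mean()).ravel()
        if all(abs(np.dot(v, (k - k.mean()).ravel())
                   / (np.linalg.norm(v) * np.linalg.norm((k - k.mean()).ravel()) + 1e-8))
               < threshold for k in kept):
            kept.append(inst)
    return kept
```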
Keywords/Search Tags: remote sensing images, automatic annotation, multi-feature fusion, deep belief networks, multi-instance