
Research On High Spatial Resolution Remote Sensing Urban Scene Classification Using Spatial Relationship Of Objects And Multi-source Geographic Data

Posted on: 2020-03-14  Degree: Master  Type: Thesis
Country: China  Candidate: S Q Wu  Full Text: PDF
GTID: 2370330590476764  Subject: Photogrammetry and Remote Sensing
Abstract/Summary:
Urban ground objects aggregate spatially, following specific spatial relationships, into urban scenes with common functions, such as commercial, residential, and industrial areas. Reliable urban scene maps are essential for urban planning and urban analysis, because the spatial distribution of scenes reflects the complex environment of megacities under the combined effects of nature and socioeconomics. In recent years, very high resolution (VHR) remote sensing imagery has been widely used to map urban scenes at the areal level. However, because of the large semantic gap between low-level features and high-level semantics, scene understanding remains a challenging task for high-resolution satellite images. Most existing scene classification methods, such as the bag-of-visual-words model, feature coding, topic models, and neural networks, suffer from two problems: 1) they cannot recognize the semantic and spatial relationships between the objects inside a scene; and 2) their results cannot easily be applied to actual urban analysis, owing to the inconsistency between the classification system of the results and that of practical land-use datasets, as well as the regular grid segmentation of the scenes.

To address these challenges in VHR remote sensing urban scene classification, this thesis develops a scene classification method based on the spatial context relationships of multiple objects. The method combines the co-occurrence relations and position relations of the objects, solves the problem that the traditional force histogram cannot describe the position relations among more than two objects, and extends the application scope of Fisher kernel coding. In addition, by combining the rich socio-economic semantic information contained in various forms of multi-source data with high resolution remote sensing images, the mosaic artifacts in the scene classification results, and the difficulty of putting them into practical
use caused by the inconsistent classification systems, can be resolved. The main work of this thesis can be summarized in the following three points:

(1) The basic methods and theories of urban scene classification. The thesis reviews scene classification methods based on high resolution remote sensing images, and introduces object-oriented classification methods, semantic object relationship mining methods, and common classifiers. Scene classification methods based on multi-source data are also introduced, with OSM data and POI data as detailed examples. In addition, the fusion of high resolution remote sensing images and multi-source data is summarized, with an emphasis on probabilistic topic models and deep learning.

(2) A bottom-up scene understanding framework based on a multi-object spatial context relationship model. In the proposed method, the co-occurrence relation features are modeled by Fisher kernel coding of the objects, while the position relation features are represented by a multi-object force histogram. The multi-object force histogram evolves from the force histogram between pairwise objects: it is invariant to rotation and mirroring, and it captures the spatial distribution of a scene by computing the forces acting between multiple land-cover objects. Because prior knowledge about the objects is exploited, the proposed method can explain the objects and their relations, allowing the scene to be understood.

(3) A unified point, line, and polygon semantic object mapping framework that combines high resolution remote sensing images with multi-source geospatial data. In this framework, the POI data, the OSM data, and the remote sensing image data serve as point, line, and polygon data, respectively, and the three types of data are integrated to recognize the scenes. Specifically, for point objects, points of interest (POIs) are reclassified into subcategories. For line
objects, OpenStreetMap (OSM) line data are first used to supply the boundaries of the land-use mapping units for the POIs and the high resolution remote sensing images, forming urban land parcels at the block scale. For polygon objects, a multi-scale sampling and continuous fine-tuning approach is then introduced to obtain the categories of the VHR images within the land parcels, describing the global attributes at the polygon scale. To unify the classification system with actual needs, a rule-based bag-of-words (BoW) model is applied to integrate the categories of the POIs and the VHR images into urban land-use classification standards, yielding the actual scene categories of the megacities. Furthermore, the framework is scalable, owing to the unsupervised scene classification and the flexibility of the scene classification standards, and can therefore be adapted to different urban needs.
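The multi-object force histogram described in point (2) extends the classical pairwise force histogram. The thesis's multi-object formulation is not reproduced here, but the underlying pairwise idea can be sketched as follows: for every pixel pair drawn from two object masks, a gravitational-style force is accumulated into the bin of the pair's direction. All names in this sketch are illustrative, and the brute-force pairwise loop is a simplification of the actual F-histogram computation.

```python
import numpy as np

def force_histogram(mask_a, mask_b, n_bins=16, r=2.0):
    """Pairwise force histogram (simplified sketch).

    For every pixel pair (a, b) with a in object A and b in object B,
    accumulate a force 1/d**r into the bin of the direction from a to b.
    Returns an L1-normalised histogram over n_bins directions, which
    summarises the relative position of object B with respect to A.
    """
    ya, xa = np.nonzero(mask_a)
    yb, xb = np.nonzero(mask_b)
    # all pairwise displacement vectors via broadcasting: (|A|, |B|)
    dx = xb[None, :] - xa[:, None]
    dy = yb[None, :] - ya[:, None]
    d = np.hypot(dx, dy)
    valid = d > 0                       # ignore coincident pixels
    force = np.zeros_like(d)
    force[valid] = 1.0 / d[valid] ** r  # closer pairs exert more force
    theta = np.arctan2(dy, dx)          # direction in (-pi, pi]
    bins = ((theta + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=force.ravel(),
                       minlength=n_bins)
    s = hist.sum()
    return hist / s if s > 0 else hist
```

With 16 bins, an object B lying directly to the right of A (direction 0 rad) concentrates its mass in the bin covering angle zero, which is what makes the descriptor sensitive to relative position.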
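The co-occurrence features in point (2) rely on Fisher kernel coding of object descriptors. A minimal sketch of the standard Fisher-vector encoding (mean-gradient terms only, diagonal-covariance GMM) is given below; the GMM parameters are assumed to have been fitted beforehand, and the function name and the restriction to mean gradients are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def fisher_vector(X, weights, means, sigmas):
    """Fisher-vector encoding sketch (mean-gradient terms only).

    X: (T, D) local object descriptors.  weights (K,), means (K, D),
    and sigmas (K, D) are diagonal-GMM parameters.  Returns the
    (K*D,) normalised gradient of the log-likelihood with respect to
    the component means, the core of Fisher kernel coding.
    """
    T, D = X.shape
    # log N(x_t | mu_k, diag(sigma_k^2)) for every (t, k)
    diff = X[:, None, :] - means[None, :, :]          # (T, K, D)
    z = diff / sigmas[None, :, :]
    log_pdf = (-0.5 * (z ** 2).sum(-1)
               - np.log(sigmas).sum(-1)[None, :]
               - 0.5 * D * np.log(2 * np.pi))         # (T, K)
    # soft-assignment posteriors gamma_t(k), computed stably in log space
    log_post = np.log(weights)[None, :] + log_pdf
    log_post -= log_post.max(axis=1, keepdims=True)
    gamma = np.exp(log_post)
    gamma /= gamma.sum(axis=1, keepdims=True)         # (T, K)
    # gradient w.r.t. the means, with the standard FV normalisation
    fv = (gamma[:, :, None] * z).sum(axis=0) / (T * np.sqrt(weights)[:, None])
    return fv.ravel()
```

Descriptors that sit exactly on a component mean contribute nothing, so the vector measures how the objects in a scene deviate from the "average" object vocabulary.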
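The rule-based BoW step in point (3) can be sketched as a rule table mapping POI subcategories to land-use classes, with each parcel taking the majority class of its POIs. The rule entries and the tie-breaking `"mixed"` label below are hypothetical placeholders; the thesis maps categories to Chinese urban land-use classification standards, which are not reproduced here.

```python
from collections import Counter

# Hypothetical rule table: POI subcategory -> urban land-use class.
# The actual thesis rules target official land-use standards; these
# entries are illustrative assumptions only.
POI_RULES = {
    "restaurant": "commercial",
    "shopping_mall": "commercial",
    "apartment": "residential",
    "dormitory": "residential",
    "factory": "industrial",
    "warehouse": "industrial",
}

def classify_parcel(poi_subcategories, rules=POI_RULES):
    """Rule-based bag-of-words labelling of one land parcel.

    Each POI inside the parcel is mapped to a land-use class through
    the rule table; the parcel takes the majority class, and 'mixed'
    on a tie or when no POI matches any rule.
    """
    votes = Counter(rules[c] for c in poi_subcategories if c in rules)
    if not votes:
        return "mixed"
    (top, n), *rest = votes.most_common()
    if rest and rest[0][1] == n:
        return "mixed"
    return top
```

Because the rule table is just data, swapping in a different classification standard changes the output categories without touching the pipeline, which is one way to read the abstract's claim that the framework adapts to different urban needs.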
Keywords/Search Tags: High resolution remote sensing images, multi-source data, semantic objects, urban scene classification, spatial relations, bag of words, deep learning