The urban landscape of the Beijing Core Area is rich in the historical and cultural characteristics of the city and has unparalleled historical, cultural, and social value. However, with the continuous advance of urbanization and modernization, the urban population keeps growing, and problems with the living environment and with the convergence of the urban landscape towards that of other cities are gradually coming to the fore in the Beijing Core Area, affecting both residents' quality of life and the overall urban landscape. There is therefore an urgent need for a detailed investigation of each urban landscape element in the Beijing Core Area, so as to provide data support for the conservation and renewal of the urban landscape.

With the continuous development of sensor technology and digitalization, it has become possible to acquire large volumes of high-resolution street view imagery, which offers new opportunities for urban landscape research. Street view images record urban street-level scenes in detail and systematically from the pedestrian's perspective, and have the advantages of wide coverage, large data volume, and low acquisition cost. However, the Beijing Core Area contains complex environments and diverse scenes, and how to use street view images to identify its various urban landscape elements automatically and accurately remains an urgent problem. Moreover, existing methods for extracting landscape elements from street view images mostly represent the distribution of those elements as discrete points on a planar map; they cannot give an efficient, accurate, and continuous description of the distribution of landscape elements on the landscape façade and therefore cannot meet the demands of overall landscape analysis.

Against this background, this paper studies methods for extracting landscape features in the Beijing Core Area from three aspects: semantic identification of landscape features in the Core Area, generation of landscape façades, and extraction of the overall landscape of the street scene. The main research contents and innovations of this paper are as follows:

(1) The paper summarizes current means of extracting landscape features and methods for modelling and representing street scenes, and focuses on the shortcomings of existing methods for extracting urban landscape features from street view images.

(2) To address the serious shortage of annotated samples for the Beijing Core Area, a street view image dataset of the Beijing Core Area was assembled. Based on a thorough analysis of the conditions in the Beijing Core Area and of existing datasets, collection routes for street view images were planned, and the collected images were annotated in detail at the pixel level.

(3) To address the complexity of the environment in the Beijing Core Area and the inability of current methods to accurately identify multiple types of landscape features from street view images, a semantic recognition model for the urban landscape based on a deep learning semantic segmentation algorithm is constructed, together with a transfer learning-based model training method. The model introduces a polarized self-attention module and an atrous spatial pyramid pooling module, enabling it to accurately identify the varied and complex urban landscape features of the Beijing Core Area. In the training phase, a transfer learning method is designed so that the network learns the features of the historic landscape from the information contained in the street view images of existing datasets, which increases training efficiency, makes maximum use of existing data, reduces the reliance on large numbers of training samples, and improves the recognition accuracy of the semantic recognition model for landscape features.
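As an illustration of the kind of module referred to in (3), the following is a minimal sketch of an atrous spatial pyramid pooling block in PyTorch. The channel sizes, dilation rates, and layer layout are illustrative assumptions rather than the exact configuration of the model described here; the polarized self-attention module and the full segmentation network are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convolutions plus image-level pooling."""

    def __init__(self, in_ch: int, out_ch: int, rates=(6, 12, 18)):
        super().__init__()
        # 1x1 branch plus one 3x3 atrous (dilated) branch per dilation rate
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1, bias=False)]
            + [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False) for r in rates]
        )
        # Image-level pooling branch captures global context
        self.pool = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(in_ch, out_ch, 1, bias=False))
        # Fuse the concatenated branch outputs back to out_ch channels
        self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1, bias=False)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [branch(x) for branch in self.branches]
        feats.append(F.interpolate(self.pool(x), size=(h, w), mode="bilinear", align_corners=False))
        return self.project(torch.cat(feats, dim=1))

In the transfer learning setting described in (3), such a segmentation network would typically be initialized with weights learned on an existing street-scene dataset and then fine-tuned on the annotated Beijing Core Area street view images.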
(4) Because of the narrow alleyways of the Beijing Core Area, existing image-based methods usually require the design and planning of multiple acquisition routes, making it difficult to reconstruct façade information for street scenes efficiently, while methods based on LiDAR point clouds cannot quickly collect three-dimensional information about the scene. To address these problems, this paper establishes a landscape façade generation model based on neural radiance fields. By implicitly encoding the 3D scene in the weights of a neural network, this method reduces the storage volume of the model and can directly render high-quality orthorectified images of the landscape façade. In addition, the method is optimized for the scenes of the Beijing Core Area, using panoramic images as input so that street view information can be captured quickly and comprehensively in the narrow scenes of the historic alleyways. In the concrete implementation of the model, in order to reconstruct a scene from multiple panoramic images, a panoramic imaging model suitable for multiple panoramas is built on the existing equirectangular (ERP) projection method for a single panorama by introducing the panoramic camera coordinates and forward direction angles. To further reduce the problems caused by the distortions of the ERP projection, this paper introduces a distortion-aware ray sampling method that improves the efficiency and quality of scene understanding and reconstruction.

(5) To address the problem of efficient, accurate, and continuous extraction of urban landscape features in the Beijing Core Area, the extraction of landscape features from the overall façade of a street is explored by combining the landscape semantic recognition model with the landscape façade reconstruction model.
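To make the panoramic imaging model and the distortion-aware sampling in (4) concrete, the sketch below generates per-pixel ray origins and directions from an equirectangular panorama given a camera position and forward direction angle, and then draws training rays with a cosine-latitude (solid-angle) weighting. The axis conventions, function names, and weighting scheme are assumptions for illustration and are not taken from this paper.

import numpy as np

def erp_rays(height, width, cam_pos, yaw):
    """Per-pixel ray origins, unit directions, and latitudes for one ERP panorama.

    cam_pos: (3,) camera position in world coordinates; yaw: forward direction
    angle of the panorama about the vertical axis, in radians.
    """
    u = (np.arange(width) + 0.5) / width       # horizontal pixel centres in [0, 1)
    v = (np.arange(height) + 0.5) / height     # vertical pixel centres in [0, 1)
    lon = (u - 0.5) * 2.0 * np.pi + yaw        # longitude, offset by the forward angle
    lat = (0.5 - v) * np.pi                    # latitude
    lon, lat = np.meshgrid(lon, lat)           # both (H, W)
    # Spherical to Cartesian with x east, y up, z forward (assumed convention)
    dirs = np.stack([np.cos(lat) * np.sin(lon), np.sin(lat), np.cos(lat) * np.cos(lon)], axis=-1)
    origins = np.broadcast_to(np.asarray(cam_pos, dtype=float), dirs.shape)
    return origins, dirs, lat

def sample_rays(origins, dirs, lat, n, rng=None):
    """Distortion-aware sampling: weight pixels by cos(latitude) so the polar
    regions, which the ERP projection over-represents, are not over-sampled."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.cos(lat).ravel()
    idx = rng.choice(w.size, size=n, replace=False, p=w / w.sum())
    return origins.reshape(-1, 3)[idx], dirs.reshape(-1, 3)[idx]

The sampled rays would be consumed by the radiance field during training; rendering the trained field then yields the orthorectified façade images described above.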