High-quality 3D city models are the basic infrastructure for smart cities and a wide range of related applications. With recent advances in photogrammetric technology, it is now possible to fully automate the generation of high-quality 3D city models from oblique images. However, one major challenge that impedes model generation is moving objects, particularly cars, appearing in the stereo pairs covering cities. Moving targets introduce instability into the aerial triangulation and dense matching processes, degrading the quality of the final model. To address this problem and faithfully represent the dynamic environment of cities from instantaneous, discrete still captures of reality such as images, we propose a new pre-processing procedure for optical imagery that detects and removes problematic objects such as moving cars, ensuring accurate and precise 3D city models. The procedure also fills the vacated car positions in street images, which would otherwise appear as empty voids, in a visually plausible way. As a separate process preceding aerial triangulation and 3D extrusion, this technique eliminates the inconsistencies and distortion caused by moving objects when representing urban streetscapes, while remaining faithful to the concept of realism. This work addresses, both theoretically and practically, two major questions in image understanding and computer vision: object detection and object removal as applied to 3D modeling. The first focus of our research is therefore the design and application of a deep-learning-based object classifier to detect problematic objects (moving cars in cities) that skew stereovision during the 3D extrusion process. The second focus is the removal of those objects using a new method that combines texture synthesis and image inpainting techniques. We apply the resulting stereo pairs to 3D city generation, thereby validating the proposed methodology. Experiments on oblique images of urban areas show that the proposed method can greatly improve the 3D modeling quality of road areas.
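To make the two-stage idea concrete, the sketch below illustrates one possible detect-then-fill pre-processing step on a single image. It is not the authors' pipeline: it substitutes an off-the-shelf pretrained detector (torchvision Mask R-CNN, COCO label 3 for "car") for the paper's trained classifier and plain OpenCV inpainting for the paper's combined texture-synthesis/inpainting method; the confidence threshold, dilation size, and file names are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): detect cars with a pretrained
# detector, build a binary mask of their pixels, and fill the voids by inpainting.
import cv2
import numpy as np
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pretrained Mask R-CNN on COCO; category id 3 corresponds to "car".
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

CAR_LABEL = 3          # COCO id for "car"
SCORE_THRESHOLD = 0.7  # assumed confidence cut-off

def remove_cars(image_bgr: np.ndarray) -> np.ndarray:
    """Detect cars, union their instance masks, and inpaint the masked pixels."""
    image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        prediction = model([to_tensor(image_rgb)])[0]

    # Union of all car masks above the confidence threshold.
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    for label, score, seg in zip(prediction["labels"],
                                 prediction["scores"],
                                 prediction["masks"]):
        if label.item() == CAR_LABEL and score.item() >= SCORE_THRESHOLD:
            mask |= (seg[0].numpy() > 0.5).astype(np.uint8)

    # Dilate slightly so shadows and fringes around each car are also replaced.
    mask = cv2.dilate(mask * 255, np.ones((15, 15), np.uint8))

    # Fill the voids from the surrounding road texture (Telea inpainting here;
    # the paper's method combines texture synthesis with inpainting instead).
    return cv2.inpaint(image_bgr, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

if __name__ == "__main__":
    img = cv2.imread("oblique_tile.jpg")  # hypothetical input image tile
    cv2.imwrite("oblique_tile_clean.jpg", remove_cars(img))
```

In the workflow described above, a step like this would be applied to every image of each stereo pair before aerial triangulation and dense matching, so that moving cars no longer introduce mismatched points on road surfaces.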