
Towards Urban Three-Dimensional Modeling Using Mobile LiDAR and Images

Posted on: 2012-12-28
Degree: Ph.D
Type: Thesis
University: McGill University (Canada)
Candidate: Wang, Ruisheng
Full Text: PDF
GTID: 2458390011455925
Subject: Engineering
Abstract/Summary:
This thesis addresses the challenging problems of creating 3D photorealistic building models from mobile LiDAR and images. I focus on ground-based sensing techniques and proceed in two stages. In the first stage, I present two new methods that use either images or mobile LiDAR data alone for 3D urban modeling. The limitations of using each data source alone are discussed, which motivates the sensor-fusion research in the next phase. In the second stage, I introduce methodologies that maximize the synergy between the two modalities by fusing images and mobile LiDAR to upsample range data, with the goal of generating 3D photorealistic urban models.

There are fundamental research challenges in each stage of this work. In the first stage, I address the ill-posed problem of how much 3D information can be inferred from a single image. I propose a new model-based approach to 3D building reconstruction from single images. This method does not require the model-to-image projection and readjustment procedures that existing methods rely on, and it is more accurate than vanishing-point-based methods. On the LiDAR side, I present an automatic approach to window detection from mobile LiDAR data: a combined bottom-up and top-down scheme extracts building facades, followed by a robust window detection algorithm. In the second stage, I address the virtual sensor problem and provide an alternative solution to "NAVTEQ TRUE" technology. To do so, I first address the multi-modal registration problem. The algorithm automatically processes LiDAR data and panoramic images collected at metropolitan scale; to my knowledge, it is the first validation of mutual-information registration in a large-scale context. I then address the upsampling problem, proposing a new method that incorporates LiDAR point visibility information to upsample mobile LiDAR data using panoramic images. A new point-visibility computation, based on multi-resolution depth maps generated from a Quadrilateralized Spherical Cube mapping, is presented. For the interpolation, I present a scheme that uses ray casting with constraints from the color information in the spherical images to upsample the sparse LiDAR points. Experiments demonstrate that using the color information from the images improves the upsampling.

In summary, this thesis proposes a set of algorithms for urban 3D modeling using mobile LiDAR and images. These techniques are useful not only for real-world urban 3D modeling applications, but also as an alternative solution to "NAVTEQ TRUE".
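The registration stage described above aligns LiDAR data with panoramic imagery by maximizing mutual information between the two modalities. As a minimal sketch of that objective (not the thesis's implementation), the Python function below computes the mutual information between two co-registered grayscale renderings, e.g. a depth or intensity image rendered from the LiDAR points under a candidate pose and the corresponding camera image; the histogram binning and normalization choices are assumptions made for illustration.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two aligned grayscale images with values in [0, 1]."""
    # Joint histogram of intensity pairs, normalized to a joint distribution.
    hist_2d, _, _ = np.histogram2d(
        img_a.ravel(), img_b.ravel(), bins=bins, range=[[0, 1], [0, 1]]
    )
    pxy = hist_2d / hist_2d.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

In a registration loop, one would render the LiDAR points into the panorama's frame under each candidate pose, evaluate this score against the camera image, and keep the pose that maximizes it.

The visibility step is described as using multi-resolution depth maps built on a Quadrilateralized Spherical Cube mapping. The sketch below substitutes a single-resolution equirectangular depth buffer as a simplification, not the thesis's method: a point is treated as visible from the panorama center (assumed to be the origin of the point coordinates) if its range is close to the smallest range falling in the same pixel.

```python
import numpy as np

def visible_points(points, width=2048, height=1024, eps=0.05):
    """Boolean mask of LiDAR points not occluded from the panorama center.

    points: (N, 3) array in the panorama's coordinate frame.
    """
    r = np.linalg.norm(points, axis=1)
    lon = np.arctan2(points[:, 1], points[:, 0])            # [-pi, pi]
    lat = np.arcsin(np.clip(points[:, 2] / r, -1.0, 1.0))   # [-pi/2, pi/2]
    u = (((lon + np.pi) / (2 * np.pi)) * width).astype(int) % width
    v = np.clip(((np.pi / 2 - lat) / np.pi * height).astype(int), 0, height - 1)
    pix = v * width + u
    zbuf = np.full(width * height, np.inf)
    np.minimum.at(zbuf, pix, r)            # nearest range per pixel
    return r <= zbuf[pix] * (1.0 + eps)    # keep points near the front surface
```

Only the points flagged as visible would then be interpolated, e.g. along rays through the panorama pixels and constrained by the image colors, to densify the range data.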
Keywords/Search Tags: Mobile LiDAR, Images, LiDAR data, 3D photorealistic modeling, Urban 3D modeling