
High performance computing for massive LiDAR data processing with optimized GPU parallel programming

Posted on: 2013-03-17
Degree: M.S
Type: Thesis
University: The University of Texas at Dallas
Candidate: Yuan, Chen
Full Text: PDF
GTID:2458390008481121Subject:Geodesy
Abstract/Summary:
The recent development of LiDAR (Light Detection And Ranging) for acquiring high-density 3D laser scanning data has made it a cost-effective technology for extracting accurate terrain information at extremely high spatial resolution. However, the increased LiDAR scanning density routinely produces massive point clouds on the order of millions or even billions of points. This explosive growth in data volume presents a new challenge to traditional LiDAR processing algorithms. To meet this challenge, a parallel LiDAR processing algorithm utilizing the Compute Unified Device Architecture (CUDA) is proposed. To harness the GPU's parallelism, a hierarchical spatial decomposition of discrete LiDAR points into cells, overlapped tiles, and a global grid index is implemented to stream massive LiDAR point clouds into memory for processing. The results show that a LiDAR point cloud with tens of millions of points can be processed by a CUDA-enabled Graphics Processing Unit (GPU) with up to a thirtyfold speedup over a comparable sequential algorithm.
Keywords/Search Tags:Lidar, Processing, Data