
GPU-Based Parallel Optimization of Hyperspectral Image Pixel Unmixing

Posted on: 2015-02-23  Degree: Master  Type: Thesis
Country: China  Candidate: S Ye  Full Text: PDF
GTID: 2268330425987881  Subject: Computer application technology
Abstract/Summary:
Owing to its high spatial and spectral resolution, hyperspectral remote sensing is widely used across the Earth sciences. Within the hyperspectral image processing chain, unmixing is the key step and a major research focus. Existing unmixing algorithms are too inefficient to meet the demands of real-time processing of large volumes of remote sensing image data, whereas the GPU/CUDA architecture can deliver computing power approaching that of a computer cluster. Exploiting the high memory bandwidth and massive parallelism of the GPU to improve the efficiency of unmixing algorithms is therefore a promising research direction. In response to these issues, this thesis analyzes the imaging mechanism and linear spectral mixture model of hyperspectral remote sensing, the state of research in parallel computing, the GPGPU heterogeneous programming model, and the CUDA architecture. Building on the GPU/CUDA architecture, both traditional and sparse hyperspectral unmixing algorithms are then optimized for parallel processing.

First, after analyzing the basic principles of traditional endmember extraction algorithms, this thesis designs GPU-based parallel versions of the PPI and N-FINDR endmember extraction algorithms, exploiting the mutual independence of the per-pixel computations. In the parallel PPI algorithm, the vector projection problem is recast as a matrix multiplication, achieving up to a hundredfold speedup while preserving accuracy. The parallel N-FINDR algorithm also achieves a significant speedup through a concurrent endmember-set replacement scheme.

Second, hyperspectral unmixing algorithms based on nonnegative matrix factorization (NMF) are analyzed in depth, and techniques such as thread mapping and memory optimization are applied in the design of the parallel implementation.
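The PPI reformulation mentioned above, in which projecting every pixel onto a set of random "skewers" becomes a single matrix multiplication (a GEMM, the operation GPUs execute most efficiently), can be sketched in NumPy as a CPU stand-in. The function name and parameters below are illustrative, not taken from the thesis:

```python
import numpy as np

def ppi_scores(pixels, n_skewers=1000, seed=0):
    """Pixel Purity Index via one matrix multiplication.

    pixels: (bands, n_pixels) hyperspectral cube flattened to 2-D.
    Each random unit "skewer" is projected against every pixel at
    once; the pixels attaining the extreme (min/max) projections
    have their purity score incremented.
    """
    rng = np.random.default_rng(seed)
    bands, n_pixels = pixels.shape
    skewers = rng.standard_normal((n_skewers, bands))
    skewers /= np.linalg.norm(skewers, axis=1, keepdims=True)
    # All skewer-pixel projections in a single GEMM:
    proj = skewers @ pixels            # (n_skewers, n_pixels)
    scores = np.zeros(n_pixels, dtype=np.int64)
    np.add.at(scores, proj.argmax(axis=1), 1)  # extreme-max pixels
    np.add.at(scores, proj.argmin(axis=1), 1)  # extreme-min pixels
    return scores
```

On a GPU the `skewers @ pixels` step would map to a cuBLAS GEMM call, which is what makes the matrix-multiplication formulation attractive.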
Experiments on both simulated and real hyperspectral datasets then demonstrate the effectiveness of the algorithms. Finally, GPU-based parallel optimization methods for sparse hyperspectral unmixing algorithms are proposed. To meet real-time requirements for the L1/2-NMF algorithm, whose sparsity-constrained regularization term has high computational complexity, a CPU+GPU heterogeneous parallel computing solution is proposed that accelerates the algorithm through careful task allocation. For the CSNMF algorithm, a high-performance method based on large-scale parallel threading is derived and tested on the Tesla C2050 platform. The experimental results show that the GPU-based parallel optimization strategy not only brings large efficiency gains to high-precision hyperspectral unmixing, but also makes real-time remote sensing image processing feasible.
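As a rough illustration of the kind of computation the heterogeneous solution accelerates, the following is a minimal CPU sketch of NMF unmixing with an L1/2 sparsity term on the abundances, using standard multiplicative updates. The function name, the exact form of the update, and all parameters are assumptions for illustration, not the thesis's implementation:

```python
import numpy as np

def l12_nmf(V, n_end, lam=0.1, n_iter=200, seed=0, eps=1e-9):
    """Sketch of L1/2-sparsity-constrained NMF unmixing.

    V: (bands, pixels) data matrix, factorized as V ~ W @ S,
    with W the endmember signatures and S the abundances.
    The (lam/2) * S**-0.5 term in the denominator of the S update
    enforces the L1/2 sparsity penalty on the abundances.
    """
    rng = np.random.default_rng(seed)
    bands, pixels = V.shape
    W = rng.random((bands, n_end)) + eps
    S = rng.random((n_end, pixels)) + eps
    for _ in range(n_iter):
        # Multiplicative updates keep W and S nonnegative.
        W *= (V @ S.T) / (W @ S @ S.T + eps)
        S *= (W.T @ V) / (W.T @ W @ S + (lam / 2) * S ** -0.5 + eps)
    return W, S
```

Every update above is dense matrix arithmetic, which is why mapping the heavy factor updates to GPU threads while the CPU handles control flow (the task-allocation idea in the abstract) pays off.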
Keywords/Search Tags: Hyperspectral Remote Sensing, Pixel Unmixing, Endmember Extraction, Sparsity, GPU, CUDA