
GPU-Based Parallel Optimization Of Adaptive Gaussian Mixture Background Modeling Algorithms

Posted on: 2012-09-13    Degree: Master    Type: Thesis
Country: China    Candidate: J J Zhong    Full Text: PDF
GTID: 2218330362456449    Subject: Computer system architecture
Abstract/Summary:
Moving target detection is the basis of moving-image tracking and image analysis, and its fundamental task is to identify the moving targets of interest. Background subtraction is the most common approach to target detection: the corresponding background frame is subtracted from each video frame, and regions with relatively large differences are labeled as moving targets. Among background modeling methods, Gaussian mixture modeling is regarded as a strong method, with high performance in both detection capability and adaptability. Nevertheless, its heavy computational cost makes it difficult to implement in real time. Fortunately, the emerging Graphics Processing Units (GPUs) provide a new platform for its implementation, because the many Stream Processor (SP) units of a GPU can be used to accelerate the computation. It is therefore worthwhile to optimize the background modeling process by mining its parallelism on GPUs, which extends the algorithm's range of application and reduces its cost.

With the CUDA programming environment on GPUs, an adaptive Gaussian mixture background modeling algorithm is parallelized in two respects: thread-level parallelism and asynchronous stream processing. Thread-level parallelism is achieved by mapping the background update of each pixel onto a stream processor as a thread, executed through a CUDA kernel function. These threads run simultaneously, achieving parallel execution and the desired acceleration. The asynchronous stream processing optimization borrows the idea of stream computing, which schedules computation and the corresponding data accesses in parallel so as to hide data access latency. By creating multiple streams in the CUDA programming model, the data transfers and computation of different streams overlap each other, improving computing performance. Meanwhile, the model parameters of each pixel are stored in blocks in row-major order, which facilitates the data access of the kernel functions during multi-stream parallel processing.

Video sequences with resolutions of 384×288, 640×272, 720×576, 1280×720 and 1920×1080 are used to test the performance of the CUDA thread-level parallelism optimization. The experimental results show that the average time for background modeling is reduced by 40.932 ms, 94.656 ms, 228.012 ms, 547.759 ms and 861.459 ms respectively in Debug mode, and by 10.362 ms, 33.421 ms, 71.594 ms, 173.609 ms and 156.02 ms respectively in Release mode. On this basis, the asynchronous stream processing optimization, taking 8 data streams as a typical configuration, further reduces the average time by 2.64 ms, 3.769 ms, 10.703 ms, 19.331 ms and 55.335 ms respectively in Release mode. It can safely be concluded that Gaussian mixture background modeling can be sped up significantly through the thread-level parallelism and asynchronous stream processing optimizations.

This thesis is supported by the National Natural Science Foundation of China (No. 60873029) and the Innovation Research Foundation of Huazhong University of Science and Technology (No. 2010MS014).
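The per-pixel thread mapping described above can be illustrated with a minimal CUDA sketch. This is not the thesis's actual implementation: the number of Gaussian components, the learning rate, the matching threshold and all names are illustrative assumptions. Each thread updates the mixture model of one pixel and writes a foreground mask value.

```cuda
// Minimal sketch (illustrative, not the thesis code): each CUDA thread
// updates the adaptive Gaussian mixture model of one pixel.
#include <cuda_runtime.h>

#define K 3                    // Gaussian components per pixel (assumed)
#define ALPHA 0.01f            // learning rate (assumed)
#define MATCH_THRESH 2.5f      // match if (x - mu)^2 < (2.5 * sigma)^2

struct Gaussian { float mean, var, weight; };

__global__ void gmmUpdateKernel(const unsigned char* frame,
                                Gaussian* model,          // width*height*K entries
                                unsigned char* foreground,
                                int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int pix = y * width + x;
    float value = (float)frame[pix];
    Gaussian* g = &model[pix * K];           // this pixel's K components

    // Find the first component that matches the new pixel value.
    int matched = -1;
    for (int k = 0; k < K; ++k) {
        float d = value - g[k].mean;
        if (d * d < MATCH_THRESH * MATCH_THRESH * g[k].var) { matched = k; break; }
    }

    // Update the matched component; decay the weights of the others.
    for (int k = 0; k < K; ++k) {
        if (k == matched) {
            float d = value - g[k].mean;
            g[k].mean   += ALPHA * d;                    // simplified update rate
            g[k].var    += ALPHA * (d * d - g[k].var);
            g[k].weight += ALPHA * (1.0f - g[k].weight);
        } else {
            g[k].weight *= (1.0f - ALPHA);
        }
    }

    // No match: replace the weakest component with a new one centered on the value.
    if (matched < 0) {
        int weakest = 0;
        for (int k = 1; k < K; ++k)
            if (g[k].weight < g[weakest].weight) weakest = k;
        g[weakest].mean = value; g[weakest].var = 900.0f; g[weakest].weight = ALPHA;
    }

    // A pixel that matches no existing component is labeled foreground.
    foreground[pix] = (matched < 0) ? 255 : 0;
}
```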
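The asynchronous stream processing can likewise be sketched on the host side. This is an illustrative outline, not the thesis code: it assumes pinned host buffers, 8 streams as in the experiments, and the hypothetical gmmUpdateKernel and Gaussian type from the previous sketch. The frame is split into row chunks so that the upload, kernel launch and download of different chunks can overlap across streams; error checking is omitted for brevity.

```cuda
// Sketch of the asynchronous stream optimization (illustrative assumptions):
// each row chunk's H2D copy, kernel launch and D2H copy are issued into its
// own CUDA stream, letting copies of one chunk overlap compute on another.
#include <cuda_runtime.h>

void processFrameStreamed(const unsigned char* h_frame,   // pinned host frame (cudaHostAlloc)
                          unsigned char* h_foreground,    // pinned host output mask
                          unsigned char* d_frame,
                          unsigned char* d_foreground,
                          Gaussian* d_model,
                          int width, int height)
{
    const int NUM_STREAMS = 8;                 // 8 data streams, as in the experiments
    cudaStream_t streams[NUM_STREAMS];
    for (int i = 0; i < NUM_STREAMS; ++i)
        cudaStreamCreate(&streams[i]);

    int rowsPerChunk = (height + NUM_STREAMS - 1) / NUM_STREAMS;
    dim3 block(16, 16);

    for (int i = 0; i < NUM_STREAMS; ++i) {
        int rowStart = i * rowsPerChunk;
        int rows = rowsPerChunk;
        if (rowStart + rows > height) rows = height - rowStart;
        if (rows <= 0) break;
        size_t offset = (size_t)rowStart * width;
        size_t bytes  = (size_t)rows * width;

        // Upload this chunk, process it, and download its mask in one stream.
        cudaMemcpyAsync(d_frame + offset, h_frame + offset, bytes,
                        cudaMemcpyHostToDevice, streams[i]);
        dim3 grid((width + block.x - 1) / block.x, (rows + block.y - 1) / block.y);
        gmmUpdateKernel<<<grid, block, 0, streams[i]>>>(d_frame + offset,
                                                        d_model + offset * K,
                                                        d_foreground + offset,
                                                        width, rows);
        cudaMemcpyAsync(h_foreground + offset, d_foreground + offset, bytes,
                        cudaMemcpyDeviceToHost, streams[i]);
    }

    for (int i = 0; i < NUM_STREAMS; ++i) {
        cudaStreamSynchronize(streams[i]);
        cudaStreamDestroy(streams[i]);
    }
}
```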
Keywords/Search Tags: Gaussian Mixture Background Modeling, Stream computing, Thread-level parallelism, Graphics Processing Unit (GPU), Parallel programming model