
The Research And Implementation On Large Scale Volume Data Compression Algorithm Based On GPU

Posted on: 2012-10-09
Degree: Master
Type: Thesis
Country: China
Candidate: Y Zhang
Full Text: PDF
GTID: 2248330395485329
Subject: Computer application technology

Abstract/Summary:
How to exploit the floating-point performance and parallel processing capability of programmable graphics hardware to accelerate data compression has gradually become a research hotspot in the field of data compression. Among the many compression methods, vector quantization has attracted wide scholarly interest because of its high compression ratio and simple decoding. Most studies focus on speeding up the codeword search or improving the codebook generation algorithm. Research on GPU-based vector quantization remains scarce, mainly because vector quantization is not an inherently parallel algorithm, so simply running it on the GPU yields little acceleration. The core issue of this thesis is how to restructure the vector quantization algorithm so that it can be decomposed into parallel parts, and thereby make full use of the parallel computing power of graphics hardware to accelerate the encoding process while preserving data integrity.

Firstly, this thesis proposes a vector quantization algorithm that designs codebooks according to the spatial correlation of volume data. In the preprocessing stage, the algorithm uses an autocorrelation function to evaluate the autocorrelation of each vector, divides the vector set into two subsets according to the correlation coefficient, and uses the LBG algorithm to design a codebook for each subset. Redundant data usually have high correlation and account for most of the volume data. Experiments show that this method greatly reduces the computation spent on redundant data and on data that do not require detailed rendering.

Secondly, after a thorough study of CUDA programming and the parallel computing environment of the GPU, this thesis proposes a parallel improvement strategy for vector quantization.
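The correlation-based split described above can be sketched as follows. This is a minimal NumPy illustration, not the thesis's actual implementation: the lag-1 autocorrelation estimator, the 0.5 threshold, and all function names are assumptions chosen for the example.

```python
import numpy as np

def lag1_autocorr(v):
    """Lag-1 autocorrelation coefficient of a 1-D vector."""
    v = v - v.mean()
    denom = np.dot(v, v)
    if denom == 0.0:
        return 1.0  # constant vector: treat as maximally redundant
    return np.dot(v[:-1], v[1:]) / denom

def split_by_correlation(vectors, threshold=0.5):
    """Partition a vector set (rows of `vectors`) into high- and
    low-correlation subsets; each subset then gets its own LBG codebook."""
    coeffs = np.array([lag1_autocorr(v) for v in vectors])
    return vectors[coeffs >= threshold], vectors[coeffs < threshold]
```

A smooth ramp (highly redundant data) lands in the high-correlation subset, while a rapidly oscillating vector lands in the low-correlation one, so the bulk of a typical volume can be quantized with the cheaper codebook.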
The strategy reduces the interdependence between the stages of vector quantization, making a GPU implementation feasible. When the original data set is very large, data transfers between CPU and GPU become frequent during the iterative process of the LBG algorithm, which significantly increases the running time of the algorithm. To solve this problem, this thesis proposes an adaptive-codebook vector quantization algorithm, which guarantees that the data need to be loaded only once to obtain an optimal codebook. Experiments show that this algorithm greatly improves compression speed while maintaining reconstruction quality and compression ratio.

Finally, the adaptive-codebook vector quantization algorithm and two efficient vector quantization algorithms are applied to the high-dimensional data visualization system in which the author participated. By comparing compression efficiency, compression ratio, and image reconstruction quality among these methods, we conclude that the GPU-based adaptive-codebook vector quantization algorithm is a high-efficiency, low-distortion compression method.
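The parallel-friendly core of LBG encoding can be sketched as below: the nearest-codeword search is independent per input vector, which is exactly what maps to one GPU thread per vector, and keeping the data resident across iterations avoids the repeated CPU-GPU transfers mentioned above. This is an illustrative NumPy stand-in under those assumptions, not the thesis's CUDA code.

```python
import numpy as np

def encode(vectors, codebook):
    """Nearest-codeword index per vector. Each row of the distance matrix
    is independent, so on a GPU one thread can process one input vector."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def lbg_step(vectors, codebook):
    """One LBG iteration: assign vectors to codewords, then recompute
    centroids. The data array is loaded once and reused every iteration."""
    idx = encode(vectors, codebook)
    new_cb = codebook.copy()
    for j in range(len(codebook)):
        members = vectors[idx == j]
        if len(members):
            new_cb[j] = members.mean(axis=0)
    return new_cb, idx
```

Iterating `lbg_step` until the codebook stops changing yields the trained codebook; only the final index array and codebook need to travel back to the CPU.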
Keywords/Search Tags: volume compression, vector quantization, spatial autocorrelation function, CUDA, GPU, high-dimensional seismic data visualization system