
Image Compression Based On Factoring Repeated Contents And Visual Saliency

Posted on: 2014-11-14
Degree: Master
Type: Thesis
Country: China
Candidate: X H Zhu
Full Text: PDF
GTID: 2308330482485126
Subject: Computer application technology
Abstract/Summary:
Web-oriented large-scale three-dimensional scene rendering systems often require large numbers of high-resolution images as texture data, which places heavy demands on memory and network bandwidth. How to compress textures at high quality is therefore a problem worth studying, since good compression improves both rendering efficiency and transmission speed. Traditional image compression algorithms such as JPEG 2000 exploit only the local redundancies of an image and do not support hardware decompression. In this thesis we study a rendering-oriented texture compression method that must meet two requirements: the graphics hardware must be able to access the compressed texture data randomly during rendering, and decompression must be implementable on the graphics hardware so that it can be accelerated. Building on the algorithm of factoring repeated content, we focus on compression quality and compression speed. The main contributions of this thesis are as follows.

1. Adaptive image compression based on visual saliency. By exploiting the similarity between blocks of an image, the method eliminates duplicated content while keeping the most representative blocks, and it avoids obvious local artifacts by controlling the matching error. To make the compression consistent with human perception, we introduce the human visual system (HVS) through an image visual-saliency map, which enables multi-precision error control during matching: according to the importance map of the image, we adaptively adjust the error tolerance across different regions and thereby improve the quality of the reconstructed image (a minimal sketch of this idea is given after this abstract).

2. GPU-accelerated extraction of repeated content. A problem of globally factoring the repeated content of an image is that the similarity-matching process is computation-intensive and time-consuming; one reason is that too many invalid candidates are matched. To reduce wasted comparisons, we adopt a feature descriptor, so that the similarity of two blocks is reduced to a distance comparison between descriptors, which is much cheaper to compute. We add a filter before the self-similarity procedure that selects only the most promising candidates. By introducing a GPU KNN algorithm for this filtering step, our experiments show that compression becomes much faster than before (a second sketch follows the abstract).

3. Design and implementation of a compression system based on globally factoring repeated content and visual saliency. After analyzing several properties of the algorithm, such as decompression speed, random access, global repeated content, and visual saliency, we build a modular compression system using the CUDA Toolkit and the Visual Studio framework. Through a rational combination of system components, we improve the reusability, stability, and scalability of the compression system. With the addition of visual saliency and the accelerated self-similarity filter, the system shows better performance on high-resolution test images.
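The saliency-adaptive error control of contribution 1 can be summarized as: each block's matching tolerance is derived from the importance map, so visually salient regions are matched more strictly than background regions. The following is a minimal illustrative sketch in Python/NumPy, not the thesis implementation; the function names, the linear mapping from saliency to tolerance, and the RMS error metric are assumptions made for illustration.

```python
# Sketch: saliency-adaptive error control for block matching.
# Assumptions: saliency values lie in [0, 1]; RMS error is the match metric.
import numpy as np

def block_saliency(saliency_map, y, x, size):
    """Mean saliency of the block at (y, x); higher means more important."""
    return float(saliency_map[y:y + size, x:x + size].mean())

def adaptive_threshold(saliency, t_min=2.0, t_max=10.0):
    """Map saliency to an error tolerance: salient regions get a tight
    tolerance (t_min), background regions a loose one (t_max)."""
    return t_max - saliency * (t_max - t_min)

def is_match(block_a, block_b, tolerance):
    """Accept the block pair only if the RMS error stays within the
    region-dependent tolerance."""
    diff = block_a.astype(np.float32) - block_b.astype(np.float32)
    return float(np.sqrt(np.mean(diff ** 2))) <= tolerance
```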
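Contribution 2 reduces the matching cost by comparing short descriptors instead of full blocks and keeping only the k nearest candidates for the exact similarity test. The sketch below illustrates this filtering idea in Python/NumPy; the downsampled-block descriptor, the Euclidean distance, and k = 8 are assumptions for illustration, and the brute-force search here merely stands in for the GPU KNN step used in the thesis.

```python
# Sketch: descriptor-based candidate filtering before exact block matching.
# A brute-force CPU nearest-neighbour search stands in for the GPU KNN.
import numpy as np

def descriptor(block, size=4):
    """Cheap feature descriptor: the block subsampled to size x size,
    flattened into a short vector."""
    h, w = block.shape
    ys = np.linspace(0, h - 1, size).astype(int)
    xs = np.linspace(0, w - 1, size).astype(int)
    return block[np.ix_(ys, xs)].astype(np.float32).ravel()

def knn_candidates(query_desc, all_descs, k=8):
    """Return indices of the k candidates whose descriptors are closest to
    the query (Euclidean distance); only these are passed on to the
    expensive exact similarity test."""
    dists = np.linalg.norm(all_descs - query_desc, axis=1)
    return np.argsort(dists)[:k]
```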
Keywords/Search Tags: image compression, image saliency, HVS, repeated content, KNN