
Joint Compression Of Near-duplicate Videos

Posted on: 2017-01-02    Degree: Master    Type: Thesis
Country: China    Candidate: Y L Pan    Full Text: PDF
GTID: 2308330485957911    Subject: Electronic and communication engineering
Abstract/Summary:
With the growing popularity of image acquisition devices and video sharing websites, enormous volumes of video are captured and uploaded to various sharing platforms. For popular videos in particular, the speed of spread and diffusion is remarkable: within a very short period of time, the Internet is filled with these videos and their "variants", i.e., similar videos that may differ in editing operations, captions, logos, encoding parameters, or brightness. All of these videos are called near-duplicate videos. From the viewpoint of information theory, there is a large amount of redundant information among these videos, which complicates storage and transmission. Traditional video compression methods focus mainly on compressing a single video more efficiently by reducing the intra- or inter-frame redundancy within that video. Few methods, however, compress videos by reducing the redundancy between different videos (video-set redundancy), which could yield more efficient compression. Joint compression of near-duplicate videos is therefore a valuable research area. This thesis presents two methods for jointly compressing near-duplicate videos. The main contributions are as follows:

(1) A joint compression algorithm for near-duplicate videos based on key-frame sharing is proposed. The key idea is first to fuse (average) similar key-frames extracted from all the near-duplicate videos. These fused key-frames are then shared by all the videos, which markedly reduces the number of key-frames stored for each original video. Experimental results show that this algorithm obtains a 10%-20% compression gain with little PSNR loss compared with the H.264 standard.

(2) A joint compression algorithm based on inter-frame prediction coding of key-frames is proposed. The algorithm first extracts key-frames from each near-duplicate video to form a key-frame set. This set is then ordered using a disjoint-set structure and a neighbor-reversible relationship. Finally, the ordered key-frame set is compressed with inter-frame prediction coding, which removes the redundancy between similar key-frames. Experimental results show that this algorithm obtains a 6%-18% compression gain with slight PSNR loss compared with the H.264 standard.
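To make the key-frame sharing idea of method (1) concrete, the following is a minimal sketch of the fusion step only, assuming the similar key-frames have already been grouped and spatially aligned; the function name, the use of NumPy, and plain pixel-wise averaging are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def fuse_key_frames(key_frames):
    """Fuse a cluster of similar key-frames into one shared frame.

    key_frames: list of H x W x 3 uint8 arrays that have already been
    judged mutually similar and aligned. The fused frame is the
    pixel-wise average, which every near-duplicate video can then
    reference instead of its own copy of the key-frame.
    """
    stack = np.stack([f.astype(np.float64) for f in key_frames], axis=0)
    fused = stack.mean(axis=0)
    return np.clip(np.rint(fused), 0, 255).astype(np.uint8)
```

Because each cluster of similar key-frames is replaced by a single shared frame, the total number of intra-coded key-frames across the video set drops, which is where the reported compression gain comes from.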
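For method (2), the sketch below illustrates one way a disjoint-set (union-find) structure could group similar key-frames and emit them cluster by cluster, so that similar frames are adjacent in the coding order before inter-frame prediction coding. The similarity function, the threshold, and the within-cluster ordering are placeholders; the thesis's "neighbor-reversible relationship" ordering is not reproduced here.

```python
class DisjointSet:
    """Union-find over key-frame indices."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra


def order_key_frames(frames, similarity, threshold=0.8):
    """Group key-frames whose pairwise similarity exceeds `threshold`,
    then return their indices cluster by cluster so that similar frames
    sit next to each other in the inter-frame coding order."""
    n = len(frames)
    ds = DisjointSet(n)
    for i in range(n):
        for j in range(i + 1, n):
            if similarity(frames[i], frames[j]) >= threshold:
                ds.union(i, j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(ds.find(i), []).append(i)
    # Concatenate clusters; within a cluster the original order is kept.
    return [idx for members in clusters.values() for idx in members]
```

Once the key-frame set is reordered this way, a standard inter-frame predictive coder (e.g., an H.264 encoder treating the set as a short sequence) can exploit the redundancy between adjacent similar key-frames instead of intra-coding each one independently.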
Keywords/Search Tags:Video Compression, Frame Fusion, Reduce Redundancy, Image Set, Interframe Prediction Coding