
Research On Low Complexity Encoding Optimization Algorithms Of HEVC

Posted on: 2014-01-30    Degree: Doctor    Type: Dissertation
Country: China    Candidate: X L Shen    Full Text: PDF
GTID: 1228330395473751    Subject: Communication and Information System
Abstract/Summary:
To ease the heavy burden imposed on networks by the growing data rates of HD (High Definition) and UHD (Ultra High Definition) video, HEVC was standardized by the JCT-VC working group and is currently the latest and most efficient video coding standard. HEVC follows the classic block-based hybrid video coding framework and introduces innovations in almost every module of that framework. These new technologies include flexible data representation, finer intra prediction, the new inter prediction mode "Merge", competition-based motion vector prediction, a DCT-based interpolation filter, sample adaptive offset, and Tile and wavefront parallel processing. With these new technologies, HEVC achieves a 50% bit-rate reduction over H.264/AVC. However, the large number of encoding parameters imposes a great computational burden on the encoder, which hinders the rapid and wide adoption of HEVC. Research on reducing the computational complexity while maintaining high coding efficiency is therefore crucial.

The thesis begins with a brief introduction to video compression and the history of video coding standards, and summarizes the highlighted features of HEVC. The key technologies and modules of HEVC are then studied in depth, and the most important issues with respect to rate-distortion coding efficiency and computational complexity are identified. Much of the subsequent effort is dedicated to multi-reference-frame optimization and the flexible data representation, including the partitioning of the coding unit (CU) tree, the transform unit (TU) tree, and the prediction unit (PU).

Multiple reference frames and the flexible data representation make motion estimation much more complicated. By analyzing the correlation of the reference index between PUs at different CU depths and between spatial neighbors, a simple and effective reference frame selection algorithm is proposed to reduce the complexity of the HEVC encoder.

In HEVC, CU and TU optimization is highly computationally demanding. To address this issue, the splitting of a CU/TU is modeled as a binary classification problem, and a simple and effective Bayesian classification rule, based on an analysis of the video content, is designed to predict whether or not to split. In this way, the exhaustive search over all possible units is avoided and lower computational complexity is achieved. To assist the classification, an optimal feature subset is formed by assessing the mutual information between candidate features and the CU splitting decision.

To reduce the rate-distortion loss introduced by misclassification, the support vector machine (SVM) is then studied and applied to the aforementioned classification problem. By introducing the RD difference as sample weights in the SVM training procedure, the support vector classifier (SVC) pays more attention to the CUs that cause larger variations in coding efficiency, and is thus better able to preserve coding efficiency. Furthermore, a wrapper feature selection method based on F-score ranking is proposed to optimize the feature subset. Experimental results show that the proposed algorithm generalizes well and maintains the coding efficiency of the HEVC encoder.

Another important module is the optimization of the PU. By grouping all PU modes into square and rectangular partitions, an SVC is introduced to predict the PU mode selection. With effective features selected, the classifier can solve the PU optimization accurately and at very low complexity. Finally, the thesis is summarized and promising directions for future research are outlined. Two illustrative sketches of the classification ideas follow.
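The CU/TU split decision framed as binary classification can be illustrated with a minimal sketch. The Gaussian class-conditional model, the specific features (e.g., block variance, current-depth RD cost), and all names below are illustrative assumptions, not the thesis's exact formulation; the sketch only shows how a Bayesian likelihood-ratio rule can replace the exhaustive split search.

```python
import numpy as np

class BayesianSplitClassifier:
    """Decide 'split' vs 'non-split' for a CU/TU via a Bayesian likelihood-ratio test."""

    def fit(self, features, labels):
        # features: (N, d) per-CU feature vectors gathered from offline training runs
        # labels:   (N,) array, 1 = the full RDO encoder chose to split, 0 = it did not
        features = np.asarray(features, float)
        labels = np.asarray(labels)
        self.prior_split = np.clip(labels.mean(), 1e-6, 1 - 1e-6)
        self.mu = {c: features[labels == c].mean(axis=0) for c in (0, 1)}
        self.var = {c: features[labels == c].var(axis=0) + 1e-9 for c in (0, 1)}
        return self

    def _log_likelihood(self, x, c):
        # Naive-Bayes style diagonal Gaussian log-likelihood for class c
        return -0.5 * np.sum(np.log(2 * np.pi * self.var[c])
                             + (x - self.mu[c]) ** 2 / self.var[c])

    def should_split(self, x):
        # Split only if the posterior for 'split' dominates; otherwise prune the
        # quad-tree early and skip the exhaustive RD search over the sub-units.
        x = np.asarray(x, float)
        log_post_split = self._log_likelihood(x, 1) + np.log(self.prior_split)
        log_post_stay = self._log_likelihood(x, 0) + np.log(1 - self.prior_split)
        return log_post_split > log_post_stay
```

In practice such a classifier would be trained offline on statistics collected from sequences encoded with full rate-distortion optimization, and then queried per CU/TU during encoding.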
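The RD-difference weighting of the SVM training can likewise be sketched as follows: CUs whose split and non-split decisions differ most in rate-distortion cost receive larger training weights, so misclassifying them is penalized more heavily. The use of scikit-learn's SVC, the weight normalization, and the function name are assumptions made for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

def train_weighted_svc(features, labels, rd_cost_split, rd_cost_nonsplit):
    """features: (N, d) per-CU features; labels: 1 = split, 0 = non-split.
    rd_cost_*: RD costs observed for each choice during offline full-RDO encoding."""
    rd_diff = np.abs(np.asarray(rd_cost_split) - np.asarray(rd_cost_nonsplit))
    # Larger RD impact -> larger sample weight, so the classifier focuses on the
    # CUs whose misclassification would cost the most coding efficiency.
    weights = rd_diff / (rd_diff.mean() + 1e-12)
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(features, labels, sample_weight=weights)
    return clf
```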
Keywords/Search Tags: HEVC, video encoding optimization, coding unit optimization, prediction unit optimization, transform unit optimization, multi-reference frame selection, classification, feature selection