
Research On Inter-view Prediction And Related Technologies Of The 3D-AVS2 Multi-view Video Coding Standard

Posted on: 2016-11-11    Degree: Master    Type: Thesis
Country: China    Candidate: J Ma    Full Text: PDF
GTID: 2308330479490057    Subject: Computer Science and Technology
Abstract/Summary:
Three-dimensional (3D) video has become increasingly popular in the movie and entertainment market and in daily life over the last few years. A 3D video is captured by multiple cameras, each at a specific position. 3D video formats consisting of a few texture views and associated depth information, also termed Multi-view plus Depth (MVD) formats, still require large storage space and high transmission bandwidth. Moreover, they contain a large amount of inter-view redundancy. Therefore, how to exploit the inter-view correlation and compress 3D video more effectively is the focus of this research.

International 3D video coding standards, such as 3D-HEVC, have already been developed. To handle the increasing demand for compression of 3D video content, the working group of the China Audio Video Coding Standard (AVS) started to develop a 3D extension of AVS. This thesis presents a jointly developed 3D-AVS2 coding platform based on the AVS2 coding standard; the platform accepts multiple viewpoints of texture and depth maps as input. The independent views are encoded with the conventional AVS2 coding tools, while the dependent views are additionally encoded with 3D coding tools (e.g., disparity compensated prediction) that exploit the correlation between views. The texture and depth maps are encoded independently, without reference to each other. After encoding, the output is a single bit-stream that multiplexes the texture and depth maps. Experimental results show that the proposed platform outperforms encoding the views of a 3D video separately with the AVS2 platform; the BD-rate saving of the coded texture views is up to 18.2%.

The current block and its corresponding block in an already coded reference view of the same access unit represent the same object in the real world, so the corresponding block can be located from the position of the current block and a disparity vector (DV) between the views.
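The relation above can be sketched as a small helper: given the current block's top-left position and a DV, it returns the top-left position of the corresponding block in the reference view. This is a minimal illustrative sketch; the function name and the clipping policy at picture borders are assumptions, not the 3D-AVS2 specification.

```python
def corresponding_block(x, y, dv, pic_width, pic_height, block_w, block_h):
    """Locate the corresponding block in an already coded reference view.

    (x, y) is the top-left sample of the current block in the dependent
    view; dv = (dv_x, dv_y) is the disparity vector between the views.
    The result is clipped so the block stays inside the picture
    (an assumed policy, for illustration only).
    """
    ref_x = min(max(x + dv[0], 0), pic_width - block_w)
    ref_y = min(max(y + dv[1], 0), pic_height - block_h)
    return ref_x, ref_y
```

For mostly horizontal camera arrangements the vertical DV component is typically near zero, so the shift is essentially along the x-axis.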
The 3D tools in 3D-AVS2 lack an effective method to derive DVs, so this thesis proposes a method to derive a global disparity vector (GDV) from the temporal reference picture. The GDV can be used by disparity compensated prediction, the weighted skip mode, and inter-view motion prediction. Experimental results show that the BD-rate saving of the coded texture views of the proposed method is up to 1.6%, compared to RFD0.1.

When the coded depth map is available, a DV can be derived from a depth sample and the camera parameters. This thesis therefore proposes a method that refines the GDV using the corresponding depth sample of the texture map. Experimental results show that the BD-rate saving of the coded texture views of the proposed method is up to 3.6%, compared to RFD1.0.

Since an object can be located in different views through a DV, the motion information is highly correlated between views. This thesis provides a quad-tree based inter-view motion prediction method in which the dependent view is predicted from the motion information of the independent view. Experimental results show that the BD-rate saving of the coded texture views of the proposed method is up to 4.7%, compared to RFD0.1.
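The depth-to-DV conversion mentioned above can be illustrated with the inverse-depth quantization commonly used for MVD formats: an 8-bit depth sample is mapped back to a scene depth Z, and the disparity follows from the focal length and camera baseline. This is a hedged sketch; the parameter names are illustrative stand-ins for the camera parameters carried in the bitstream, not the exact 3D-AVS2 syntax elements.

```python
def depth_to_disparity(d, focal_length, baseline, z_near, z_far):
    """Convert an 8-bit depth sample d (0..255) to a horizontal disparity.

    Assumes the common inverse-depth quantization of MVD formats:
    d = 255 maps to z_near (closest, largest disparity) and
    d = 0 maps to z_far (farthest, smallest disparity).
    """
    z = 1.0 / (d / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    return focal_length * baseline / z
```

Because the mapping is monotonic, a refinement step only needs the depth sample at (or near) the current block to correct the global DV toward the block's true disparity.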
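The quad-tree based inter-view motion prediction can be sketched as a recursive partitioning: each block first tries to fetch motion information from its corresponding position (shifted by the GDV) in the independent view, and splits into four quadrants when that information is unavailable. The function names, the availability test, and the minimum block size are assumptions for illustration, not the thesis's exact algorithm.

```python
def predict_motion_quadtree(x, y, size, gdv, ref_motion, min_size=8):
    """Quad-tree inter-view motion prediction (illustrative sketch).

    ref_motion(x, y) returns the motion vector stored at sample (x, y)
    of the independent view, or None where no motion information
    exists; gdv is the global disparity vector.  Returns a list of
    leaf blocks as (x, y, size, mv) tuples.
    """
    mv = ref_motion(x + gdv[0], y + gdv[1])
    if mv is not None or size <= min_size:
        return [(x, y, size, mv)]
    # Motion unavailable at this level: recurse into the four quadrants.
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += predict_motion_quadtree(x + dx, y + dy, half,
                                              gdv, ref_motion, min_size)
    return leaves
```

The recursion lets large uniform regions inherit one motion vector cheaply while finer splits adapt to object boundaries, which is the usual motivation for a quad-tree layout.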
Keywords/Search Tags: 3D video coding, 3D-AVS2, Disparity vector, Quad-tree, Inter-view motion prediction