
Multi-depth Fusion Of 3D Scene Rendering

Posted on: 2018-07-04
Degree: Master
Type: Thesis
Country: China
Candidate: Z Liu
Full Text: PDF
GTID: 2348330536485633
Subject: Engineering
Abstract/Summary:
With the development of multimedia technology, three-dimensional video, multiview video, and free viewpoint video (FVV) have become widely used. The multiview color-plus-depth format is common in free viewpoint video: from a limited set of captured views, it allows arbitrary viewpoints within a certain range to be synthesized. Compared with encoding texture maps alone, multiview video plus depth carries additional geometric information about the scene, which aids reconstruction at the decoder. Depth maps are smoother than color maps and therefore compress more efficiently. Depth-image-based rendering (DIBR) makes virtual view synthesis simple and efficient, and the quality of the synthesized images is acceptable. However, DIBR is not without problems. First, rendering quality depends on the depth map, which is distorted during compression and transmission. Second, under non-uniform illumination, the synthesized view exhibits color and brightness inconsistencies between the reference viewpoints. This paper therefore proposes innovative techniques that address these problems in DIBR and improve the quality of the synthesized views. The main work and contributions of this paper are as follows.

First, an algorithm is proposed to restore the true illumination of the synthesized viewpoint. The mean of the multiple left-view and right-view reference images is computed, and the new viewpoint is synthesized. A linear least-squares fit then relates the actual illumination condition to the mean image, and the illumination of the virtual viewpoint is restored using the resulting one-dimensional linear expression. The algorithm effectively mitigates color and brightness differences between views under non-uniform illumination. Experimental results show that the method also eases the pixel-offset problem caused by inaccurate depth maps and mapping errors in view synthesis, and it performs well under uniform illumination.

Second, an algorithm is proposed to recover the depth map after depth fusion. A framework for restoring a compression-reconstructed depth map is introduced, using the left and right views of the target viewpoint as reference information. First, the left and right views are each mapped to the target viewpoint using DIBR. Because the edge information of the depth map suffers the most damage from encoding, SLIC superpixel segmentation is applied to the texture map of the intermediate view, and the segmentation result is transferred to the co-located depth map, which rectifies the depth-map edges. Finally, the depth value of each pixel is filtered by clustering to eliminate depth errors caused by quantization. The algorithm effectively removes the blocking artifacts introduced by block-based encoding and quantization, and it reconstructs a high-quality depth map without encoding additional side information by exploiting the correlation between viewpoints. Objectively, the total average gain in peak signal-to-noise ratio is 2.0074 dB.
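The illumination-restoration step can be sketched as follows. This is a minimal illustration, assuming a per-channel one-dimensional linear model fitted between the synthesized view and the mean of the reference views; the function name and variables are illustrative, not taken from the thesis:

```python
import numpy as np

def restore_illumination(synth, mean_ref):
    """Map the synthesized view's colors back onto the reference illumination.

    synth    : H x W x C uint8 synthesized view (possibly color-shifted)
    mean_ref : H x W x C uint8 mean of the left and right reference images
    """
    out = np.empty(synth.shape, dtype=np.float64)
    for c in range(synth.shape[2]):
        x = synth[..., c].ravel().astype(np.float64)
        y = mean_ref[..., c].ravel().astype(np.float64)
        # One-dimensional linear relation y = a*x + b, fitted by least squares.
        a, b = np.polyfit(x, y, 1)
        out[..., c] = a * synth[..., c] + b
    return np.clip(out, 0, 255).astype(np.uint8)
```

In practice the fit would be computed on corresponding pixel pairs after view mapping; here the two images are assumed to be already aligned.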
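The superpixel-guided filtering of the recovered depth map can be sketched as below. This simplified version assumes the segmentation labels have already been computed from the co-located texture map (e.g. with SLIC), and it collapses each superpixel to its median depth in place of the thesis's clustering step; the function name and the median choice are illustrative assumptions:

```python
import numpy as np

def filter_depth_by_superpixel(depth, labels):
    """Smooth a compression-reconstructed depth map using superpixel labels.

    depth  : 2-D array of reconstructed depth values
    labels : 2-D integer array of the same shape, one label per superpixel,
             obtained by segmenting the co-located texture map.
    """
    out = depth.astype(np.float64).copy()
    for lab in np.unique(labels):
        mask = labels == lab
        # Each superpixel gets a single representative depth; the median
        # suppresses outliers introduced by quantization errors.
        out[mask] = np.median(depth[mask])
    return out
```

Segmenting the texture map rather than the depth map itself is the key design choice: texture edges survive compression far better than depth edges, so the superpixel boundaries realign the damaged depth discontinuities with the true object contours.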
Keywords/Search Tags:Virtual View Synthesis, Free Viewpoint Video, Depth Map Coding, Depth Image Recovery, Illumination Restoration