
2D-to-3D Conversion Based on Depth Extraction

Posted on: 2015-02-04
Degree: Master
Type: Thesis
Country: China
Candidate: M X Li
Full Text: PDF
GTID: 2268330425487432
Subject: Optical Engineering
Abstract/Summary:
In recent years, with the rapid development of computer vision technology, more and more 3D movies and 3D television programs have appeared, giving audiences the feeling of being personally on the scene and greatly enriching human visual enjoyment. The 3D film and television industry has developed rapidly, and various 3D display devices have come onto the market. However, 3D video resources are currently insufficient, owing to the high cost of 3D video production and other limiting factors such as the long production cycle. Researchers have therefore begun to investigate 2D-to-3D conversion methods to make up for the lack of 3D resources.

The curve of the spread parameter versus depth is not monotonic, so depth information cannot be deduced from a single frame's spread parameter alone. The focused area, however, can be identified from the spread parameter, because it is where the blur-spot radius is minimal. We proposed a method for extracting the focused area using the spread parameter: the local variance of pixels (LOV) is calculated, and after morphological reconstruction the focused area is extracted and the object is successfully separated from the background.

Much 3D information is lost during 2D image formation, especially the depth information of the scene. Recovering scene depth from a single frame is in essence an ill-posed problem, so prior knowledge is needed to solve it. We explored the applicability of the four commonly used prior depth maps. A radiate-gradient depth-map assignment was presented in this paper, which assigns pixel values according to the distance from the maximal-depth point. It was shown that constructing the depth of the whole scene by fusing the focused object into the prior depth map is practicable.

The 3D video display principle based on binocular disparity was introduced, and the relationship between object depth and parallax was derived.
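The focused-area extraction described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the window size, the relative threshold, and the use of a simple morphological opening in place of the full morphological reconstruction are all assumptions, and the function names are invented for this sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter, binary_opening, binary_fill_holes

def local_variance(gray, size=7):
    """Per-pixel variance over a size x size window: E[x^2] - E[x]^2."""
    mean = uniform_filter(gray, size)
    mean_sq = uniform_filter(gray * gray, size)
    return mean_sq - mean * mean

def focus_mask(gray, size=7, rel_thresh=0.1):
    """Binary mask of the in-focus region: high local variance (LOV),
    then morphological cleanup (a stand-in for the reconstruction step)."""
    lov = local_variance(gray.astype(np.float64), size)
    mask = lov > rel_thresh * lov.max()
    mask = binary_opening(mask, np.ones((3, 3)))   # drop isolated speckle
    return binary_fill_holes(mask)                 # close gaps inside the object
```

On a synthetic image with a sharply textured object against a flat, defocused-looking background, the mask picks out the textured region and leaves the background clear.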
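The radiate-gradient depth assignment can likewise be sketched in a few lines: depth values grow with the Euclidean distance from a chosen maximal-depth point. The choice of that point and the linear grey-level mapping (darker = farther) are assumptions for illustration, not taken from the thesis.

```python
import numpy as np

def radiate_depth_map(h, w, far_point, near=255, far=0):
    """Depth map where far_point gets the value `far` and the pixel
    most distant from it gets `near` (brighter = closer to the viewer)."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - far_point[0], xs - far_point[1])
    dist /= dist.max()                          # normalise to [0, 1]
    return (far + (near - far) * dist).astype(np.uint8)
```

Fusing the extracted focused object into such a prior map then amounts to overwriting the object's pixels with a constant (near) depth value.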
Every pixel is shifted according to its parallax to produce the left-eye and right-eye images of a stereoscopic pair. The resulting red/blue 3D pictures contained too much noise because of the discontinuity of the parallax distribution, so bilateral filtering was adopted to reduce the noise; it preserves the edges of the red/blue 3D pictures and remarkably enhances the quality of the 3D picture.
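The rendering step above can be sketched as follows. This is a simplified stand-in for the thesis's pipeline: disparity is taken as simply proportional to depth (the actual depth-to-parallax relation depends on viewing geometry), the brute-force bilateral filter and all function names are illustrative, and hole filling after the pixel shift is omitted.

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter: smooths noise but keeps strong edges."""
    h, w = img.shape
    out = np.empty((h, w))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(img, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            weight = spatial * np.exp(-(patch - img[y, x]) ** 2 / (2 * sigma_r ** 2))
            out[y, x] = (weight * patch).sum() / weight.sum()
    return out

def shift_horizontal(img, disparity, sign):
    """Move each pixel sideways by sign * disparity[y, x] pixels."""
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            nx = x + int(round(sign * disparity[y, x]))
            if 0 <= nx < w:
                out[y, nx] = img[y, x]
    return out

def red_blue_anaglyph(gray, depth, max_disp=4.0):
    """Left/right views by disparity shifting, composed as a red/cyan image."""
    disp = max_disp * depth / max(depth.max(), 1e-9)
    left = shift_horizontal(gray, disp / 2, +1)
    right = shift_horizontal(gray, disp / 2, -1)
    # Bilateral filtering suppresses the speckle caused by disparity
    # discontinuities while keeping object edges sharp.
    left, right = bilateral(left), bilateral(right)
    return np.dstack([left, right, right])  # R from the left eye, G/B from the right
```

Viewed through red/cyan glasses, each eye receives its own shifted view, and the horizontal offset between the two views is what the visual system interprets as depth.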
Keywords/Search Tags: 2D to 3D, Depth recognition, Focus information