
Image-based view rendering from uncalibrated cameras

Posted on: 2017-12-23
Degree: Ph.D
Type: Thesis
University: State University of New York at Buffalo
Candidate: Hu, Jie
Full Text: PDF
GTID: 2458390005980549
Subject: Computer Engineering
Abstract/Summary:
With the progress of camera-related technology in recent years, users want to obtain many kinds of rendered views to satisfy different requirements. However, owing to high cost, limited field of view, unreachable capture positions, or other practical constraints, some views of a scene are hard to obtain directly with available equipment. With the rapid progress and widespread deployment of mobile devices in particular, there is strong consumer interest in rendering views of a scene to provide an immersive user experience, yet a single camera shot often cannot accomplish this. View rendering techniques, which render novel views of a scene from a set of input images, provide a solution to such applications.

Long studied in computer graphics and computer vision, conventional view rendering schemes fall into two categories. The first relies on accurate 3D geometry: it reconstructs a 3D point cloud or depth map and then re-projects the novel view from the resulting 3D model. Such methods often produce accurate renderings, but 3D point cloud reconstruction, depth map extraction, and precise calibration of full 6-DOF camera poses require either auxiliary equipment or a large number of reference images and substantial computation. The second category needs no explicit 3D geometry: it registers the input images by their pairwise relationships and produces the novel view by stitching or interpolation. Compared with the first category, these methods are lightweight and easy to implement, but may suffer from parallax artifacts.

In this research, we aim to develop and improve image-based view rendering schemes for several application scenarios. This dissertation addresses three: view synthesis, image and video stitching, and multi-perspective panorama.
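The pairwise registration at the heart of the geometry-free category can be sketched minimally: for a planar scene (or a rotating camera), two views are related by a 3x3 homography, which can be estimated from point correspondences with the direct linear transform (DLT). The sketch below uses synthetic correspondences; the ground-truth transform and point values are illustrative, not from the dissertation.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the DLT.

    src, dst: (N, 2) arrays of matched points, N >= 4.
    Each match contributes two linear equations in the 9 entries of H;
    the null space of the stacked system (via SVD) gives H up to scale.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1,  0,  0,  0, u * x, u * y, u])
        A.append([ 0,  0,  0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

# Synthetic example: the second view is a known projective transform
# of the first (hypothetical values for illustration only).
H_true = np.array([[1.10, 0.02,  5.0],
                   [0.01, 0.95, -3.0],
                   [1e-4, 2e-4,  1.0]])
src = np.array([[0, 0], [100, 0], [100, 80], [0, 80], [50, 40]], float)
hom = np.c_[src, np.ones(len(src))] @ H_true.T
dst = hom[:, :2] / hom[:, 2:]          # perspective divide

H = fit_homography(src, dst)
```

With the homography in hand, one image can be warped into the other's frame and the pair composited, which is exactly where parallax artifacts appear when the scene is not truly planar.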
For the first topic, view synthesis, we propose a novel Bayesian formulation that reconstructs the synthetic image of a novel view from reference images of a scene with planar structure. By incorporating plane segmentation, we extend the Bayesian scheme to scenes with multiple planar structures. For the second topic, image and video stitching, we focus on artifacts caused by local misalignment and adjust the compositing steps of the standard pipeline to improve the stitching results. We introduce content-preserving warping driven by global alignment, smoothness, feature matching, line preservation, and line matching to refine the warping step of image stitching. We also propose a discontinuous seam cutting scheme that uses dynamic programming to search for a seam cut in each frame of the stitched video. For the third topic, multi-perspective panorama, we propose a novel adaptive resampling scheme that generates natural-looking panoramas of long scenes from consumer-grade cameras.
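The dynamic-programming seam search mentioned for video stitching can be illustrated in its simplest form: given a per-pixel cost map over the overlap region (e.g., color difference between the two warped frames), find the top-to-bottom path of minimal accumulated cost. This is a generic sketch of the DP idea, not the dissertation's discontinuous per-frame scheme; the cost map below is synthetic.

```python
import numpy as np

def min_cost_seam(cost):
    """Find a top-to-bottom seam of minimal accumulated cost through a
    2-D cost map using dynamic programming.

    Each row's pixel may connect to the pixel above it or its two
    diagonal neighbors. Returns one column index per row.
    """
    h, w = cost.shape
    dp = cost.astype(float).copy()        # accumulated cost table
    back = np.zeros((h, w), dtype=int)    # backpointers for the path
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            k = lo + int(np.argmin(dp[i - 1, lo:hi]))
            back[i, j] = k
            dp[i, j] += dp[i - 1, k]
    # Trace the cheapest path back from the bottom row.
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(dp[-1]))
    for i in range(h - 1, 0, -1):
        seam[i - 1] = back[i, seam[i]]
    return seam

# Synthetic overlap cost: one zero-cost column the seam should follow.
cost = np.ones((6, 5))
cost[:, 2] = 0.0
seam = min_cost_seam(cost)
```

Compositing then takes pixels from one frame on one side of the seam and from the other frame on the opposite side, hiding misalignment along a low-difference path.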
Keywords/Search Tags:View rendering, Camera, Image, Novel, Scene, Scheme