
Creating Point Based Models From Photographs And Rendering Them In Environment Mappings

Posted on: 2008-01-29
Degree: Doctor
Type: Dissertation
Country: China
Candidate: W H An
Full Text: PDF
GTID: 1118360215468421
Subject: Computer application technology

Abstract/Summary:
The modeling and browsing of real scenes is an important application of computer graphics, for example in virtual museums. Conventional rendering techniques based on triangle meshes cannot meet this requirement for two reasons. First, they use simple parameters to define the reflectance properties of objects, which makes it difficult to achieve real-world photorealism. Second, as the requirements for geometric accuracy and complexity grow, triangle meshes become increasingly dense, which poses a great challenge to real-time rendering. To address these problems, researchers have proposed two novel rendering techniques: image based rendering and point based rendering.

Image based rendering takes pixels as the primitive rendering elements. It not only avoids the construction of triangle meshes, but also exploits the subtle real-world detail captured in images to construct photorealistic scenes. However, the large number of sampled images makes real-time rendering difficult. Point based rendering defines a 3D model as a set of densely sampled points on a solid object's surface. This reduces model complexity and improves rendering performance; its main challenges, however, are determining point visibility and filling the holes that appear on screen.

Focusing on photorealistic rendering, this dissertation discusses the above two techniques systematically. To exploit their complementary advantages, we combine them in a photorealistic rendering system. First, we divide the 3D scene into two parts: a near scene and a far scene. Because of its complex geometry, the near scene is represented with point based models, which can be constructed from sampled images. Since the far scene has no distinct variation in depth, it is represented by environment mapping. Finally, the two parts are assembled together, and their illumination interactions are computed. The main contributions of this work can be summarized as follows.

For environment mapping, we adopt two kinds of representations: cylindrical and spherical panoramas. An automatic method is proposed to create cylindrical panoramas, with two distinct features. First, it adopts an FFT-based phase-correlation algorithm to register images, which avoids the extraction of image features. Second, it employs Laplacian-pyramid image fusion, thus achieving seamless stitching.

In addition, we generalize the cylindrical mosaic method and implement a new method for spherical panoramic mosaics. This method uses a recursive strategy to register images, which reduces the number of images involved and avoids complicated global optimization. To prevent the accumulation of registration errors, the sampled images are mosaicked into a spherical panorama using image-division and warping algorithms.
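The abstract does not spell out the registration step; as a rough illustration of the FFT-based phase-correlation idea it names, the sketch below estimates the integer translation between two overlapping frames from the peak of the normalized cross-power spectrum. The function and parameter names are ours, not the dissertation's.

```python
import numpy as np

def phase_correlation(img_a, img_b):
    """Estimate the integer (dy, dx) shift that maps img_b onto img_a.

    Both inputs are 2-D grayscale arrays of the same shape. The inverse
    FFT of the normalized cross-power spectrum peaks at the translation.
    """
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    # Dividing by the magnitude keeps only the phase, which is what
    # encodes the translation between the two images.
    cross = fa * np.conj(fb)
    cross /= np.maximum(np.abs(cross), 1e-12)
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (FFT wrap-around).
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
```

Since phase correlation recovers pure translations, frames from a rotating camera are typically warped to cylindrical coordinates first, so that the inter-frame motion becomes an approximately horizontal shift.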
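The Laplacian-pyramid fusion step can likewise only be hinted at from the abstract; the following simplified sketch blends each frequency band of two registered images under a mask pyramid (a full implementation would low-pass filter before decimating, which is omitted here for brevity, and all names are assumptions).

```python
import numpy as np

def _down(img):
    """2x decimation (a full implementation would Gaussian-blur first)."""
    return img[::2, ::2]

def _up(img, shape):
    """Nearest-neighbour 2x expansion, cropped back to `shape`."""
    return img.repeat(2, axis=0).repeat(2, axis=1)[:shape[0], :shape[1]]

def laplacian_blend(a, b, mask, levels=4):
    """Seamlessly blend overlapping 2-D float images a and b.

    `mask` is 1.0 where a should dominate and 0.0 where b should.
    Blending each band separately mixes low frequencies over wide
    transition zones and high frequencies over narrow ones, which is
    what hides the stitching seam.
    """
    ga, gb, gm = a.astype(float), b.astype(float), mask.astype(float)
    bands = []
    for _ in range(levels):
        da, db = _down(ga), _down(gb)
        la = ga - _up(da, ga.shape)              # detail band of a
        lb = gb - _up(db, gb.shape)              # detail band of b
        bands.append(gm * la + (1.0 - gm) * lb)  # blend this band
        ga, gb, gm = da, db, _down(gm)
    out = gm * ga + (1.0 - gm) * gb              # blend the coarsest level
    for band in reversed(bands):
        out = _up(out, band.shape) + band        # collapse the pyramid
    return out
```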
Based on the theory of visual hulls, we propose an improved method to create point based models of real objects. This method has two advantages. First, a uniform-interval index table is adopted to organize the silhouette edges of each sample image, which provides much flexibility for point sampling. Second, by combining the splatting algorithm with Layered Depth Buffers (LDB), we implement an algorithm for determining the points' visibility and reflectance properties that is more accurate than previous methods.

For point based rendering, we accelerate the EWA (elliptical weighted average) surface splatting algorithm with graphics hardware. By analyzing the view frustum in graphics hardware, we derive an equivalent projective transformation, which enables a uniform formulation of surface splatting for both the software and hardware implementations. Moreover, the depth of each fragment is computed accurately, which avoids the visibility errors induced by the nonlinear depth distribution.

Finally, we implement a rendering system that combines point based models with spherical panoramas. To achieve photorealistic rendering, the system takes the panorama as incident light and computes the irradiance at each point. Furthermore, a preprocessing step interpolates the discrete reflectance properties of the sampled points, which reduces the storage and computation required during rendering.
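The relighting details are not given in the abstract; the sketch below only illustrates the general idea of treating a panorama as a set of distant incident lights and accumulating diffuse irradiance at a surface point. The equirectangular layout, the axis convention, and all names are our assumptions.

```python
import numpy as np

def diffuse_irradiance(panorama, normal):
    """Accumulate diffuse irradiance at a point with unit normal `normal`.

    `panorama` is an equirectangular (H, W, 3) radiance map: row v spans
    the polar angle theta in [0, pi], column u spans the azimuth phi in
    [0, 2*pi). Each texel is treated as a distant light covering solid
    angle sin(theta) * dtheta * dphi, weighted by the clamped cosine
    between its direction and the normal.
    """
    h, w, _ = panorama.shape
    theta = (np.arange(h) + 0.5) * np.pi / h          # polar angle per row
    phi = (np.arange(w) + 0.5) * 2.0 * np.pi / w      # azimuth per column
    sin_t = np.sin(theta)
    # Unit direction of each texel on the sphere (y is "up" here).
    dirs = np.stack([
        np.outer(sin_t, np.cos(phi)),
        np.outer(np.cos(theta), np.ones(w)),
        np.outer(sin_t, np.sin(phi)),
    ], axis=-1)                                       # (H, W, 3)
    cos_term = np.clip(dirs @ normal, 0.0, None)      # clamped n . l
    solid_angle = sin_t[:, None] * (np.pi / h) * (2.0 * np.pi / w)
    weight = cos_term * solid_angle                   # (H, W)
    return (panorama * weight[..., None]).sum(axis=(0, 1))  # RGB irradiance

# Example: irradiance at a point facing straight up.
# E = diffuse_irradiance(pano, np.array([0.0, 1.0, 0.0]))
```

For a diffuse point, the outgoing radiance is then the albedo over pi times this irradiance; glossy materials would additionally weight the panorama by the BRDF, as the keywords below suggest.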
Keywords/Search Tags: Image Based Modeling and Rendering, Point Based Rendering, Environment Mapping, Image Based Relighting, Bidirectional Reflectance Distribution Function (BRDF), Hardware Acceleration