
Spatially encoded image-space simplifications for interactive walkthrough

Posted on: 2003-07-29
Degree: Ph.D.
Type: Dissertation
University: The University of North Carolina at Chapel Hill
Candidate: Wilson, Andrew Thomas
Full Text: PDF
GTID: 1468390011485210
Subject: Computer Science
Abstract/Summary:
Many interesting geometric environments contain more primitives than standard rendering techniques can handle at interactive rates. Sample-based rendering acceleration methods, such as the use of impostors for distant geometry, can be considered simplification techniques in that they replace primitives with a representation that contains less information but is less expensive to render.

In this dissertation we address two problems related to the construction, representation, and rendering of image-based simplifications. First, we present an incremental algorithm for generating such samples based on estimates of the visibility error within a region. We use the Voronoi diagram of existing sample locations to find candidate new viewpoints and an approximate, hardware-accelerated visibility measure to evaluate each candidate. Second, we present spatial representations for databases of samples that exploit image-space and object-space coherence to reduce both storage overhead and runtime rendering cost. The image portion of a sample database is represented using spatial video encoding, a generalization of standard MPEG-2 video compression that operates in a 3D space of images instead of a 1D temporal sequence. Spatial video encoding yields an average compression ratio of 48:1 on a database of over 22,000 images. We represent the geometric portion of our samples as a set of incremental textured depth meshes organized by a spanning tree over the set of sample viewpoints. The view-dependent nature of textured depth meshes is exploited during geometric simplification to further reduce storage and rendering costs. By removing redundant points from the database of samples, we realize a 2:1 savings in storage space and nearly a 6:1 savings in preprocessing time compared with processing all points in all samples.

At runtime, the spatial encodings constructed from groups of samples replace geometry far from the user's viewpoint; nearby geometry is not altered. On a model of a coal-fired power plant containing 12.7 million triangles, we achieve a 10–15x improvement in frame rate over static geometric levels of detail with little loss in image fidelity. Moreover, our approach lessens the severity of the reconstruction artifacts present in previous methods such as textured depth meshes.
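The incremental sample-placement loop can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: the visibility_error() function below is a hypothetical stand-in (distance to the nearest existing sample) for the approximate hardware-accelerated visibility measure, and the region of interest is assumed to be an axis-aligned box.

```python
# Sketch of the Voronoi-driven incremental sampling loop described above.
# visibility_error() is a placeholder proxy, not the thesis's GPU-based measure.
import numpy as np
from scipy.spatial import Voronoi

def visibility_error(candidate, samples):
    """Crude proxy for visibility error: distance from the candidate
    viewpoint to the nearest existing sample location."""
    return np.min(np.linalg.norm(samples - candidate, axis=1))

def next_viewpoint(samples, region_min, region_max):
    """Candidate viewpoints are the Voronoi vertices of the existing
    sample locations; the in-region candidate with the largest estimated
    visibility error becomes the next sample."""
    vor = Voronoi(samples)
    candidates = [v for v in vor.vertices
                  if np.all(v >= region_min) and np.all(v <= region_max)]
    if not candidates:
        return None
    return max(candidates, key=lambda v: visibility_error(v, samples))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    region_min, region_max = np.zeros(3), np.ones(3)
    samples = rng.uniform(0.0, 1.0, size=(8, 3))   # initial seed viewpoints
    for _ in range(5):
        v = next_viewpoint(samples, region_min, region_max)
        if v is None:
            break
        samples = np.vstack([samples, v])
    print(f"placed {len(samples)} sample viewpoints")
```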
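To make the spatial-prediction idea behind spatial video encoding concrete, the toy coder below predicts each image in a 3D lattice of sample viewpoints from an already-encoded spatial neighbor, chosen by a breadth-first spanning order, rather than from a temporal predecessor, and stores only the residual. The lattice layout and the lossless difference coding are assumptions for illustration; the actual codec generalizes MPEG-2 compression rather than storing raw residuals.

```python
# Toy spatial predictive coder over a 3D lattice of images: each image is
# predicted from a previously encoded spatial neighbor instead of the
# previous frame in time. Illustrative only; not the dissertation's codec.
from collections import deque
import numpy as np

def spanning_order(shape):
    """BFS over the 3D lattice of viewpoints, yielding (node, parent) pairs.
    The root has parent None; every other image is predicted from a
    previously visited 6-connected neighbor."""
    root = (0, 0, 0)
    seen, queue, order = {root}, deque([root]), [(root, None)]
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in [(1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)]:
            n = (x+dx, y+dy, z+dz)
            if all(0 <= c < s for c, s in zip(n, shape)) and n not in seen:
                seen.add(n)
                order.append((n, (x, y, z)))
                queue.append(n)
    return order

def encode(images):
    """Store each image as a residual against its spatial parent."""
    residuals = {}
    for node, parent in spanning_order(images.shape[:3]):
        pred = np.zeros_like(images[node]) if parent is None else images[parent]
        residuals[node] = images[node] - pred   # a real codec would compress this
    return residuals

def decode(residuals, shape):
    """Reconstruct images by walking the same spanning order."""
    out = {}
    for node, parent in spanning_order(shape):
        pred = 0 if parent is None else out[parent]
        out[node] = residuals[node] + pred
    return out

if __name__ == "__main__":
    imgs = np.random.default_rng(1).integers(0, 256, size=(3, 3, 3, 8, 8)).astype(np.int16)
    rec = decode(encode(imgs), imgs.shape[:3])
    assert all(np.array_equal(rec[k], imgs[k]) for k in rec)
    print("lossless round-trip over", len(rec), "images")
```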
Keywords/Search Tags: Textured depth meshes, Spatial, Rendering, Geometric