
Massive data management for distributed volume visualization

Posted on: 2009-10-10
Degree: Ph.D.
Type: Dissertation
University: State University of New York at Stony Brook
Candidate: Frank, Susan Lavis
GTID: 1448390005960659
Subject: Computer Science

Abstract/Summary:
As memory and compute power become more affordable, data sets have grown rapidly in size, driven by the improved accuracy and memory capacity of data-collecting devices. We present a framework that automates load-balanced volume distribution and ray-task scheduling for parallel rendering of massive volumes. Our framework is designed to automate the volume distribution process for parallel ray-directed algorithms, including ray casting and ray tracing, on a heterogeneous mix of hardware. This parallelization may occur within new custom-designed hardware, on multiple processing units, on a distributed supercomputer, or on a visualization cluster.

High-resolution and very large scale volume data sets frequently contain large portions of empty space. We remove empty space early in the pipeline to reduce both the bandwidth required to send volume data across a cluster and the processing cycles required to load volume data into local memory during rendering. We introduce the flex-block partition, which gives an unambiguous ray traversal order for any view direction. The scene is partitioned into cells, each containing one or more closely cropped blocks of volume data, or flex-blocks. Scene partitioning is driven by these tightly cropped subvolumes, allowing volume boundaries to be followed in a natural manner.

Managing massive volumetric data requires redesigning traditional algorithms for out-of-core implementation. Our out-of-core region growing algorithm performs region growing on a consecutive group of slices, or slab. The output is a mask volume used for volume segmentation and empty-space cropping. The last slice of the mask volume supplies a set of independent region seeds for region growing on the next slab. Special consideration is taken to handle regions splitting and/or merging within a slab. Our slab-projection slice is used to gather non-empty region information for one slab of data at a time.
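The slab-at-a-time idea can be sketched as follows. This is a minimal single-region flood fill in Python, not the dissertation's algorithm: the function name `grow_slab`, the thresholded 6-connected growth criterion, and the rule of filtering carried-over seeds against the new slab are all illustrative assumptions, and the handling of region splits and merges described above is omitted.

```python
from collections import deque

def grow_slab(slab, seeds, threshold):
    """Flood-fill one slab (a list of 2D slices) from seed voxels.

    Returns a binary mask for the slab, plus the masked voxels on the
    slab's last slice (re-indexed to z=0) to seed the next slab.
    Growth criterion (an assumption here): 6-connected neighbors whose
    value is >= threshold.
    """
    nz, ny, nx = len(slab), len(slab[0]), len(slab[0][0])
    mask = [[[0] * nx for _ in range(ny)] for _ in range(nz)]
    queue = deque()
    # Only keep seeds that still belong to the region in this slab.
    for z, y, x in seeds:
        if slab[z][y][x] >= threshold and not mask[z][y][x]:
            mask[z][y][x] = 1
            queue.append((z, y, x))
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0),
                           (0,-1,0), (0,0,1), (0,0,-1)):
            z2, y2, x2 = z + dz, y + dy, x + dx
            if (0 <= z2 < nz and 0 <= y2 < ny and 0 <= x2 < nx
                    and not mask[z2][y2][x2]
                    and slab[z2][y2][x2] >= threshold):
                mask[z2][y2][x2] = 1
                queue.append((z2, y2, x2))
    # Seeds for the next slab: masked voxels on this slab's last slice.
    next_seeds = [(0, y, x) for y in range(ny) for x in range(nx)
                  if mask[nz - 1][y][x]]
    return mask, next_seeds
```

Only one slab of slices is ever resident, so memory use is bounded by the slab size rather than the volume size; the seed hand-off is the only state carried between slabs.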
Our out-of-core bricking creates cropped subvolumes, or bricks, sized for a target memory. A directed acyclic graph (DAG) representing the relative distance of the bricks for a given viewpoint and view direction is built concurrently. Our slab-projected kd-tree partitioning uses a series of slab-projection slices to partition data between nodes for parallel rendering applications.

We have explored dynamic programming (DP) solutions to the load balancing network distribution (LBND) problem that are particularly applicable to scenes with large portions of unevenly distributed empty space. The first, brick grouping, takes as input a directed acyclic graph of data bricks and finds an optimal partition with respect to a particular view direction, set of hardware constraints, and cost model. We minimize a cost function that reflects the end-to-end rendering cost on a cluster. The algorithm's average time complexity is reduced by limiting the average number of bricks, which is a function of the brick size. The second DP solution, moving walls, takes slab-projection slices as input and incrementally builds a flex-block partition by locally optimizing for a given set of constraints and cost model. The output is a view-independent, approximate solution. The potential size of the problem is greatly reduced by imposing restrictions on the cut-plane ordering. Empirical results indicate that these algorithms achieve good load balancing.

We present a dependency-graph scheduling algorithm for distributed ray tracing. Our cell-tree gathers clusters of eye, shadow, reflected, and refracted rays into a compact description of all ray dependencies. Our cell-tree peeling algorithm exploits frame-to-frame coherence: it determines a memory caching schedule from the ray traversal order of the previous frame, which is encoded in the cell-tree, to predict a good schedule. This algorithm works with any scene partition that allows an unambiguous priority order for ray traversal.
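The flavor of these DP load-balancing solutions can be illustrated with a textbook contiguous-partition DP. This is a sketch under simplifying assumptions, not the dissertation's brick-grouping algorithm: here `costs` is just a list of per-brick render costs in a fixed traversal order (standing in for the brick DAG and the cluster cost model), and the objective is to split it into `k` contiguous groups, one per node, minimizing the bottleneck group cost.

```python
def balance_partition(costs, k):
    """Split `costs` into k contiguous groups, minimizing the maximum
    group cost. Returns (optimal bottleneck cost, list of groups).
    Illustrative only: a hypothetical stand-in for DAG-based brick
    grouping, using a classic O(k * n^2) dynamic program.
    """
    n = len(costs)
    prefix = [0] * (n + 1)
    for i, c in enumerate(costs):
        prefix[i + 1] = prefix[i] + c
    INF = float("inf")
    # best[j][i]: minimal bottleneck for the first i bricks in j groups.
    best = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    best[0][0] = 0
    for j in range(1, k + 1):
        for i in range(1, n + 1):
            for m in range(j - 1, i):
                cand = max(best[j - 1][m], prefix[i] - prefix[m])
                if cand < best[j][i]:
                    best[j][i] = cand
                    cut[j][i] = m
    # Recover the groups by walking the cut table backwards.
    groups, i = [], n
    for j in range(k, 0, -1):
        m = cut[j][i]
        groups.append(costs[m:i])
        i = m
    groups.reverse()
    return best[k][n], groups
```

A scene with heavy bricks at its extremes and cheap (mostly empty) bricks in between, such as `[9, 1, 1, 1, 9]` over three nodes, yields the groups `[9]`, `[1, 1, 1]`, `[9]`: the empty middle is lumped together, echoing why these DP solutions pay off on scenes with unevenly distributed empty space.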
(Abstract shortened by UMI.)
Keywords/Search Tags: Data, Volume, Ray, Distributed, Partition, Algorithm, Memory, Massive