Tutorial

João Luiz Dihl Comba 1, Claudio T. Silva 2, Fábio F. Bernardon 1, Steven P. Callahan 2

1 Universidade Federal do Rio Grande do Sul (UFRGS), Instituto de Informática
2 Scientific Computing and Imaging Institute, University of Utah

Harvesting the power of special-purpose Graphics Processing Units (GPUs) to produce real-time volume rendering of large unstructured meshes is a major research goal in the scientific visualization community. While texture-based techniques for regular grids are well suited to current GPUs, the steps necessary for rendering unstructured meshes do not map as easily to current hardware.

In this tutorial we review state-of-the-art volume rendering techniques for unstructured grids that simplify the CPU-based processing and shift much of the processing burden to the GPU, where it can be performed more efficiently. The presentation focuses on two different techniques that solve this problem using object-space and image-space approaches. For each technique we review its fundamental ideas, describe its GPU implementation, and discuss the results.

The first algorithm we review is called Hardware-Assisted Visibility

Sorting (HAVS). It is a hybrid technique that operates in both

object-space and image-space. In object-space, the algorithm performs a

partial sort of the 3D primitives in preparation for rasterization. The

goal of the partial sort is to create a list of primitives that generate

fragments in nearly sorted order. In image-space, the fragment stream is

incrementally sorted using a fixed-depth sorting network. In this

algorithm, the object-space work is performed by the CPU and the

fragment-level sorting is done completely on the GPU. The results to be discussed demonstrate that the fragment-level sorting achieves rendering rates between one and six million tetrahedral cells per second on an ATI Radeon 9800.
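The two HAVS stages can be sketched on the CPU for a single pixel. The sketch below is illustrative only: the tuple layout, the scalar color model, and the function names are our assumptions, and the real k-buffer runs as a fragment program on the GPU rather than a Python loop.

```python
def havs_pixel(fragments, k=2):
    """Composite fragments front-to-back via a k-entry sorting buffer.

    fragments: iterable of (depth, color, alpha), *nearly* sorted by depth.
    Returns accumulated (color, alpha) for this pixel.
    """
    buf = []                                  # the k-buffer: at most k in-flight fragments
    out_color, out_alpha = 0.0, 0.0

    def composite(color, alpha):
        nonlocal out_color, out_alpha
        out_color += (1.0 - out_alpha) * alpha * color
        out_alpha += (1.0 - out_alpha) * alpha

    for frag in fragments:
        buf.append(frag)
        if len(buf) > k:                      # buffer full: emit nearest fragment
            buf.sort(key=lambda f: f[0])
            depth, color, alpha = buf.pop(0)
            composite(color, alpha)

    for depth, color, alpha in sorted(buf):   # flush the remaining entries in order
        composite(color, alpha)
    return out_color, out_alpha

# Object-space stage: a partial sort of cells by centroid depth, which
# yields the nearly sorted fragment stream the k-buffer can fix up.
cells = [(2.0, 0.5, 0.5), (1.0, 1.0, 0.5), (3.0, 0.2, 0.5)]
cells.sort(key=lambda cell: cell[0])          # centroid-depth sort on the "CPU"
print(havs_pixel(cells, k=2))
```

Because the centroid sort leaves fragments only a few positions out of order, a small fixed k suffices, which is what makes the image-space stage cheap enough to run per fragment.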

The second algorithm to be discussed is called GPU-based Ray Casting. Computation is performed entirely on the GPU by advancing intersections against the mesh while evaluating the volume rendering integral, using an efficient and compact representation of the mesh data in 2D textures. In addition, a tile-based subdivision of the screen allows

computation to proceed only at places where it is required, thus

reducing fragment processing in the GPU. Finally, a depth-peeling

approach that captures when rays re-enter the mesh is described, which

is much more general and does not require a convexification algorithm.

This technique can render truly non-convex meshes, such as the Blunt Fin, at rates between 400 Ktet/sec and 1.3 Mtet/sec.
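The per-ray loop described above can be sketched as a CPU-side function. The (color, extinction, length) tuples stand in for the per-cell lookups that the GPU version performs against mesh data in 2D textures; the names and the scalar color model are illustrative assumptions, not the tutorial's actual implementation.

```python
import math

def integrate_ray(segments):
    """Front-to-back emission-absorption integral along one ray.

    segments: list of (color, extinction, length), one per ray/cell
    intersection, in ray order. Returns (accumulated color, opacity).
    """
    color, transparency = 0.0, 1.0
    for c, tau, dt in segments:
        alpha = 1.0 - math.exp(-tau * dt)   # opacity of this segment
        color += transparency * alpha * c   # composite front to back
        transparency *= 1.0 - alpha
        if transparency < 1e-3:             # early ray termination: a
            break                           # finished ray stops computing
    return color, 1.0 - transparency
```

The early-termination test plays the same role as the tile-based screen subdivision: fragments whose rays are done contribute no further work.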

To complement the presentation of the two algorithms described above, we discuss extensions that allow handling even larger meshes using a new level-of-detail approach, and a vector quantization solution that compresses time-varying scalar fields into a format suitable for interactive exploration.
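The vector quantization idea can be illustrated with a minimal encode/decode pair. Everything here is hypothetical: the block layout, codebook, and function names are our assumptions, and building the codebook (e.g. by k-means training over the time steps) is omitted.

```python
def vq_encode(blocks, codebook):
    """Replace each block of scalar values by the index of its nearest codeword."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: dist2(block, codebook[i]))
            for block in blocks]

def vq_decode(indices, codebook):
    """Decompression is a single table lookup per block, cheap enough
    to keep exploration of the time-varying field interactive."""
    return [codebook[i] for i in indices]
```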