
Computers & Graphics

Volume 32, Issue 4, August 2008, Pages 420-429

Volume visualization and exploration through flexible transfer function design

https://doi.org/10.1016/j.cag.2008.04.004

Abstract

Direct volume rendering (DVR) is a well-known method for exploring volumetric data sets. Optical properties are assigned to the volume data, and a DVR algorithm then produces visualizations by sampling volume elements and projecting them onto the image plane. The mapping from voxel values to optical attribute values is known as the transfer function (TF). The quality of a visualization is therefore highly dependent on the TF employed, but specifying one is a non-trivial and unintuitive task. Without any help during the TF design process, the user goes through a frustrating and time-consuming trial-and-error cycle. This paper presents a useful combination of TF design techniques in an interactive workspace for volume visualization. Our strategy relies on semi-automatic TF generation methods: boundary emphasis, stochastic evolutive search in TF space, and manual TF specification aided by dual domain interaction. A two-level user interface was also developed. The first level provides multiple simultaneous interactive visualizations of the volume data using different TFs, while the second displays a detailed visualization of a single TF and the corresponding rendered volume. In the second level, the TF can also be manually refined and the volume further inspected through geometric tools. The techniques combined in this work are complementary, allowing easy and fast TF design and data exploration.

Introduction

Volume rendering is widely known as a set of methods for visualizing large three-dimensional (3D) scalar or vector fields, mainly in medical and scientific data exploration. In these areas, one often deals with 3D images, such as those obtained from CT and MRI devices, or with numerical simulation data. Volume rendering techniques and algorithms are well described in the literature [1], and can be classified as isosurface extraction methods and direct volume rendering (DVR) methods. The former extract polygonal meshes representing isosurfaces in the volume and then use the traditional rendering pipeline to display the meshes (see [2] for a well-known example). DVR methods, on the other hand, display volume data without extracting an intermediate geometry. DVR and its advantages were first described by Levoy [3]. Modern graphics hardware supports volume rendering at interactive rates using either of these approaches.

To obtain useful images through DVR, voxels have to be classified in order to determine which ones must be displayed. This classification is typically performed by transfer functions (TFs), which assign optical attribute values to voxels based on their data values. Opacity and color are the most common optical properties used in TFs. The degree of opacity makes a voxel more or less visible and is normally used to emphasize voxels on boundaries between different homogeneous regions of the volume [4]. Other optical properties may also be used, such as specular reflection coefficients [5], spectral reflectance [6] and light scattering coefficients [7]. More recently, the concept of style TFs was introduced by Bruckner and Gröller [8], who used TFs to define the rendering style of volume regions based on data values and eye-space normals. The information conveyed by an image built from volume data is, therefore, highly dependent on the quality of the TF. However, TF design is a non-trivial and unintuitive task, and has been referred to as one of the top 10 problems in volume visualization [9].

One-dimensional (1D) TFs take into account only scalar voxel values; they are the most common TFs, although they have limited classification power. On the other hand, multi-dimensional TFs allow more freedom in voxel classification by taking as arguments vector values or combinations of local measures of scalar fields, such as derivative values [10], [11], neighborhood, position [12], curvature [13], [14] and statistical signatures [15]. Nevertheless, design complexity grows with the size of the TF domain [10], and the memory required to implement truly multi-dimensional TFs restricts their application [16]. In this work, we adopted 1D TFs due to their simplicity and low memory requirements, since they can be implemented as small lookup tables. Furthermore, the pre-integrated volume rendering technique proposed by Engel et al. [17] allows high-quality DVR at interactive rates using 1D TFs.

Designing TFs with no assistance leads to trial-and-error efforts. Therefore, several automatic and semi-automatic techniques for specifying TFs have been proposed [4], [8], [15], [18], [19], [20], [21], [22], [23]. They can be guided by analysis of the volumetric data (data-driven) or analysis of the generated images (image-driven) [9]. In any case, to make the process less frustrating and less time-consuming, the user must be given a rapid feedback with real-time rendering frame rates.

The main contribution of our work is a two-level interface method that combines a set of useful tools for semi-automatic 1D TF design and fast data exploration in an interactive workspace. The first level of our interface presents several thumbnails of the volume data rendered with different TFs, allowing immediate insight into the main structures in the data set. The second level shows a detailed visualization of a single TF as well as the resulting rendering. TFs can be easily generated and refined using semi-automatic boundary emphasis, stochastic evolutive search and manual design aided by dual domain interaction [11]. These three approaches are complementary and were successfully combined in this work, improving on the idea of two-level interaction proposed by Prauchner et al. [24].

This paper extends our previous work [25] with an improved interface and an experimental evaluation of our approach. We implemented a history tree that keeps track of the TF evolution and allows the user to go back to a previously specified TF. The evaluation was performed as an experiment with 15 subjects who completed two visualization tasks with different data sets. In this way, we tested the usability of our methods with potential users. We also implemented a set of geometric tools for volume inspection inspired by the work of Dietrich et al. [26], but their description is beyond the scope of this article.

The paper is organized as follows. Section 2 discusses the most closely related work. Section 3 describes the proposed interface and the TF design techniques provided within it. Implementation details are addressed in Section 4, while Section 5 presents the evaluation of our tools. Finally, in Section 6, we draw conclusions and point out directions for future work.

Section snippets

Related work

The TF specification problem has received much attention from researchers. Traditional approaches rely on user effort in adjusting control points of a graphic plot of the TF [27]. The control points—scalar values associated with values of optical attributes—are then interpolated in order to build the TF. However, with no clues or prior knowledge about the data, this is a “blind process”. Some data-driven approaches provide users with higher-level information [18], [23] that helps in obtaining

TF specification

Most researchers agree that TF specification methods should not overload the users nor exclude them from the process [9]. The quality of a TF depends on the amount of information conveyed by the generated image—a subjective metric. Therefore, it is hard to automatically evaluate how “good” a TF is. Fully automatic TF specification methods may miss important features of the volume while completely manual TF design may demand a lot of effort and time, mainly from users that do not have prior

Implementation details

Our volume rendering tool was implemented in C++ using the GLUT [35] and GLUI [36] libraries for the interface (see Fig. 8), and OpenGL [37] and Cg [38] for volume visualization. The rendering algorithm runs on the GPU and is based on 3D texture sampling using view-aligned slices as proxy geometry. The number of slices can be changed by the user, and is automatically reduced while the volume is being rotated to guarantee interactive rates in both levels of interaction. It is worth mentioning that

Evaluation and discussion

In this section, we describe the experiment performed to evaluate our TF design technique and then present the results. Assuming that our manual TF design tool with dual domain interaction would be at least as good as traditional interfaces based on the adjustment of control points, we aimed to demonstrate the usefulness of our interface by comparing the time and interaction steps needed to build suitable visualizations of selected data sets using all resources of the workspace with the

Conclusions and future work

Despite the considerable attention devoted to the TF specification problem, TF design is still a hard task. We are far from an ideal solution, but several TF specification methods have proved useful. We developed an interactive general-purpose volume visualization tool with high-quality volume rendering by adapting, extending and combining known TF specification techniques. Compared with other methods, ours successfully combines two different classes of approaches (image-driven and

Acknowledgments

We thank J.L. Prauchner and his co-authors for kindly providing us their tool, which was the starting point of ours, and the colleagues from the CG group at UFRGS who served as subjects in our evaluation. We also acknowledge the financial support from CNPq.

References (38)

  • K. Brodlie et al. Recent advances in volume visualization. Computer Graphics Forum (2001)
  • W.E. Lorensen et al. Marching cubes: a high resolution 3D surface construction algorithm
  • M. Levoy. Display of surfaces from volume data. IEEE Computer Graphics and Applications (1988)
  • G. Kindlmann et al. Semi-automatic generation of transfer functions for direct volume rendering
  • E.B. Lum et al. Lighting transfer functions using gradient aligned sampling
  • M. Tory. A practical approach to spectral volume rendering. IEEE Transactions on Visualization and Computer Graphics (2005)
  • J. Kniss et al. A model for volume lighting and modeling. IEEE Transactions on Visualization and Computer Graphics (2003)
  • S. Bruckner et al. Style transfer functions for illustrative volume rendering. Computer Graphics Forum (2007)
  • H. Pfister et al. The transfer function bake-off. IEEE Computer Graphics and Applications (2001)
  • J. Kniss et al. Multidimensional transfer functions for interactive volume rendering. IEEE Transactions on Visualization and Computer Graphics (2002)
  • J. Kniss et al. Interactive volume rendering using multidimensional transfer functions and direct manipulation widgets
  • F.-Y. Tzeng et al. An intelligent system approach to higher-dimensional classification of volume data. IEEE Transactions on Visualization and Computer Graphics (2005)
  • G. Kindlmann et al. Curvature-based transfer functions for direct volume rendering: methods and applications
  • J. Hladuvka et al. Curvature-based transfer functions for direct volume rendering
  • S. Tenginakai et al. Salient iso-surface detection with model-independent statistical signatures
  • J. Kniss et al. Gaussian transfer functions for multi-field volume visualization
  • K. Engel et al. High-quality pre-integrated volume rendering using hardware-accelerated pixel shading
  • C.L. Bajaj et al. The contour spectrum
  • S. Fang et al. Image-based transfer function design for data exploration in volume visualization