Elsevier

Computers & Graphics

Volume 27, Issue 2, April 2003, Pages 189-204

Rendering complex scenes using spatial subdivision and textured LOD meshes

https://doi.org/10.1016/S0097-8493(02)00276-5

Abstract

We present a hybrid rendering scheme that exploits the locality of visibility at the cost of extra storage and prefetching, and trades image quality for rendering efficiency by using textured level-of-detail (LOD) meshes. The view space is first subdivided into cells. For each cell, objects inside the cell are rendered as usual, while objects outside it are rendered as textured LOD meshes using projective texture mapping. The textured LOD meshes are object-based and derived from the original meshes using the depth images captured at the centers of the cell and its adjacent cells. With such textured LOD meshes, problems commonly found in image-based rendering, such as holes due to occlusion among objects and gaps due to resolution mismatch, can be avoided. The size of holes due to self-occlusion is constrained to be within a user-specified tolerance. Several scenes with millions of polygons have been tested, and frame rates above 200 FPS have been achieved with little loss of image quality.

Introduction

In order to achieve an immersive visual effect during VR navigation, rendering photo-realistic scene images at high frame rates has been an ultimate goal of real-time rendering. In traditional geometry-based rendering, very complex scenes often consist of so many polygons that they cannot be rendered at an acceptable frame rate even on state-of-the-art hardware. Many techniques have been proposed in recent decades for reducing the polygon count while preserving the visual realism of complex scenes, including visibility culling, level-of-detail (LOD) modeling, and image-based rendering (IBR). Although IBR is capable of rendering complex scenes with photo-realistic images in time independent of scene complexity, it suffers from static lighting, limited viewing freedom, and losses of image quality due to gaps and holes. As a consequence, hybrid rendering that combines geometry- and image-based techniques has become a viable alternative.

As representations for an object or a region of the scene, several image-based or hybrid forms have been proposed. Shade et al. [1] described a paradigm in which a region or object can be represented by an environment map, a planar sprite, a sprite with depth, a layered depth image (LDI), or a polygonal mesh, depending on its distance to the viewer. Although the scheme integrates several existing representations, each individual form has its own problems. For example, sprites generally suffer from gap problems due to resolution mismatch and must be re-computed once the viewer leaves the safe region. An LDI can only be drawn in software rendering with splatting. Finally, transitions between different representations may produce noticeable popping effects.

To reduce gap problems due to resolution mismatch and to improve the efficiency of pixel-based rendering, depth meshes can be extracted from a sprite with depth based on depth variation. However, rubber-sheet artifacts between disjoint surfaces are often encountered, and re-projecting pixel coordinates back to 3D coordinates may cause precision problems. The depth mesh approach can be combined with space subdivision: when navigating inside a cell, distant objects are rendered as textured depth meshes while nearby objects are rendered with selected LOD models. With such approaches, the polygon count of a complex scene can still be high and, most importantly, the transition between an LOD model and a textured depth mesh generally results in visually noticeable popping effects.

Another, more uniform representation is LOD modeling, which can be combined with texture mapping to recover surface details. View-independent LOD modeling has no control over silhouettes during navigation. View-dependent LOD modeling, however, must handle silhouettes at run-time by maintaining a fine-resolution mesh along them. Silhouette clipping, which combines LOD modeling with normal/texture maps, must extract fine silhouettes at run-time, which is generally time-consuming.

A hybrid rendering scheme that aims to render complex scenes at a constant, high frame rate with little or acceptable quality loss is presented in this paper. To this end, the view space is partitioned into cells to exploit the locality of visibility, and for each view cell, every object outside the cell is represented by an LOD mesh together with textures derived with respect to that cell. All of this is done in preprocessing. In contrast with IBR or depth mesh approaches, the object-based LOD mesh derivation avoids hole problems due to occlusion among objects. Meanwhile, to reduce hole problems due to self-occlusion, the LOD mesh is classified as either a single-view LOD mesh (termed SVMesh) or a multi-view LOD mesh (termed MVMesh), depending on the object's self-occlusion error with respect to the view cell. The SVMesh is chosen if the object's self-occlusion error is smaller than a user-specified tolerance; otherwise, the MVMesh is chosen. This condition on the SVMesh ensures that any potential holes in images viewed from any point inside the cell are smaller than the user-specified tolerance. Hence, all the information needed to derive the SVMesh and its associated texture comes from the image and depth image captured at the cell's center. The MVMesh, on the other hand, provides the geometry and texture necessary to avoid holes in images viewed from any point in the cell; its derivation and texture associations are therefore based on images and depth images captured from the cell's center as well as the centers of adjacent cells. In the proposed scheme, prefetching is also implemented to preload the data needed for upcoming cells, so that sudden drops in frame rate at cell transitions can be avoided.
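The per-object classification described above can be sketched as follows. This is a minimal illustration under stated assumptions: the function and parameter names are hypothetical, and the paper's actual definition of the self-occlusion error is not reproduced here.

```python
def classify_mesh(self_occlusion_error: float, tolerance: float) -> str:
    """Choose a single-view (SVMesh) or multi-view (MVMesh) textured LOD
    mesh for an object outside the view cell.

    `self_occlusion_error` stands in for the object's self-occlusion
    error measured with respect to the view cell; `tolerance` is the
    user-specified hole-size tolerance.
    """
    if self_occlusion_error < tolerance:
        # Holes in any view from inside the cell stay below the tolerance,
        # so the image and depth image captured at the cell's center suffice.
        return "SVMesh"
    # Otherwise, images and depth images from the cell's center and the
    # centers of adjacent cells are needed to cover potential holes.
    return "MVMesh"
```

The threshold test mirrors the paper's rule: only objects whose worst-case self-occlusion holes fit within the tolerance qualify for the cheaper single-view representation.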

The proposed approach exploits the locality of visibility at the cost of extra storage and prefetching, and trades image quality for rendering efficiency by using SVMeshes and MVMeshes together with textures. Our experiments show that, for a scene of 8 million polygons, we achieve frame rates above 200 frames/s with little loss of image quality (average PSNR 37.34 dB). The polygons and textures require about 1260 MB of hard disk storage and about 287 MB of run-time memory on average. At such high frame rates, the overhead of prefetching is hardly noticeable.
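For reference, the PSNR figure quoted above is the standard peak signal-to-noise ratio for 8-bit images. A minimal computation (not the authors' code) shows how the dB value relates to the mean squared error between rendered and reference images:

```python
import math

def psnr(mse: float, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, given the mean squared error
    between two images with the given peak sample value (255 for 8-bit)."""
    return 10.0 * math.log10(peak * peak / mse)

# An MSE of about 12 on 8-bit images corresponds to roughly 37.3 dB,
# in the range of the average PSNR reported in the paper.
quality = psnr(12.0)
```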

Section snippets

Related work

There has been extensive research in the field of real-time rendering, spanning geometry-based rendering, IBR, and hybrid rendering. Although culling, including back-face culling, view-frustum culling, and occlusion culling, is a classical technique for clipping out invisible polygons, many new approaches have been proposed. In [2], a sublinear algorithm was proposed for hierarchical back-face culling. Zhang et al. improved on this by introducing a normal mask, which reduces the per-polygon …

Proposed hybrid rendering scheme

The proposed hybrid scheme consists of a preprocessing phase and a run-time phase. In the preprocessing phase, the xy plane of the given 3D scene is first partitioned into equal-sized hexagonal cells. Then, for each cell, we derive an object-based textured LOD mesh, called an SVMesh or MVMesh, for each object outside the cell. Note that with object-based LOD meshes, holes due to occlusion among objects can be avoided. Furthermore, substituting the original meshes with textured SVMeshes or MVMeshes …
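The preprocessing phase above partitions the xy plane into equal-sized hexagonal cells. One standard way to map a viewer position to its hexagonal cell uses axial hex coordinates with cube rounding; the sketch below is an illustration of that general technique, not the authors' actual partitioning code, and the pointy-top orientation and `size` parameter are assumptions.

```python
import math

def point_to_hex(x: float, y: float, size: float) -> tuple:
    """Map a point on the xy plane to the axial (q, r) coordinates of the
    pointy-top hexagonal cell (with circumradius `size`) containing it."""
    q = (math.sqrt(3.0) / 3.0 * x - y / 3.0) / size
    r = (2.0 / 3.0 * y) / size
    return _hex_round(q, r)

def _hex_round(q: float, r: float) -> tuple:
    """Round fractional axial coordinates to the nearest hex cell by
    rounding in cube coordinates and fixing the largest rounding error."""
    s = -q - r
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    if dq > dr and dq > ds:
        rq = -rr - rs
    elif dr > ds:
        rr = -rq - rs
    return (rq, rr)
```

At run time, such a mapping lets the renderer look up the current view cell (and its neighbours, for prefetching) from the viewer's position in constant time.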

Setup

The test platform is a PC with an AMD Athlon XP 1800+ CPU, 512 MB of main memory, and an NVIDIA GeForce4 Ti 4400 graphics accelerator with 128 MB of DDR RAM. The OS is Windows XP Pro. The output image is at a resolution of 1024×1024 with 32-bit color. S3's S3TC DXT3 is used to compress textures (at a ratio of 1:4).
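The quoted 1:4 compression ratio follows directly from the DXT3 block format: each 4×4 block of 32-bit RGBA texels (64 bytes) is stored as 8 bytes of explicit alpha plus 8 bytes of color data. A quick arithmetic check:

```python
# Bytes per 4x4 block of uncompressed 32-bit (4-byte) RGBA texels.
uncompressed = 4 * 4 * 4   # 64 bytes

# DXT3 stores each 4x4 block as 8 bytes of 4-bit-per-texel explicit
# alpha plus an 8-byte DXT1-style color block.
compressed = 8 + 8         # 16 bytes

ratio = uncompressed // compressed  # 4, i.e. the 1:4 ratio in the text
```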

For efficiency, polygons and objects are represented by vertex IDs and object IDs, respectively. The original meshes are loaded into main memory before navigation. In prefetching objects, …
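The prefetching idea introduced earlier, preloading per-cell data for cells the viewer may enter next so that cell transitions avoid frame-rate drops, can be sketched with a background loader thread. This is a hypothetical structure for illustration, not the paper's implementation; `load_cell` stands in for whatever disk I/O builds a cell's meshes and textures.

```python
import threading
from queue import Queue

class CellPrefetcher:
    """Preload per-cell data (e.g. textured LOD meshes) on a background
    thread so that entering a new view cell does not stall rendering."""

    def __init__(self, load_cell):
        self.load_cell = load_cell  # callable: cell id -> cell data
        self.cache = {}
        self.queue = Queue()
        self.lock = threading.Lock()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        # Drain prefetch requests, loading each cell at most once.
        while True:
            cell = self.queue.get()
            with self.lock:
                if cell in self.cache:
                    continue
            data = self.load_cell(cell)  # disk I/O off the render thread
            with self.lock:
                self.cache[cell] = data

    def request(self, cells):
        """Queue the cells adjacent to the current one for background loading."""
        for cell in cells:
            self.queue.put(cell)

    def get(self, cell):
        """Fetch a cell's data; falls back to a blocking load on a cache miss."""
        with self.lock:
            if cell in self.cache:
                return self.cache[cell]
        data = self.load_cell(cell)
        with self.lock:
            self.cache[cell] = data
        return data
```

The render loop would call `request()` with the neighbouring cell IDs each time the viewer enters a new cell, and `get()` when a transition actually occurs; at the frame rates reported in the paper, the background loads complete well before they are needed.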

Concluding remarks

We have presented a hybrid rendering scheme for the real-time display of complex scenes. The scheme partitions the model space into cells, thereby exploiting the locality of visibility: objects outside a cell are rendered as textured LOD meshes, while objects inside it are rendered as usual. Such a hybrid representation allows us to avoid problems commonly found in image-based rendering, such as the gap problem due to resolution mismatch and the hole problem due to occlusion among …

References (35)

  • L. Darsa et al. Walkthroughs of complex environments using image-based simplification. Computers & Graphics (1998)
  • Shade JW, Gortler SJ, He L-W, Szeliski R. Layered depth images. In: Proceedings of SIGGRAPH ’98, Orlando, Florida, July...
  • Kumar S, Manocha D, Garrett B, Lin M. Hierarchical back-face culling. In: Proceedings of the Seventh Eurographics...
  • Zhang H, Hoff III KE. Fast backface culling using normal masks. In: Proceedings of 13th Symposium on Interactive 3D...
  • Hudson T, Manocha D, Cohen J, Lin M, Hoff K, Zhang H. Accelerated occlusion culling using shadow frusta. In:...
  • Greene N, Kass M, Miller G. Hierarchical Z-buffer visibility. In: Kajiya JT, editor. Computer Graphics (SIGGRAPH ’93...
  • H. Zhang et al. Visibility culling using hierarchical occlusion maps. Computer Graphics (1997)
  • D. Cohen-Or et al. Conservative visibility and strong occlusion for viewspace partitioning of densely occluded scenes. Computer Graphics Forum (1998)
  • Durand F, Drettakis G, Thollot J, Puech C. Conservative visibility preprocessing using extended projections. In: Akeley...
  • Schaufler G, Dorsey J, Decoret X, Sillion FX. Conservative volumetric visibility with occluder fusion. In: Akeley K.,...
  • J. Rossignac et al. Multi-resolution 3-D approximations for rendering complex scenes
  • K. Zhou et al. A new mesh simplification algorithm based on vertex clustering. Chinese Journal of Automation (1999)
  • Z. Pan et al. Level of detail and multi-resolution modeling for virtual prototyping. International Journal of Image and Graphics (2001)
  • W.J. Schroeder et al. Decimation of triangle meshes. Computer Graphics, Chicago, Illinois (1992)
  • Hoppe H. Progressive meshes. In: Rushmeier H, editor. Proceedings of SIGGRAPH ’96, Addison Wesley, New Orleans, August...
  • Yang S-K, Chuang J-H. Material-preserving progressive mesh using vertex collapsing simplification. Journal of Virtual...
  • Hoppe H. View-dependent refinement of progressive meshes. In: Proceedings of SIGGRAPH ’97. Los Angeles, California,...

Work partially supported by the NSC of ROC under Grant NSC 90-2213-E-009-126.
