
Computers & Graphics

Volume 33, Issue 2, April 2009, Pages 120-129

Technical Section
A simulator-based approach to evaluating optical trackers

https://doi.org/10.1016/j.cag.2009.02.002

Abstract

We describe a software framework to evaluate the performance of model-based optical trackers in virtual environments. The framework can be used to evaluate and compare the performance of different trackers under various conditions, to study the effects of varying intrinsic and extrinsic camera properties, and to study the effects of environmental conditions on tracker performance. The framework consists of a simulator that, given various input conditions, generates a series of images. The input conditions of the framework model important aspects, such as the interaction task, input device geometry, camera properties and occlusion.

As a concrete case, we illustrate the usage of the proposed framework for input device tracking in a near-field desktop virtual environment. We compare the performance of an in-house tracker with an ARToolkitPlus-based tracker under a fixed set of conditions. We also show how the framework can be used to assess the quality of various camera placements given a pre-recorded interaction task. Finally, we use the framework to determine the minimum required camera resolution for a desktop, Workbench and CAVE environment, and study the influence of random noise on tracker accuracy.

The framework is shown to provide an efficient and simple method to study various conditions affecting optical tracker performance. Furthermore, it can be used as a valuable development tool to aid in the construction of optical trackers.

Introduction

Tracking in virtual and augmented reality is the process of identifying the pose of an input device in the virtual space. Model-based optical tracking achieves this by using input devices augmented with markers whose 3D features are known in advance; the set of known 3D features is called the model. Pose estimation is then performed by detecting these features in one or more 2D camera images. Optical tracking is an important technology, as it provides a cheap tracking solution that does not require any cables in the virtual space. Furthermore, given sufficient camera resolution, the accuracy of optical tracking is very good. However, a common and inherent problem in optical tracking is that line of sight is required: if the input device is partially occluded, a pose often cannot be found. Various implementations of optical trackers exist (e.g. [4], [14], [10], [13]).
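To illustrate the underlying computation: recovering a pose from detected model features amounts to solving the Perspective-n-Point (PnP) problem. The following minimal sketch is our own illustration, not the method of any of the cited trackers; it uses OpenCV's generic PnP solver on a synthetic planar four-marker model, with 2D detections synthesized from a known pose rather than obtained from real feature detection.

```python
import numpy as np
import cv2

# The "model": known 3D marker positions on the input device, here a
# planar 60 mm square marker (millimetres, device coordinates).
model = np.array([[0, 0, 0], [60, 0, 0], [60, 60, 0], [0, 60, 0]], dtype=np.float64)

# Intrinsics of a hypothetical 640x480 camera: focal length in pixels
# and principal point at the image centre.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

# A ground-truth pose, used here only to synthesize the 2D detections;
# a real tracker would obtain image_points from feature detection.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([20.0, -10.0, 500.0])
image_points, _ = cv2.projectPoints(model, rvec_true, tvec_true, K, None)

# Pose estimation: solve the PnP problem for the rotation and
# translation that map model coordinates to camera coordinates.
ok, rvec, tvec = cv2.solvePnP(model, image_points, K, None)
print(ok, tvec.ravel())  # recovers tvec_true up to numerical error
```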

An important issue in optical tracking is how to measure performance objectively. The user's performance on an interactive task often depends on the performance of the optical tracking system: tracker accuracy places a direct upper bound on the accuracy with which the task can be performed, and in cases where the tracker cannot detect the input device, for example due to occlusion, interaction performance is reduced significantly. Many aspects must therefore be taken into account when evaluating the performance of an optical tracker. These include the type of interaction task that is performed; the intrinsic and extrinsic camera parameters, such as focal length, resolution, number of cameras and camera placement; environmental conditions in the form of lighting and occlusion; and end-to-end latency. Furthermore, performance can be expressed in a number of different ways, such as positional accuracy, orientation accuracy, hit:miss ratio, percentage of outliers and critical accuracy, among others. Most optical tracker descriptions do not take all of these aspects into account when describing tracker performance.
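Given ground-truth poses, several of these metrics are straightforward to compute, and ground truth is precisely what a simulator can provide. The sketch below shows positional error, orientation error and hit:miss ratio, under the assumption that poses are represented as position vectors plus unit quaternions; the helper names are ours, not the paper's.

```python
import numpy as np

def position_error(p_est, p_true):
    """Euclidean distance between estimated and ground-truth positions."""
    return float(np.linalg.norm(np.asarray(p_est) - np.asarray(p_true)))

def orientation_error_deg(q_est, q_true):
    """Angle, in degrees, of the residual rotation between two unit
    quaternions; |dot| accounts for the quaternion double cover."""
    dot = abs(float(np.dot(q_est, q_true)))
    return float(np.degrees(2.0 * np.arccos(np.clip(dot, 0.0, 1.0))))

def hit_miss_ratio(reported_poses):
    """Fraction of frames in which the tracker reported a pose at all;
    a miss typically means the device was occluded or not detected."""
    hits = sum(1 for p in reported_poses if p is not None)
    return hits / len(reported_poses)
```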

In this paper, we present a framework for evaluating the performance of optical trackers in a systematic way. The presented framework allows us to quantitatively:

  • Evaluate and compare the performance of different optical trackers under various conditions. This is useful for deciding which optical tracker implementation to use for a specific virtual environment, under different constraints.

  • Study camera properties for various virtual environments. In this way, we can evaluate how many cameras are required to perform a specific task, what the minimum required quality of the cameras should be in terms of resolution, distortion and focal length, and where they should be placed.

  • Study environmental conditions for various virtual environments. This allows us to study the effects of device occlusion, which is an important aspect of optical tracking. Different lighting conditions can also be studied, such as infrared, office light or daylight. (A sketch of how such an evaluation might be scripted follows this list.)
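To make these use cases concrete, the following sketch shows how an evaluation with such a framework might be scripted. All framework names here (simulate_images, estimate_pose, pose.position and so on) are hypothetical placeholders, since the snippet above does not specify an API; position_error is the helper defined in the earlier sketch.

```python
# Hypothetical evaluation loop. Every framework name below is a
# placeholder for illustration, not the paper's actual API.
import numpy as np

def evaluate(tracker, cameras, recorded_task, model):
    """Run one tracker over a pre-recorded interaction task under a
    given camera configuration; return mean error and miss rate."""
    errors, misses = [], 0
    for true_pose in recorded_task:               # ground-truth device poses
        images = simulate_images(cameras, model, true_pose)   # placeholder
        est_pose = tracker.estimate_pose(images)  # None if device not found
        if est_pose is None:
            misses += 1
        else:
            errors.append(position_error(est_pose.position, true_pose.position))
    return np.mean(errors), misses / len(recorded_task)

# Comparing camera placements then amounts to re-running the same
# recorded task once per candidate configuration:
# for cameras in candidate_placements:
#     mean_err, miss_rate = evaluate(my_tracker, cameras, task, model)
```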

This paper is organized as follows. In Section 2 we review related work. In Section 3 we describe the presented framework in detail. In Section 4 we give four examples of how the framework can be used in a practical setting to study various aspects of optical tracking and the environment. Finally, in Section 5 we discuss additional uses of and considerations for the framework.

Section snippets

Related work

van Liere and van Rhijn [12] examined the effects of erroneous intrinsic camera parameters on the accuracy of a model-based optical tracker. They recorded the camera images of a real interactive task and subsequently ran three different optical tracking algorithms on these images, providing them with varying intrinsic camera parameters to simulate errors in the camera calibration process. They showed how these parameters affect the accuracy, robustness and latency of the tested optical tracking algorithms.

Methods

In this section we provide a detailed description of our proposed framework for the performance evaluation of model-based optical trackers. The various components of the framework are discussed, along with some examples of typical usage scenarios. Furthermore, a brief description is given of the implementations of two optical trackers. These optical trackers will be used in Section 4 as examples to illustrate how the presented framework can be used in practice.


Results

In this section we show four sample uses of our framework to evaluate different aspects of optical tracker performance. In the first experiment, the accuracy of two different optical trackers is compared in a fixed environment. In the second experiment, we determine high-quality camera placements by varying the extrinsic camera parameters. Third, we evaluate the effect of camera resolution and distance by varying the intrinsic camera parameters as well. Finally, we study the influence of random noise on tracker accuracy.
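The resolution and noise experiments share a common simulation pattern: project the marker model through a synthetic camera, degrade the 2D measurements, and measure the error of the re-estimated pose. Below is a minimal sketch of that pattern, again with OpenCV's generic PnP solver standing in for a full tracker and with all parameter values chosen purely for illustration.

```python
import numpy as np
import cv2

def mean_position_error(width, height, f, noise_px, model, rvec, tvec, trials=100):
    """Sketch of the core step of the resolution/noise experiments:
    project the marker model through a synthetic camera, quantize the
    projections to the pixel grid, add Gaussian detection noise, and
    measure the positional error of the re-estimated pose."""
    K = np.array([[f, 0, width / 2.0],
                  [0, f, height / 2.0],
                  [0, 0, 1.0]])
    ideal, _ = cv2.projectPoints(model, rvec, tvec, K, None)
    rng = np.random.default_rng(0)
    errors = []
    for _ in range(trials):
        # Rounding models finite resolution; the Gaussian term models
        # feature-detection jitter.
        measured = np.round(ideal) + rng.normal(0.0, noise_px, ideal.shape)
        ok, _, t_est = cv2.solvePnP(model, measured, K, None)
        if ok:
            errors.append(np.linalg.norm(t_est.ravel() - tvec))
    return float(np.mean(errors))

# Example: a planar 60 mm square marker, 50 cm from a 640x480 camera.
model = np.array([[0, 0, 0], [60, 0, 0], [60, 60, 0], [0, 60, 0]], float)
err = mean_position_error(640, 480, 800.0, 0.5, model,
                          np.array([0.1, -0.2, 0.05]),
                          np.array([20.0, -10.0, 500.0]))
print(err)  # mean positional error in mm for this configuration
```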

Discussion

In Section 4 we have shown a sample evaluation of two different optical trackers under varying conditions. By using the presented framework, it was possible to acquire performance metrics for situations, such as varying camera placements, that would be difficult to obtain quickly in practice. However, care must be taken not to judge tracker performance too hastily on the basis of a single experiment.

The accuracy of GraphTracker was shown to be generally worse than that of ARToolkitPlus.

Conclusion

We described a framework to evaluate optical tracker performance under various conditions. Four examples were shown in which the framework was used to evaluate the performance of two different optical trackers. The presented framework provided an efficient and simple method to study various conditions affecting optical tracker performance, allowing us to study the effect of parameters that cannot easily be changed in real-life setups, such as camera placement and resolution.

References (14)

  • F.A. Smit et al. GraphTracker: a topology projection invariant optical tracker. Computers & Graphics (2007).
  • W.-Y. Chang et al. Pose estimation for multiple camera systems.
  • X. Chen. Camera placement considering occlusion for robust motion capture. Technical Report, CGL Stanford; ...
  • B. Horn. Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America (1987).
  • H. Kato et al. Marker tracking and HMD calibration for a video-based augmented reality conferencing system.
  • C.-P. Lu et al. Fast and globally convergent pose estimation from video images. IEEE Trans Pattern Anal Mach Intell (2000).
  • R. Micheals. A new closed-form approach to the absolute orientation problem. Masters thesis, Lehigh University; ...
