Computers in Industry

Volume 64, Issue 9, December 2013, Pages 1090-1098
Increasing the accuracy of Time-of-Flight cameras for machine vision applications

https://doi.org/10.1016/j.compind.2013.06.006

Abstract

Range imaging based on the Time-of-Flight (ToF) principle has advanced considerably in recent years. In particular, the lateral resolution, the ability to operate outdoors in sunlight, and the sensitivity have improved. Nevertheless, the acceptance of depth cameras for machine vision in industrial environments is still rather limited. The major shortcoming of ToF depth cameras compared to laser range scanners is their measuring accuracy, which is insufficient for several applications. In this paper, we first introduce several state-of-the-art depth cameras briefly and demonstrate their capabilities. Afterwards, we explore possibilities to increase the radial resolution and the accuracy of ToF depth cameras based on the Photonic Mixer Device (PMD). In general, higher modulation frequencies promise higher depth resolution but, on the other hand, yield higher noise levels. Moreover, the accuracy is limited by systematic errors and the measurements are affected by random noise; we show how to minimize and compensate for them in industrial environments.

Introduction

For a few years, cameras utilizing continuous-wave amplitude modulation to gain depth information through the Time-of-Flight (ToF) principle have been commercially available, and many research groups work on their development and application. Significant progress has been made in increasing the lateral resolution (number of pixels) and the sensitivity of each pixel. Still, the depth accuracy and precision are criticized by many practitioners in the area of machine vision, especially in inspection and quality control applications. Several new competitors such as SoftKinetic, Microsoft, Panasonic and Fotonic joined the ranks of the established companies PMD Technologies (PMDTec), Canesta and MESA Imaging. The new cameras work on novel operating principles or modify common ones, but details are not publicly known. Therefore, we first investigate the state of the art in this paper by comparing several available depth cameras. Afterwards, we evaluate different approaches to bring the radial resolution and accuracy of PMD-based ToF cameras into the sub-centimeter region. On the hardware side, we apply higher modulation frequencies of up to 60 MHz instead of, e.g., 20 MHz and study the effects of the geometry of the lighting devices. Moreover, we give bounds on the systematic measurement errors and show how to reduce the random noise. The approaches are evaluated using quantitative experimental setups, and the influence of the system parameters as well as the measurement conditions is quantified.

Section snippets

Related work

Depth cameras have the advantage of generating depth maps without the need for scanning, as is required by laser rangefinders or 3D laser scanners. Additionally, these novel cameras are able to provide depth maps at high frame rates, which is not true for standard structured light approaches. The drawbacks of depth cameras are their low lateral resolution and their precision, which lies in the range of centimeters. In the following, we will concentrate on research conducted with ToF cameras. Commercially

Several available depth cameras

The comparison involves different commercially available depth cameras, which are shown in Fig. 1, as well as several versions of the ZESS MultiCam. The Microsoft Kinect and the SoftKinetic DepthSense, as recent consumer depth cameras, compete with two established Time-of-Flight cameras based on the Photonic Mixer Device (PMD) of PMD Technologies.

ZESS MultiCam

Recently, we developed a new version of the MultiCam (cf. [15]) equipped with Gigabit Ethernet, a 3 megapixel color CMOS chip and the 19k PMD chip of PMD Technologies. Both chips share the same lens utilizing a Bauernfeind prism with an integrated beam splitter. The heart of the camera is a Xilinx FPGA chip and the camera features a C-mount lens adapter. See Fig. 2 for pictures and specifications of the MultiCam. The modular lighting devices consist of chip LEDs with a maximum continuous

Radial resolution evaluation

All cameras were mounted on a translation unit, which is able to position the cameras at distances between 50 cm and 5 m from a wall with a positioning accuracy better than 1 mm. The cameras were leveled and looked orthogonally at the wall. 100 images were taken per position with a step size of 1 cm, which results in 45,000 images per camera. The same lens, the same modulation frequency of 20 MHz as well as the same exposure times (5 ms for distances lower than 2.5 m and 10 ms for higher
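From repeated frames at a known distance, accuracy (systematic bias) and precision (repeatability) can be separated. The following is a minimal sketch of that evaluation; the bias and noise figures used to generate the stand-in data are hypothetical, not values from the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the 100 depth frames captured per wall position:
# one central-pixel depth reading per frame (in metres). A 4 mm bias and
# 8 mm noise level are assumed purely for illustration.
true_distance = 1.50                       # ground truth from the translation unit
frames = true_distance + 0.004 + rng.normal(0.0, 0.008, size=100)

accuracy = frames.mean() - true_distance   # systematic offset (bias)
precision = frames.std(ddof=1)             # repeatability (1-sigma noise)

print(f"accuracy (bias): {accuracy * 1000:.1f} mm")
print(f"precision (std): {precision * 1000:.1f} mm")
```

Averaging the 100 frames per position suppresses the random noise by roughly a factor of 10, so the residual per-position error mainly reflects the systematic component.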

Increasing the modulation frequency

In theory, increasing the modulation frequency enhances the precision of the depth measurements, since the measured phase then corresponds to a smaller distance. However, producing modulated light with high-power LEDs at high frequencies is difficult and leads to deteriorated signals. Finding the optimal frequency for a given task is far from trivial and is also limited by the required frame rates and the available amount of active lighting. In the area of machine vision we often have short distances but
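The trade-off can be made concrete with the two standard relations for continuous-wave ToF: the unambiguous range is d_max = c/(2f), and since depth = c·φ/(4πf), a fixed phase-estimation noise translates into depth noise that shrinks with 1/f. A short sketch (the phase noise of 0.01 rad is an assumed placeholder, not a measured value):

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def unambiguous_range(f_mod):
    """Maximum distance before the measured phase wraps: d_max = c / (2 f)."""
    return C / (2.0 * f_mod)

def depth_std(f_mod, sigma_phase):
    """Depth noise for a given phase-estimation noise (radians).
    Since depth = c * phi / (4 * pi * f), the depth noise scales with 1/f."""
    return C * sigma_phase / (4.0 * math.pi * f_mod)

for f in (20e6, 60e6):
    print(f"{f / 1e6:.0f} MHz: unambiguous range {unambiguous_range(f):.2f} m, "
          f"depth noise {depth_std(f, 0.01) * 1000:.1f} mm")
```

At 20 MHz this gives about 7.49 m of unambiguous range, at 60 MHz only about 2.50 m; conversely, the depth noise per radian of phase noise is three times smaller at 60 MHz, which is why higher frequencies pay off at the short working distances typical of machine vision.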

Effects of the lighting geometry

In this section, the influence of the position of the lighting devices is investigated, aiming to give only a rough estimate rather than a complete discussion. In Fig. 7(a) we use the same experimental setup as in the previous section, but instead of changing the modulation frequency, the lighting position as well as the exposure times are altered to account for a changed amount of active light. The distance between the two lighting devices used is changed and it was expected that the position

Nonlinear phase estimation errors

In this section, one major cause of systematic measurement errors of PMD chips is examined. The PMD calculations, i.e. the four-phase algorithm, assume sinusoidal modulation signals. In fact, the lighting devices receive a nearly rectangular modulation signal and emit light with a signal shape that is close to rectangular for small frequencies and similar to triangular for higher frequencies. This deformation of the rectangular modulation signal is caused by rise and
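The effect of non-sinusoidal light on the four-phase algorithm can be sketched numerically: sampling the correlation function at 0°, 90°, 180° and 270° recovers the phase exactly for a pure cosine, but an added odd harmonic (modelled here as a single third harmonic with assumed amplitude 0.1) biases the recovered depth, producing the well-known periodic "wiggling" error. This is an illustrative model, not the paper's exact signal shapes:

```python
import numpy as np

C = 299_792_458.0   # speed of light (m/s)
F_MOD = 20e6        # modulation frequency (Hz)

def correlation_samples(true_depth, third_harmonic=0.0):
    """Correlation function sampled at phase offsets 0, 90, 180, 270 degrees.
    An ideal sinusoidal signal yields a pure cosine; non-sinusoidal light
    adds odd harmonics, modelled here by one third-harmonic term."""
    phi = 4.0 * np.pi * F_MOD * true_depth / C
    taus = phi - np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
    return np.cos(taus) + third_harmonic * np.cos(3.0 * taus)

def four_phase_depth(a0, a1, a2, a3):
    """Four-phase algorithm: recover the phase offset, convert to depth."""
    phi = np.arctan2(a1 - a3, a0 - a2) % (2.0 * np.pi)
    return C * phi / (4.0 * np.pi * F_MOD)

ideal = four_phase_depth(*correlation_samples(1.2))
distorted = four_phase_depth(*correlation_samples(1.2, third_harmonic=0.1))
print(f"ideal: {ideal:.4f} m, with 3rd harmonic: {distorted:.4f} m")
```

With a pure sinusoid the 1.2 m depth is recovered exactly; with the harmonic present, the error varies with the true phase, which is why this systematic component must be calibrated out.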

Intensity related measurement errors

Another systematic error of PMD-based depth cameras is independent of the phase offset but exhibits a dependency on the observed intensity. This kind of error is most easily observed when measuring a checkerboard and has been described in numerous publications, e.g. [6], [8]. In order to characterize this type of error, a dataset consisting of 100 images for integration times between 0.1 ms and 5 ms with a step size of 0.1 ms was acquired under different indoor illumination settings. In Fig. 10(a)
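Once the intensity-dependent offset has been characterized on a flat target, a simple correction is to subtract an interpolated offset per pixel. The sketch below assumes a purely hypothetical calibration table (the intensity levels and offsets are illustrative, not the paper's measured values):

```python
import numpy as np

# Hypothetical calibration table: systematic depth offset (m) measured on a
# flat target at several observed intensity levels (arbitrary units), as
# could be derived from a checkerboard dataset.
cal_intensity = np.array([200.0, 500.0, 1000.0, 2000.0, 4000.0])
cal_offset = np.array([0.025, 0.015, 0.008, 0.003, 0.001])

def correct_depth(depth, intensity):
    """Subtract the interpolated intensity-dependent offset from a raw reading."""
    return depth - np.interp(intensity, cal_intensity, cal_offset)

# A dark and a bright pixel at the same true distance of 1.50 m read
# differently; the correction brings both back to the true value.
print(correct_depth(1.525, 200.0))   # dark pixel
print(correct_depth(1.501, 4000.0))  # bright pixel
```

Because `np.interp` also accepts arrays, the same call corrects a whole depth map against its co-registered intensity image in one step.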

Summary and conclusion

In order to broaden the acceptance of depth cameras in the area of machine vision, we compared the accuracy and precision of the depth measurements of several commercial and experimental depth cameras. Moreover, we propose a monocular and therefore fully registered 2D/3D camera to benefit machine vision applications. Subsequently, approaches to increase the precision of PMD-based ToF cameras and the parameters affecting their accuracy are investigated.

Higher modulation frequencies, which in turn raise

Acknowledgments

This work was funded by the German Research Foundation (DFG) as part of the research training group GRK 1564 “Imaging New Modalities”.


References (17)

  • M. Lindner et al., Time-of-Flight sensor calibration for accurate range sensing, Computer Vision and Image Understanding (2010)
  • T. Möller, H. Kraft, J. Frey, M. Albrecht, R. Lange, Robust 3D measurement with PMD sensors, Range Imaging Day, Zürich...
  • A. Jongenelen et al., Analysis of errors in ToF range imaging with dual-frequency modulation, IEEE Transactions on Instrumentation and Measurement (2011)
  • A.A. Dorrington et al., Achieving sub-millimetre precision with a solid-state full-field heterodyning range imaging camera, Measurement Science and Technology (2007)
  • O. Lottner et al., Systematic non-linearity for multiple distributed illumination units for Time-of-Flight (PMD) cameras
  • O. Lottner et al., Time-of-Flight cameras with multiple distributed illumination units
  • T. Kahlmann et al., Calibration for increased accuracy of the range imaging camera SwissRanger™, Image Engineering and Vision Metrology (IEVM) (2006)
  • J. Radmer et al., Incident light related distance error study and calibration of the PMD-range imaging camera, IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (2008)


Benjamin Langmann received the Diploma degree in computer science (2009) and in mathematics (2010) from the Technical University of Braunschweig, Germany. Since 2009, he has been working as a research assistant at the Center for Sensor Systems (ZESS) at the University of Siegen as a member of the research training group GRK 1564 "Imaging New Modalities" funded by the German Research Foundation (DFG). His research includes 2D/3D imaging, starting with image acquisition and super-resolution methods. Additionally, he works in the field of computer vision, where he has worked on background subtraction, segmentation and tracking utilizing multiple 2D/3D cameras.

Klaus Hartmann received the Diploma degree in electrical engineering and the Dr. Eng. degree from the University of Siegen, Siegen, Germany, in 1983 and 1989, respectively. After receiving the Diploma degree, he was with an industrial company, where he was engaged in the development of distributed real-time measurement systems until 1985. Since 1985, he has been a researcher with the Institute of Signal Processing and Communication Theory, Department of Electrical Engineering and Computer Science. In 1989, he became a member of the Center for Sensor Systems (ZESS), which is a central scientific research establishment at the University of Siegen, and also assumed the general management of ZESS. He founded a research group on Embedded System Design and Image Processing and also cofounded several spin-offs. His current research interests comprise multisensory systems, real-time processing for data fusion, sensor networking and image processing.

Otmar Loffeld received the Diploma degree in electrical engineering from the Technical University of Aachen, Aachen, Germany, in 1982 and the Eng. Dr. degree and the Habilitation degree in the field of digital signal processing and estimation theory from the University of Siegen, Siegen, Germany, in 1986 and 1989, respectively. In 1991, he was appointed Professor of digital signal processing and estimation theory with the University of Siegen. Since then, he has been giving lectures on general communication theory, digital signal processing, stochastic models and estimation theory, and synthetic aperture radar. He is the author of two textbooks on estimation theory. In 1995, he became a member of the Center for Sensor Systems (ZESS), which is a central scientific research establishment at the University of Siegen. Since 2005, he has been the Chairman of ZESS. In 1999, he became a Principal Investigator (PI) on baseline estimation for the X-band part of the Shuttle Radar Topography Mission (SRTM), where ZESS contributed to the German Aerospace Center (DLR)'s baseline calibration algorithms. He is a PI for interferometric techniques in the German TerraSAR-X mission, and, together with Prof. Ender from FGAN, he is one of the PIs for a bistatic spaceborne-airborne experiment, where TerraSAR-X serves as the bistatic illuminator, whereas the Fraunhofer Institute for High Frequency Physics and Radar Techniques (FHR)'s PAMIR system mounted on a Transall airplane is used as a bistatic receiver. In 2002, he founded the International Postgraduate Programme (IPP) Multi Sensorics. In 2008, based on the aforementioned program, he established the NRW Research School on Multi Modal Sensor Systems for Environmental Exploration and Safety (MOSES) at the University of Siegen as an upgrade of excellence. He is the Speaker and Coordinator of both doctoral degree programs, which are hosted by ZESS.
Furthermore, he is the university's Scientific Coordinator for Multidimensional and Imaging Systems. His current research interests comprise multisensor data fusion, Kalman filtering techniques for data fusion, optimal filtering and process identification, SAR processing and simulation, SAR interferometry, phase unwrapping, and baseline estimation. His recent field of interest is bistatic SAR processing. Dr. Loffeld is a member of the Information Technology Society (ITG) of the German Association for Electrical, Electronic and Information Technologies (VDE) and a Senior Member of the IEEE Geoscience and Remote Sensing Society. He was the recipient of the Scientific Research Award of North Rhine-Westphalia (Bennigsen-Foerder Preis) for his work on applying Kalman filters to phase estimation problems such as Doppler centroid estimation in SAR, phase, and frequency demodulation.
