Optical signal processing with illumination-encoded filters

https://doi.org/10.1016/j.cviu.2010.10.012

Abstract

Recently, computer vision researchers have shown that orthogonal functions and computational techniques from the signal processing framework can be mapped directly into the scene using projector–camera systems. These scene-space signal processing algorithms are achieved with illumination-encoded functions as primitives and computations derived from surface reflection models. Some examples of this new optical approach include convolution filtering and aliasing-canceling filter banks.

In this paper we present computational techniques for realizing fundamental elements of the signal processing framework in the 3D scene domain. The motivation for optical computation directly in the scene is to avoid information loss when the rich 3D scene is reduced to an image. The computations are at subpixel resolution because they are performed within each camera sensor. Scene-space filtering applies 2D operators to locally planar 3D surfaces based on the optical coupling of the scene surface topology with projector–camera devices. Signal processing issues such as sampling geometry, dynamic range, mathematical operators, and resolution are addressed. We report a novel subpixel point correspondence technique for accurate camera sensor footprint localization in general scenes. It is a parallelizable optical clipping algorithm related to the polygon clipping and box filter anti-aliasing algorithms used in computer graphics. It replaces the regular coordinates of an image with a local surface parametrization in projector coordinates. The result is subpixel resolution filter responses. We provide experiments and results to evaluate the performance of our scene-space filtering techniques with both planar and non-planar objects.

Research highlights

  • Finite impulse response filters can be embedded in illumination.
  • In projector–camera systems each sensor pixel footprint can be computed.
  • Subpixel filtering of non-planar surface textures using dense light fields.
  • Optically-based geometry clipping algorithm for projector–camera systems.
  • Structured light patterns can be used for geometric and spectral computation.

Introduction

A common projector–camera technique is to encode the pixel positions of the projector with binary structured light to resolve point correspondences with the camera pixels that recover the code [1]. Techniques using high-frequency primitives have been used to probe optical properties such as defocus blur to compute depth [2], [3]. A high-frequency pattern is projected into the scene and the reflected signal is analyzed for low-pass filtering effects (blur). Other schemes use binary patterns to discern the global and local illumination components [4]. Many further variations of these techniques highlight the effectiveness of projector–camera systems in computer vision problems.
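As a concrete illustration of the binary coding scheme surveyed in [1], the following Python sketch decodes a captured Gray-code pattern sequence into per-camera-pixel projector column indices. It is a minimal sketch, not the implementation evaluated in this paper; the pattern/inverse capture protocol and array shapes are assumptions.

```python
import numpy as np

def decode_gray_code(captured, captured_inv, n_bits):
    """Decode binary Gray-code structured light into projector columns.

    captured, captured_inv: (n_bits, H, W) camera images of each pattern
    and its photographic inverse (the inverse makes the per-pixel bit
    threshold robust to albedo and shading variation).
    Returns an (H, W) integer map of projector column indices.
    """
    # Bit b is 1 where the pattern image outshines its inverse.
    bits = (captured > captured_inv).astype(np.uint32)

    # Gray-to-binary conversion: b_i = g_0 ^ g_1 ^ ... ^ g_i (MSB first).
    binary = np.zeros_like(bits)
    binary[0] = bits[0]
    for i in range(1, n_bits):
        binary[i] = binary[i - 1] ^ bits[i]

    # Pack the bit planes (MSB first) into column indices.
    weights = 1 << np.arange(n_bits - 1, -1, -1)
    return np.tensordot(weights, binary, axes=1)
```

A second sweep of row-oriented patterns yields the projector row, completing a (column, row) correspondence at every camera pixel.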

Recently, researchers have investigated light patterns based on orthogonal bases from the signal processing domain. The use of analytical functions as structured light primitives allows for new approaches to computer vision problems. Computations are performed directly in the scene by modeling signal processing formulations in the optical domain with digital projectors and cameras. The work of Damera-Venkata and Chang [5], Jean [6], [7], and Ghosh et al. [8] clearly demonstrates that signal processing formulations can be implemented in scene-space by optically encoding analytical functions and leveraging the native reflection models in the scene. This is the core computational model and is easily extended with processor-based image processing. Mapping signal processing tools into scene-space also requires meeting the signal representation requirements and the analytical constraints of sampling theory in the optical domain.
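This core computational model can be made concrete with a minimal numerical sketch, assuming idealized Lambertian reflection: the projector encodes filter coefficients as intensities, the surface scales each coefficient by its local albedo, and the integrating camera sensor reads out the resulting inner product in a single exposure. The signed-kernel split below is a standard workaround for the non-negativity of light, stated here as an assumption rather than the exact scheme of [5], [6], [7], [8].

```python
import numpy as np

def optical_filter_response(albedo_patch, kernel):
    """Simulate one scene-space filter tap under a Lambertian model.

    The projector displays `kernel` over the surface patch seen by one
    camera pixel; the surface multiplies each coefficient by its albedo
    and the sensor integrates the reflected light, so the camera reading
    is the inner product <albedo, kernel>.
    """
    assert albedo_patch.shape == kernel.shape
    return float(np.sum(albedo_patch * kernel))

def signed_response(albedo_patch, kernel):
    """Light is non-negative, so a signed kernel (e.g., an edge filter)
    is projected as its positive and negative parts in two exposures,
    and the two camera readings are subtracted afterwards."""
    pos = optical_filter_response(albedo_patch, np.maximum(kernel, 0.0))
    neg = optical_filter_response(albedo_patch, np.maximum(-kernel, 0.0))
    return pos - neg
```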

The motivation for optical computation directly in the scene is to avoid information loss when the rich 3D scene is reduced to an image. Passive computer vision techniques suffer from the inherent ambiguities and overall challenge of inverse mapping from an image to real-world phenomena. For example, stereo vision systems must solve the point correspondence problem between an image pair before surface reconstruction.

Active vision techniques have been successful in removing some of these ambiguities. One solution is to replace a camera with a digital projector and use structured light techniques [1] to introduce a spatial code into the scene to disambiguate the pixel correspondences. Depth discontinuities are another challenge in stereo images, and researchers have utilized a simpler form of active light, multiple flash images [9], to localize these features in an image. Clearly, introducing a known signal into an unknown scene, as in radar, is a valuable strategy for solving computer vision problems (e.g., photometric stereo [10]), and the extra images required can be produced at very high rates with current consumer-grade digital projectors [11], [12].

In this paper we present a technique for producing subpixel filter responses of scenes with locally planar topology. We extend our early results in [7] with an orthographic projection calibration technique, designed to produce higher-accuracy light fields using only consumer-grade digital projector technology, and with previously unpublished results. The calibration technique is used to produce an orthographic projection system and, specifically, to bring the regular sampling geometry of signal processing to scene-space techniques. We report a novel subpixel projector–camera correspondence technique using an optical analog of the polygon clipping and anti-aliasing algorithms from computer graphics. The goal is to replace the regular coordinates of an image with a local surface parametrization in projector coordinates. We compare processor-based image processing to our optical method and perform experiments using objects of varying topology.

In Section 2 we review topics in the projector–camera literature for probing the environment, scene-space techniques in particular. A technique for more general scenes is presented in Section 5. Experiments with various objects are documented in Section 6, and a discussion of the results is given in Section 7.

Section snippets

Related work

There are many structured light techniques where a known illumination pattern is projected into the scene and the spatial or spectral transformation of the reflected signal is used to determine the configuration of objects and surfaces. We focus on coded light triangulation and depth from defocus, techniques built on a pattern primitive and related to our work with basis functions.

Batlle et al. [1] provides an overview of optical triangulation coding and the trade-offs depending on the computer

Overview

In this paper we discuss a computational technique for realizing optically-based signal processing in the scene. We begin by discussing dynamic range compensation, in Section 4, to enable the implementation of Eq. (3) with digital projectors and cameras.

In Section 5 we present a method that takes a local approach: it finds the neighborhood of projector pixels, the footprint, of each camera sensor pixel from structured light point correspondences. This local approximation of regular
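A rough sketch of this footprint lookup is given below, assuming dense subpixel correspondences have already been interpolated to the camera pixel corners; the array names and shapes are illustrative, not from the paper.

```python
import numpy as np

def pixel_footprint(corr_u, corr_v, x, y):
    """Approximate a camera pixel's footprint in projector coordinates.

    corr_u, corr_v: (H+1, W+1) subpixel projector coordinates sampled at
    camera pixel corners (interpolated from structured light decoding).
    Returns the footprint of camera pixel (x, y) as a 4x2 quadrilateral,
    i.e., a local surface parametrization in projector coordinates.
    """
    corners = [(y, x), (y, x + 1), (y + 1, x + 1), (y + 1, x)]
    return np.array([[corr_u[r, c], corr_v[r, c]] for r, c in corners])
```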

Dynamic range

In an optical computation system, projectors and cameras establish the interface between the algorithm executing on the processor and the analog mathematical operators in the scene. Naturally, an optical encoding of functions and parameters is an analog signal and fraught with signal handling problems.

The change in representation from digital data to an analog light field carrier brings new capabilities along with issues of quantization error, sampling, device noise, etc. The camera and digital
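One common compensation strategy, sketched below under the assumption of a monotonic projector-to-camera intensity transfer, is to sweep the projector's digital levels, record the mean camera response for each, and invert the measured curve so that encoded function values survive the analog light path. This is an illustrative approach, not necessarily the exact procedure of Section 4.

```python
import numpy as np

def fit_inverse_response(projected_levels, observed_levels):
    """Build a lookup that linearizes the projector-to-camera transfer.

    projected_levels: digital input levels swept on the projector.
    observed_levels: mean camera reading per level, averaged over several
    frames to suppress device noise (assumed monotonically increasing).
    Returns a function mapping a desired camera reading to the projector
    level that produces it.
    """
    def inverse(target):
        return np.interp(target, observed_levels, projected_levels)
    return inverse

# Example usage: sweep levels 0..255, capture, then precompensate the
# kernel image before projection.
#   inv = fit_inverse_response(levels, mean_readings)
#   drive_image = inv(desired_linear_image)
```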

Optical clipping for subpixel sensor footprints

The results from optical filtering of planar scenes show the effectiveness of optical domain modeling of the digital signal processing framework [7]. However, the experiments clearly exploit some advantages in the setup. For example, the optical sampling topology is regular, defined by the orthographic light field from the projector. The object surfaces, posters, are planar and locally reflect filter images with low spatial distortion toward the sensors. The posters, relative to both the
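Because the optical clipping algorithm is described as an analog of polygon clipping and box filter anti-aliasing, the classical geometric counterpart is a useful reference. The sketch below is a standard Sutherland–Hodgman clip of a footprint quadrilateral against one axis-aligned projector pixel cell; the clipped area gives that cell's box-filter coverage weight within a sensor footprint. This is the graphics analogue only, not the paper's optical procedure.

```python
def clip_polygon(poly, xmin, ymin, xmax, ymax):
    """Sutherland-Hodgman clip of a convex polygon (list of (x, y)
    tuples) against an axis-aligned projector pixel cell."""
    def clip_edge(pts, inside, intersect):
        out = []
        for i, p in enumerate(pts):
            q = pts[(i + 1) % len(pts)]
            if inside(p):
                out.append(p)
                if not inside(q):
                    out.append(intersect(p, q))
            elif inside(q):
                out.append(intersect(p, q))
        return out

    def ix_x(x0):  # crossing point with the vertical line x = x0
        return lambda p, q: (x0, p[1] + (q[1] - p[1]) * (x0 - p[0]) / (q[0] - p[0]))

    def ix_y(y0):  # crossing point with the horizontal line y = y0
        return lambda p, q: (p[0] + (q[0] - p[0]) * (y0 - p[1]) / (q[1] - p[1]), y0)

    pts = list(poly)
    pts = clip_edge(pts, lambda p: p[0] >= xmin, ix_x(xmin))
    pts = clip_edge(pts, lambda p: p[0] <= xmax, ix_x(xmax))
    pts = clip_edge(pts, lambda p: p[1] >= ymin, ix_y(ymin))
    pts = clip_edge(pts, lambda p: p[1] <= ymax, ix_y(ymax))
    return pts

def polygon_area(pts):
    """Shoelace formula: area of the clipped footprint piece, which
    serves as the box-filter weight for that projector pixel."""
    return 0.5 * abs(sum(p[0] * q[1] - q[0] * p[1]
                         for p, q in zip(pts, pts[1:] + pts[:1])))
```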

Experiments and results

In the previous section we developed a computational framework for computing the footprint of sensors in projector–camera systems. Now we report on experiments using a projector and camera with different objects. The working volume is the overlapping region where the projector and camera are in focus.

The tests were performed using an IMPERX IPX-VGA-210 640 × 480 monochrome camera with a 6 mm lens and a 1024 × 768 resolution ViewSonic PJ1158 3LCD digital projector. The

Discussion

The number of projections in an experiment, including structured light (40 images), box filters (16 images), and edge filters (64 images), is 120 images. This set of images is easily handled by the 3 kHz projection techniques reported in the literature [11], [12]; however, the projection frame rate depends on producing sufficient optical output for image capture at a compatible exposure time.

The number of images captured in an experiment, including structured light (80 images), box filter

Conclusion

We have presented an optical computation technique for scene analysis following the signal processing framework. It is a parallelizable algorithm which outputs subpixel filter responses at each camera sensor to augment triangulated 3D coordinates. The efficacy of our methods was tested with objects of varying topology but locally planar surfaces. The results clearly show feature detection comparable to computer processor-based computations but at subpixel resolution.

The conditions where optical

Acknowledgment

We would like to thank the anonymous reviewers for providing valuable comments and questions that helped us improve this document.

References (36)

  • J. Batlle et al., Recent progress in coded structured light as a technique to solve the correspondence problem: a survey, Pattern Recog. (1998)
  • J. Salvi et al., Pattern codification strategies in structured light systems, Pattern Recog. (2004)
  • S.K. Nayar et al., Real-time focus range sensor, IEEE Trans. Pattern Anal. Mach. Intell. (1996)
  • L. Zhang et al., Projection defocus analysis for scene capture and image display
  • S.K. Nayar et al., Fast separation of direct and global components of a scene using high frequency illumination, ACM Trans. Graph. (2006)
  • N. Damera-Venkata, N.L. Chang, Realizing super-resolution with superimposed projection, in: CVPR, ...
  • Y. Jean, Scene-space feature detectors, in: CVPR: Beyond Multiview Geometry, 2007, pp. ...
  • Y. Jean, Orthographic projection for optical signal processing, in: ECCV 2008, Workshop on Omnidirectional Vision, ...
  • A. Ghosh, S. Achutha, W. Heidrich, M. O’Toole, BRDF acquisition with basis illumination, in: IEEE International ...
  • R. Feris et al., Multiflash stereopsis: depth-edge-preserving stereo with small baseline illumination, IEEE Trans. Pattern Anal. Mach. Intell. (2008)
  • R.J. Woodham, Photometric method for determining surface orientation from multiple images, Opt. Eng. (1980)
  • D. Cotting et al., Embedding imperceptible patterns into projected images for simultaneous acquisition and display
  • S. Narasimhan et al., Temporal dithering of illumination for fast active vision
  • D. Scharstein, R. Szeliski, High-accuracy stereo depth maps using structured light, in: CVPR03, 2003, pp. I:...
  • L. Zhang et al., Spacetime faces: high resolution capture for modeling and animation
  • S. Zhang, P. Huang, High-resolution, real-time 3-d shape acquisition, in: IEEE CVPR Workshop, vol. 3, 2004, pp. ...
  • D. Lanman, D. Crispell, G. Taubin, Surround structured lighting for full object scanning, in: 3DIM07, 2007, pp. ...
  • M. Young, E. Beeson, J. Davis, S. Rusinkiewicz, R. Ramamoorthi, Viewpoint-coded structured light, in: CVPR07, 2007, pp. ...