Dynamic defocus and occlusion compensation of projected imagery by model-based optimal projector selection in multi-projection environment

  • SI: Augmented Reality

Abstract

This paper presents a novel model-based method for dynamic defocus and occlusion compensation in a multi-projection environment. Conventional defocus compensation research applies appearance-based methods, which require a point spread function (PSF) calibration whenever the position or orientation of the projection target changes, and thus cannot be applied to interactive applications in which the object moves dynamically. In contrast, we propose a model-based method in which the PSF and geometric calibrations are required only once in advance; the projector's PSF is then computed online from the geometric relationship between the projector and the object, without any additional calibration. We propose to distinguish oblique blur (the loss of high-spatial-frequency components that depends on the incidence angle of the projection light) from defocus blur and to introduce it into the PSF computation. For each part of the object surface, we select the optimal projector, i.e., the one that preserves the largest amount of high-spatial-frequency components of the original image, to realize defocus-free projection. The geometric relationship can also be used to eliminate cast shadows of the projected images in the multi-projection environment. Our method is particularly useful in interactive systems, because the movement of the object (and consequently the geometric relationship between each projector and the object) is usually measured by an attached tracking sensor. This paper describes the proposed approach in detail, together with a prototype implementation. We performed two proof-of-concept experiments to show the feasibility of our approach.


References

  • Audet S, Cooperstock J (2007) Shadow removal in front projection environments using object tracking. In: Proceedings of IEEE CVPR ’07, pp 1–8

  • Bandyopadhyay D, Raskar R, Fuchs H (2001) Dynamic shader lamps: painting on movable objects. In: Proceedings of IEEE/ACM ISAR ’01, pp 207–216

  • Bimber O, Emmerling A (2006) Multifocal projection: a multiprojector technique for increasing focal depth. IEEE Trans Vis Comput Graph 12(4):658–667

  • Bimber O, Raskar R (2005) Spatial augmented reality: merging real and virtual worlds. A. K. Peters Ltd, USA

  • Bimber O, Iwai D, Wetzstein G, Grundhöfer A (2008) The visual computing of projector-camera systems. Comput Graph Forum 27(8):2219–2254

  • Brown MS, Song P, Cham TJ (2006) Image pre-conditioning for out-of-focus projector blur. In: Proceedings of IEEE CVPR ’06, vol II, pp 1956–1963

  • Cham TJ, Rehg JM, Sukthankar R, Sukthankar G (2003) Shadow elimination and occluder light suppression for multi-projector displays. In: Proceedings of IEEE CVPR ’03, pp 513–520

  • Grosse M, Bimber O (2008) Coded aperture projection. In: Proceedings of ACM EDT-IPT ’08, pp 13:1–13:4

  • Gupta S, Jaynes C (2006) The universal media book: tracking and augmenting moving surfaces with projected information. In: Proceedings of IEEE/ACM ISMAR ’06, pp 177–180

  • Hartley RI, Zisserman A (2004) Multiple view geometry in computer vision. Cambridge University Press, Cambridge

  • Jaynes C, Webb S, Steele RM (2004) Camera-based detection and removal of shadows from interactive multiprojector displays. IEEE Trans Vis Comput Graph 10(3):290–301

  • Kondo D, Kijima R (2002) Proposal of a free form projection display using the principle of duality rendering. In: Proceedings of VSMM ’02, pp 346–352

  • Kondo D, Shiwaku Y, Kijima R (2008) Free form projection display and application. In: Proceedings of IEEE/ACM PROCAMS ’08, pp 31–32

  • Levoy M, Chen B, Vaish V, Horowitz M, McDowall I, Bolas M (2004) Synthetic aperture confocal imaging. ACM Trans Graph 23(3):825–834

  • Low KL, Welch G, Lastra A, Fuchs H (2001) Life-sized projector-based dioramas. In: Proceedings of ACM VRST ’01, pp 93–101

  • Majumder A, Brown MS (2007) Practical multi-projector display design. A K Peters, USA

  • Oyamada Y, Saito H (2008) Defocus blur correcting projector-camera system. Lect Notes Comput Sci 5259:453–464

  • Park H, Lee MH, Kim SJ, Park JI (2006) Surface-independent direct-projected augmented reality. In: Proceedings of ACCV ’06, vol 2, pp 892–901

  • Raskar R, Welch G, Low KL, Bandyopadhyay D (2001) Shader lamps: animating real objects with image-based illumination. In: Proceedings of Eurographics EGWR ’01, pp 89–102

  • Sato K, Inokuchi S (1987) Range-imaging system utilizing nematic liquid crystal mask. In: Proceedings of IEEE ICCV ’87, pp 657–661

  • Sukthankar R, Cham TJ, Sukthankar G (2001) Dynamic shadow elimination for multi-projector displays. In: Proceedings of IEEE CVPR ’01, vol II, pp 151–157

  • Zhang L, Nayar S (2006) Projection defocus analysis for scene capture and image display. ACM Trans Graph 25(3):907–915

Author information

Correspondence to Daisuke Iwai.

Appendix: Geometric calibration of projector

We apply a geometric calibration method based on gray-code projection, proposed by Sato and Inokuchi (1987).

Suppose that a 3D point (X, Y, Z) in a world coordinate system is projected onto a point (x, y) on a 2D image plane. According to the pinhole camera model, the projection is described by the perspective equation with the 3 × 4 perspective projection matrix C:

$$ h [ x \, y \, 1 ] ^t = {\bf C} [ X \, Y \, Z \, 1 ]^t, $$
(14)

where

$$ {\bf C}= \left[\begin{array}{cccc} C_{11} & C_{12} & C_{13} & C_{14}\\ C_{21} & C_{22} & C_{23} & C_{24}\\ C_{31} & C_{32} & C_{33} & 1 \end{array} \right]. $$
(15)

C is determined up to a scale factor h and has eleven unknown parameters. Because each correspondence yields two equations from (14), the unknown parameters of C can be solved by a least-squares method from six or more correspondences between the 3D world and the 2D screen coordinate systems. The perspective projection matrix can be decomposed into intrinsic and extrinsic matrices (Hartley and Zisserman 2004). Once the extrinsic matrix is calculated, the 3D rigid-body transform of the pinhole device (the camera or a projector, in our case) in the world coordinate system can be computed. The basic idea of the geometric registration of each child projector to the target objects is to use a camera to determine the relationship (i.e., the perspective projection matrix or the extrinsic matrix) between the object and the projector.
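For illustration, the following sketch (our own, not from the paper; it assumes NumPy is available) estimates the eleven unknowns of C by linear least squares, fixing the scale by \(C_{34} = 1\) as in (15). Multiplying (14) out and substituting \(h = C_{31}X + C_{32}Y + C_{33}Z + 1\) yields two linear equations per correspondence.

```python
import numpy as np

def estimate_projection_matrix(world_pts, image_pts):
    """Least-squares estimate of the 3x4 perspective projection matrix C.

    world_pts: (N, 3) array of (X, Y, Z); image_pts: (N, 2) array of (x, y);
    N >= 6. The scale is fixed by C34 = 1, leaving eleven unknowns.
    """
    A, b = [], []
    for (X, Y, Z), (x, y) in zip(world_pts, image_pts):
        # x * (C31*X + C32*Y + C33*Z + 1) = C11*X + C12*Y + C13*Z + C14
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z])
        b.append(x)
        # y * (C31*X + C32*Y + C33*Z + 1) = C21*X + C22*Y + C23*Z + C24
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z])
        b.append(y)
    p, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(p, 1.0).reshape(3, 4)  # append C34 = 1
```

The same routine serves for Eqs. (16) and (17) below: the camera matrix C is estimated from the 147 cube correspondences, and each projector matrix \({\bf P}_i\) from the gray-code correspondences.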

In the actual calibration process, we position a reference cube (50 mm on a side) with spatially known feature points on the slide stage, within the intersection of the view frusta of the projector and the camera. The fiducial cube determines the world coordinate system (Fig. 14a). First, the camera captures the fiducial cube, and the fiducial points are automatically extracted in the captured image. Then, the intersection points of the grid line segments, each of which connects two fiducial points, are calculated as shown in Fig. 14b; there are 147 intersection points in total. From the 147 correspondences between the world coordinates (X, Y, Z) and the camera screen coordinates \((x_c, y_c)\), the perspective projection matrix C is calculated by a least-squares method.

$$ h [ x_c \, y_c \, 1 ]^t = {\bf C} [ X \, Y \, Z \, 1 ]^t. $$
(16)
Fig. 14: Fiducial cube. a Spatially known feature points are printed on it. b Grid line segments, each of which connects two detected fiducial points

In the geometric calibration of each child projector, horizontal and vertical gray-code patterns are projected onto the fiducial cube from the child projector to compute its perspective projection matrix \({\bf P}_i\) (i = 1, ..., 12). The projected scenes are captured by the camera (Fig. 15). The captured images are processed, and correspondences between the camera screen and projector screen coordinate values \(((x_c, y_c)\leftrightarrow(x_{pi}, y_{pi}))\) are obtained. The correspondences between the world and camera screen coordinate values \(((X, Y, Z)\leftrightarrow(x_c, y_c))\) were already obtained in the camera calibration. Therefore, correspondences between the world and projector screen coordinate values \(((X, Y, Z)\leftrightarrow(x_{pi}, y_{pi}))\) are derived by chaining these two sets of correspondences (Fig. 15). From the 147 correspondences, the perspective projection matrix \({\bf P}_i\) of each child projector is calculated.

$$ h [ x_{pi} \, y_{pi} \, 1 ]^t = {\bf P}_i [ X \, Y \, Z \, 1 ]^t. $$
(17)
Fig. 15: Gray-code projection to obtain geometric correspondences between the world and the projector screen coordinate systems via the camera screen coordinate system
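As a concrete illustration of the decoding step (our own sketch, not the authors' code; it assumes each captured pattern has already been binarized, e.g., by comparing the capture of each pattern against that of its inverse), the bit planes of one axis are converted from gray code to binary and packed into a projector coordinate per camera pixel. Running this once on the horizontal planes and once on the vertical planes yields \((x_{pi}, y_{pi})\) for every camera pixel \((x_c, y_c)\).

```python
import numpy as np

def decode_graycode(bitplanes):
    """Decode binarized gray-code captures into projector coordinates.

    bitplanes: (B, H, W) boolean array; the camera's view of the B pattern
    images for one axis, most significant bit first, already thresholded.
    Returns an (H, W) map of the projector coordinate along that axis.
    """
    g = bitplanes.astype(np.uint32)
    # Gray-to-binary conversion: b[0] = g[0], b[i] = b[i-1] XOR g[i]
    b = np.empty_like(g)
    b[0] = g[0]
    for i in range(1, len(g)):
        b[i] = b[i - 1] ^ g[i]
    # Pack the bit planes (MSB first) into one integer per camera pixel
    coord = np.zeros(g.shape[1:], dtype=np.uint32)
    for plane in b:
        coord = (coord << 1) | plane
    return coord
```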

Because only some of the child projectors are focused on the fiducial cube at a given position, we translate the cube with the slide stage so that the other child projectors come into focus. The camera then captures the cube again, and a new perspective projection matrix C′ is computed. Comparing the extrinsic matrices of C and C′, we can compute the rigid-body transform (i.e., the translation vector) of the cube. We then calibrate the remaining child projectors one by one with the same method as above, taking the translation vector of the cube into account. If there are still defocused child projectors, we move the cube again and repeat the calibration process.
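The comparison of extrinsics can be sketched as follows (again our own sketch, assuming SciPy is available and that the slide stage applies a pure translation, so the rotation parts of C and C′ agree): RQ-decompose the left 3 × 3 block of each matrix into intrinsics times rotation, recover the translation, and difference the two poses.

```python
import numpy as np
from scipy.linalg import rq

def decompose(C):
    """Split a 3x4 projection matrix into K (intrinsic) and R, t (extrinsic),
    C ~ K [R | t] (Hartley and Zisserman 2004)."""
    K, R = rq(C[:, :3])
    S = np.diag(np.sign(np.diag(K)))  # force a positive diagonal on K
    K, R = K @ S, S @ R               # S @ S = I, so the product is unchanged
    t = np.linalg.solve(K, C[:, 3])
    return K / K[2, 2], R, t

def cube_translation(C, C_prime):
    """Translation of the fiducial cube between two slide-stage positions."""
    _, R, t = decompose(C)
    _, _, t_prime = decompose(C_prime)
    d_cam = t_prime - t  # cube translation in camera coordinates (R' ~ R)
    return R.T @ d_cam   # the same vector in world (cube) coordinates
```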

We use the fiducial cube as a projection object. In addition, we place a planar surface behind the cube, on which spatially known lines are drawn. The system takes a picture of the surface with the camera and recognizes the lines automatically. Because the intrinsic parameters of the camera are derived from C, we can compute the rigid-body transform of the surface relative to the camera; consequently, the position and orientation of the surface in the world coordinate system are calculated. When the cube is translated by the slide stage, the system captures it and automatically computes the position and orientation of the cube in the world coordinate system.
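One standard way to recover such a plane pose from known points and calibrated intrinsics is a perspective-n-point solver; the sketch below uses OpenCV's solvePnP and is only illustrative (the detector function, the point layout, and the variables K and image are assumptions, not the paper's implementation).

```python
import numpy as np
import cv2

# Assumed inputs (not from the paper): K is the 3x3 camera intrinsic matrix,
# e.g., from decompose(C) above; image is the camera capture of the surface;
# detect_line_intersections is a hypothetical detector returning the (N, 2)
# image positions of the known line intersections drawn on the plane.
plane_pts = np.array([[0, 0, 0], [100, 0, 0], [100, 100, 0], [0, 100, 0]],
                     dtype=np.float64)        # known points on the plane (mm)
image_pts = detect_line_intersections(image)  # hypothetical, (N, 2) float64

ok, rvec, tvec = cv2.solvePnP(plane_pts, image_pts, K, None)
R_plane, _ = cv2.Rodrigues(rvec)  # rvec/tvec: plane pose in the camera frame
```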


Cite this article

Nagase, M., Iwai, D. & Sato, K. Dynamic defocus and occlusion compensation of projected imagery by model-based optimal projector selection in multi-projection environment. Virtual Reality 15, 119–132 (2011). https://doi.org/10.1007/s10055-010-0168-4
