Abstract
This paper presents a novel model-based approach to dynamic defocus and occlusion compensation in a multi-projection environment. Conventional defocus compensation research applies appearance-based methods, which require a point spread function (PSF) calibration whenever the position or orientation of the projection object changes, and thus cannot be applied to interactive applications in which the object moves dynamically. In contrast, we propose a model-based method in which PSF and geometric calibrations are required only once in advance; the projector's PSF is then computed online from the geometric relationship between the projector and the object, without any additional calibration. We propose to distinguish the oblique blur (the loss of high-spatial-frequency components according to the incidence angle of the projection light) from the defocus blur and to introduce it into the PSF computation. For each part of the object's surface, we select the optimal projector, i.e., the one that preserves the largest amount of high-spatial-frequency components of the original image, to realize defocus-free projection. The geometric relationship can also be used to eliminate cast shadows of the projected images in the multi-projection environment. Our method is particularly useful in interactive systems because the movement of the object (and consequently the geometric relationship between each projector and the object) is usually measured by an attached tracking sensor. This paper describes the details of the proposed approach and a prototype implementation. We performed two proof-of-concept experiments to show the feasibility of our approach.













References
Audet S, Cooperstock J (2007) Shadow removal in front projection environments using object tracking. In: Proceedings of IEEE CVPR ’07, pp 1–8
Bandyopadhyay D, Raskar R, Fuchs H (2001) Dynamic shader lamps: painting on movable objects. In: Proceedings of IEEE/ACM ISAR ’01, pp 207–216
Bimber O, Emmerling A (2006) Multifocal projection: a multiprojector technique for increasing focal depth. IEEE Trans Vis Comput Graph 12(4):658–667
Bimber O, Raskar R (2005) Spatial augmented reality: merging real and virtual worlds. A. K. Peters Ltd, USA
Bimber O, Iwai D, Wetzstein G, Grundhöfer A (2008) The visual computing of projector-camera systems. Comput Graph Forum 27(8):2219–2254
Brown MS, Song P, Cham TJ (2006) Image pre-conditioning for out-of-focus projector blur. In: Proceedings of IEEE CVPR ’06, vol II, pp 1956–1963
Cham TJ, Rehg JM, Sukthankar R, Sukthankar G (2003) Shadow elimination and occluder light suppression for multi-projector displays. In: Proceedings of IEEE CVPR ’03, pp 513–520
Grosse M, Bimber O (2008) Coded aperture projection. In: Proceedings of ACM EDT-IPT ’08, pp 13:1–13:4
Gupta S, Jaynes C (2006) The universal media book: tracking and augmenting moving surface with projected information. In: Proceedings of IEEE/ACM ISMAR ’06, pp 177–180
Hartley RI, Zisserman A (2004) Multiple view geometry in computer vision. Cambridge University Press, Cambridge
Jaynes C, Webb S, Steele RM (2004) Camera-based detection and removal of shadows from interactive multiprojector displays. IEEE Trans Vis Comput Graph 10(3):290–301
Kondo D, Kijima R (2002) Proposal of a free form projection display using the principle of duality rendering. In: Proceedings of VSMM ’02, pp 346–352
Kondo D, Shiwaku Y, Kijima R (2008) Free form projection display and application. In: Proceedings of IEEE/ACM PROCAMS ’08, pp 31–32
Levoy M, Chen B, Vaish V, Horowitz M, McDowall I, Bolas M (2004) Synthetic aperture confocal imaging. ACM Trans Graph 23(3):825–834
Low KL, Welch G, Lastra A, Fuchs H (2001) Life-sized projector-based dioramas. In: Proceedings of ACM VRST ’01, pp 93–101
Majumder A, Brown MS (2007) Practical multi-projector display design. A K Peters, USA
Oyamada Y, Saito H (2008) Defocus blur correcting projector-camera system. Lect Notes Comput Sci 5259:453–464
Park H, Lee MH, Kim SJ, Park JI (2006) Surface-independent direct-projected augmented reality. In: Proceedings of ACCV ’06, vol 2, pp 892–901
Raskar R, Welch G, Low KL, Bandyopadhyay D (2001) Shader lamps: animating real objects with image-based illumination. In: Proceedings of Eurographics EGWR ’01, pp 89–102
Sato K, Inokuchi S (1987) Range-imaging system utilizing nematic liquid crystal mask. In: Proceedings of IEEE ICCV ’87, pp 657–661
Sukthankar R, Cham TJ, Sukthankar G (2001) Dynamic shadow elimination for multi-projector displays. In: Proceedings of IEEE CVPR ’01, vol II, pp 151–157
Zhang L, Nayar S (2006) Projection defocus analysis for scene capture and image display. ACM Trans Graph 25(3):907–915
Appendix: Geometric calibration of projector
We apply a geometric calibration method based on gray-code projection, proposed by Sato and Inokuchi (1987).
Suppose that a 3D point (X, Y, Z) in the world coordinate system is projected onto a point (x, y) on the 2D image plane. According to the pinhole camera model, the projection is described by the perspective equation with the 3 × 4 perspective projection matrix C:

\[ h \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = C \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}, \qquad (14) \]

where

\[ C = \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} \\ c_{21} & c_{22} & c_{23} & c_{24} \\ c_{31} & c_{32} & c_{33} & c_{34} \end{pmatrix}. \]
C is determined up to the scale factor h and therefore has eleven unknown parameters. Because each correspondence yields two equations from (14), the unknown parameters of C can be solved by a least-squares method given six or more correspondences between the 3D world and 2D screen coordinate systems. The perspective projection matrix can be decomposed into intrinsic and extrinsic matrices (Hartley and Zisserman 2004). Once the extrinsic matrix is calculated, the 3D rigid-body transform of the pinhole device (the camera or a projector in our case) in the world coordinate system can be computed. The basic idea of geometric registration of each child projector to target objects is to use a camera to determine the relationship (i.e., the perspective projection matrix or extrinsic matrix) between the object and the projector.
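The least-squares step above can be sketched as follows. This is a minimal illustration assuming NumPy and the standard direct linear transform (DLT) formulation, not the authors' exact solver; the function name `estimate_projection_matrix` is ours.

```python
import numpy as np

def estimate_projection_matrix(world_pts, image_pts):
    """Estimate the 3x4 perspective projection matrix C from n >= 6
    correspondences (X, Y, Z) <-> (x, y) by least squares (DLT)."""
    A = []
    for (X, Y, Z), (x, y) in zip(world_pts, image_pts):
        # Each correspondence yields the two linear equations
        # derived from the perspective equation (14).
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z, -x])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z, -y])
    A = np.asarray(A, dtype=float)
    # The singular vector of the smallest singular value minimizes
    # ||A c|| subject to ||c|| = 1, which fixes the scale factor h.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```

The recovered matrix equals the true C only up to scale (and sign), so in practice one normalizes, e.g., by a chosen entry, before decomposing into intrinsic and extrinsic matrices.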
In the actual calibration process, we position a reference cube (50 mm on a side) with spatially known feature points on the slide stage, within the intersection of the view frusta of the projector and the camera. This fiducial cube determines the world coordinate system (Fig. 14a). First, the camera captures the fiducial cube, and the fiducial points are automatically extracted from the captured image. Then, the intersection points of pairs of grid line segments, each of which connects two fiducial points, are calculated as shown in Fig. 14b; there are 147 intersection points in total. From the 147 correspondences between the world coordinate values (X, Y, Z) and the camera screen coordinate values \((x_c, y_c)\), the perspective projection matrix C is calculated using a least-squares method.
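The intersection of two such grid lines is conveniently computed in homogeneous coordinates, where the cross product of two points gives the line through them and the cross product of two lines gives their intersection. A minimal sketch (assuming NumPy; `line_intersection` is a hypothetical helper, not the authors' implementation):

```python
import numpy as np

def line_intersection(p1, p2, q1, q2):
    """Intersect the line through image points p1, p2 with the line
    through q1, q2, using homogeneous coordinates."""
    to_h = lambda p: np.array([p[0], p[1], 1.0])
    l1 = np.cross(to_h(p1), to_h(p2))  # line through p1 and p2
    l2 = np.cross(to_h(q1), to_h(q2))  # line through q1 and q2
    x = np.cross(l1, l2)               # intersection point (homogeneous)
    return x[:2] / x[2]                # back to inhomogeneous (x, y)
```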
In the geometric calibration of each child projector, horizontal and vertical gray-code patterns are projected onto the fiducial cube from each child projector to compute its perspective projection matrix \(P_i\) (i = 1, ..., 12). The projected scenes are captured by the camera (Fig. 15). The captured images are processed, and correspondences between camera screen and projector screen coordinate values \(((x_c, y_c)\leftrightarrow(x_{pi}, y_{pi}))\) are obtained. Correspondences between world and camera screen coordinate values \(((X, Y, Z)\leftrightarrow(x_c, y_c))\) were already obtained in the camera calibration. Therefore, correspondences between world and projector screen coordinate values \(((X, Y, Z)\leftrightarrow(x_{pi}, y_{pi}))\) are derived from these two sets of correspondences (Fig. 15). From the 147 correspondences, the perspective projection matrix \(P_i\) of each child projector is calculated.
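For reference, the binary-reflected Gray code underlying such structured-light patterns can be encoded and decoded as below. This is a generic sketch of the coding itself, not the authors' image-processing pipeline: each camera pixel's observed bit sequence across the patterns, once decoded, yields the corresponding projector column (or row) index.

```python
def gray_encode(n):
    """Binary-reflected Gray code of integer n; adjacent indices
    differ in exactly one bit, which makes decoding robust to the
    pixel-boundary errors of plain binary patterns."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code: recover the original column/row index."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```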
Because only some of the child projectors are focused on the fiducial cube at a given position, we translate the cube with the slide stage so that the other child projectors come into focus. The camera then captures the cube again, and a new perspective projection matrix C′ is computed. By comparing the extrinsic matrices of C and C′, we can compute the rigid-body transform (i.e., the translation vector) of the cube. We then calibrate the remaining child projectors one by one with the same method as above, taking the translation vector of the cube into account. If any child projectors remain defocused, we move the cube again and repeat the calibration process.
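A sketch of how the cube's translation might be recovered from the two extrinsic matrices, assuming the slide stage applies a pure translation and taking the decomposition of C and C′ into extrinsics \((R, t)\) and \((R', t')\) as given (NumPy assumed; `cube_translation` is a hypothetical helper, not the paper's code):

```python
import numpy as np

def cube_translation(R, t, R_prime, t_prime):
    """Given extrinsics (R, t) and (R', t') of the camera w.r.t. the
    fiducial cube before and after sliding it, return the cube's
    translation vector expressed in the original world (cube) frame.
    Assumes the slide stage applies a pure translation (R' ~= R)."""
    assert np.allclose(R, R_prime, atol=1e-6), "unexpected stage rotation"
    # A cube point maps as X_cam = R X_w + t before and t' after the
    # move, so the camera-frame shift t' - t, rotated back by R^T,
    # is the translation in the world frame.
    return R.T @ (t_prime - t)
```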
We use the fiducial cube as a projection object. In addition, we place a planar surface behind the cube on which spatially known lines are drawn. The system takes a picture of the surface with the camera and recognizes the lines automatically. Because the intrinsic parameters of the camera are derived from C, we can compute the rigid-body transform of the surface relative to the camera. Consequently, the position and orientation of the surface in the world coordinate system are calculated. When the cube is translated by the slide stage, the system captures it and automatically computes the position and orientation of the cube in the world coordinate system.
Nagase, M., Iwai, D. & Sato, K. Dynamic defocus and occlusion compensation of projected imagery by model-based optimal projector selection in multi-projection environment. Virtual Reality 15, 119–132 (2011). https://doi.org/10.1007/s10055-010-0168-4