
Rainbow Flash Camera: Depth Edge Extraction Using Complementary Colors

Published in: International Journal of Computer Vision

Abstract

We present a novel color multiplexing method for extracting depth edges in a scene. It has been shown that casting shadows from different light positions provides a simple yet robust cue for extracting depth edges. Instead of flashing a single light source at a time as in conventional methods, our method flashes all light sources simultaneously to reduce the number of captured images. We use a ring light source around a camera and arrange colors on the ring such that the colors form a hue circle. Since complementary colors are arranged at each position and its antipode on the ring, shadow regions, where half of the hue circle is occluded, are colorized according to the orientations of depth edges, while non-shadow regions, where all the hues are mixed, have a neutral color in the captured image. Thus, in an ideal situation, the colored shadows in a single image directly provide depth edges and their orientations. We present an algorithm that extracts depth edges from a single image by analyzing the colored shadows. We also present a more robust depth edge extraction algorithm that uses an additional image, captured by rotating the hue circle by \(180^\circ \), to compensate for scene textures and ambient lights. We compare our approach with conventional methods for various scenes using a camera prototype consisting of a standard camera and 8 color LEDs. We also demonstrate a bin-picking system using the camera prototype mounted on a robot arm.
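The complementary-color arrangement can be sketched numerically (a toy illustration, not the paper's implementation): placing N fully saturated hues evenly around the ring makes each light and its antipodal partner sum to a neutral color, so the full ring mixes to white in non-shadow regions.

```python
import colorsys

# Sketch: N point lights evenly spaced on a ring, each assigned the fully
# saturated RGB color of its position on the hue circle. N = 8 matches the
# paper's prototype of 8 color LEDs.
N = 8
ring = [colorsys.hsv_to_rgb(i / N, 1.0, 1.0) for i in range(N)]

# Each light and its antipode (indices i and i + N/2) are complementary:
# their RGB contributions sum to an achromatic (1, 1, 1).
for i in range(N // 2):
    pair_sum = [a + b for a, b in zip(ring[i], ring[i + N // 2])]
    print(i, [round(c, 3) for c in pair_sum])  # each pair sums to (1, 1, 1)

# All lights together therefore produce neutral illumination where no light
# is occluded, while occluding half the ring leaves a colored residue.
total = [sum(c[k] for c in ring) / N for k in range(3)]
```

The same cancellation holds for any even N, which is why occluding a contiguous half of the ring (as at a depth edge) is exactly the situation that breaks the neutrality and colors the shadow.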


Notes

  1. For curved depth edges, we consider the tangent line at each depth edge point.

    Fig. 4 (Left) Depth edge extraction in 3D using a ring light source having a continuous hue circle. At a depth edge, half of the hue circle is occluded and the hues in the other half are integrated in the shadow region. This produces a colored shadow whose hue corresponds to the average of the integrated hues. The shadow color directly provides the orientation of the depth edge. On the other hand, all the hues are mixed in non-shadow regions, producing a neutral (white) color. Therefore, to extract shadow regions, the saturation component of each pixel (i.e., the distance from the origin on the hue-saturation plane) can be used as a confidence measure. (Right) Robust depth edge extraction using two images. We capture the two images by using a hue circle \(H\) and its complementary version \(\bar{H}\) and compute the distance between the shadow colors on the hue-saturation plane to obtain a pixel-wise shadow confidence map (Color figure online)

    Fig. 5 Colors captured by the camera against the duty cycle for each RGB sub-LED (Color figure online)
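The confidence measures described in the Fig. 4 caption can be sketched as follows (an illustrative reading with hypothetical pixel values, not the paper's code): each pixel is mapped to Cartesian coordinates on the hue-saturation plane, and the two-image confidence is the distance between its positions under \(H\) and \(\bar{H}\).

```python
import colorsys
import math

def hs_point(rgb):
    """Map an RGB pixel to Cartesian coordinates on the hue-saturation
    plane: (s*cos(2*pi*h), s*sin(2*pi*h)). Neutral pixels land near the
    origin; strongly colored pixels land far from it."""
    h, s, _v = colorsys.rgb_to_hsv(*rgb)
    return (s * math.cos(2 * math.pi * h), s * math.sin(2 * math.pi * h))

def shadow_confidence(rgb_H, rgb_Hbar):
    """Distance on the hue-saturation plane between the same pixel
    captured under hue circle H and its complementary version H-bar
    (a sketch of the two-image confidence; the function names are ours)."""
    x1, y1 = hs_point(rgb_H)
    x2, y2 = hs_point(rgb_Hbar)
    return math.hypot(x1 - x2, y1 - y2)

# A non-shadow pixel sees all hues mixed, so it is near-neutral in both
# images and its two positions nearly coincide (low confidence):
low = shadow_confidence((0.8, 0.8, 0.8), (0.78, 0.79, 0.8))

# A shadow pixel sees only half the hue circle; under H-bar the occluded
# half is the complementary one, so its color flips across the origin
# (high confidence). Example: reddish under H, cyan-ish under H-bar.
high = shadow_confidence((0.9, 0.3, 0.3), (0.3, 0.9, 0.9))
print(low < high)  # True
```

Because complementary shadow colors sit on opposite sides of the hue-saturation plane, the two-image distance roughly doubles the single-image saturation cue while canceling a common texture or ambient bias.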

  2. Vaquero et al. (2008) showed that three light positions are sufficient to cast shadows at all depth edges in general scenes. However, this configuration distinguishes only three depth edge orientations.

  3. We observed that \(D_\mathrm{HS} (M_1, W)\) is not sensitive to the scale factor \(t\) as long as the pixels in \(W\) are neither saturated nor too dark. This is expected because \(t\) only affects the brightness (value) component of the image. In our experiments we set \(t = 4\).
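This insensitivity can be checked directly: scaling an unclipped RGB pixel by \(t\) leaves its hue and saturation unchanged and multiplies only the value component. A minimal sketch with an arbitrary example pixel (the pixel values are ours, chosen so that no channel exceeds 1 after scaling):

```python
import colorsys

# An example dark pixel and the scale factor from the experiments.
rgb = (0.18, 0.05, 0.12)
t = 4.0
scaled = tuple(t * c for c in rgb)  # stays below 1.0, so no channel clips

h1, s1, v1 = colorsys.rgb_to_hsv(*rgb)
h2, s2, v2 = colorsys.rgb_to_hsv(*scaled)

# Hue and saturation are ratios of channel differences, so they are
# invariant under a uniform scale; only the value component grows by t.
print(abs(h1 - h2) < 1e-9, abs(s1 - s2) < 1e-9, abs(v2 - t * v1) < 1e-9)
```

Any distance computed on the hue-saturation plane, such as \(D_\mathrm{HS}\), is therefore unaffected by \(t\) until a channel saturates (clipping) or quantization noise dominates (very dark pixels).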

References

  • Agrawal, A., Sun, Y., Barnwell, J., & Raskar, R. (2010). Vision-guided robot system for picking objects by casting shadows. The International Journal of Robotics Research, 29(2–3), 155–173.


  • Canny, J. (1986). A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6), 679–698.


  • Chen, C., Vaquero, D., & Turk, M. (2011). Illumination demultiplexing from a single image. In Proceedings of IEEE international conference computer vision (ICCV).

  • Crispell, D., Lanman, D., Sibley, P. G., Zhao, Y., & Taubin, G. (2006). Beyond silhouettes: Surface reconstruction using multi-flash photography. In Proceedings of international symposium 3D data processing, visualization, and transmission (3DPVT), (pp. 405–412).

  • De Decker, B., Kautz, J., Mertens, T., & Bekaert, P. (2009). Capturing multiple illumination conditions using time and color multiplexing. In Proceedings of IEEE conference computer vision and pattern recognition (CVPR), (pp. 2536–2543).

  • Feris, R., Raskar, R., Chen, L., Tan, K. H., & Turk, M. (2008). Multiflash stereopsis: Depth-edge-preserving stereo with small baseline illumination. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(1), 147–159.


  • Feris, R., Raskar, R., Tan, K. H., & Turk, M. (2004). Specular reflection reduction with multi-flash imaging. In Proceedings of Brazilian symposium computer graphics and image processing (SIBGRAPI), (pp. 316–321).

  • Feris, R., Turk, M., & Raskar, R. (2006). Dealing with multi-scale depth changes and motion in depth edge detection. In Proceedings of Brazilian symposium computer graphics and image processing (SIBGRAPI), (pp. 3–10).

  • Fyffe, G., Yu, X., & Debevec, P. (2011). Single-shot photometric stereo by spectral multiplexing. In Proceedings of IEEE international conference computational photography (ICCP), (pp. 1–6).

  • Hernandez, C., Vogiatzis, G., Brostow, G. J., Stenger, B., & Cipolla, R. (2007). Non-rigid photometric stereo with colored lights. In Proceedings of IEEE international conference computer vision (ICCV), (pp. 1–8).

  • Liu, M. Y., Tuzel, O., & Taguchi, Y. (2013). Joint geodesic upsampling of depth images. In Proceedings of IEEE conference computer vision and pattern recognition (CVPR).

  • Liu, M. Y., Tuzel, O., Veeraraghavan, A., Taguchi, Y., Marks, T. K., & Chellappa, R. (2012). Fast object localization and pose estimation in heavy clutter for robotic bin picking. The International Journal of Robotics Research, 31(8), 951–973.


  • MacEvoy, B. (2008). Color vision. http://www.handprint.com/LS/CVS/color.html.

  • Minomo, Y., Kakehi, Y., Iida, M., & Naemura, T. (2006). Transforming your shadow into colorful visual media: Multiprojection of complementary colors. Computers in Entertainment, 4(3).

  • Park, J. I., Lee, M. H., Grossberg, M. D., & Nayar, S. K. (2007). Multispectral imaging using multiplexed illumination. In Proceedings of IEEE international conference computer vision (ICCV).

  • Raskar, R., Tan, K. H., Feris, R., Yu, J., & Turk, M. (2004). Non-photorealistic camera: Depth edge detection and stylized rendering using multi-flash imaging. ACM Transactions on Graphics, 23(3), 679–688.


  • Sá, A. M., Carvalho, P. C. P., & Velho, L. (2002). \((b, s)\)-BCSL: Structured light color boundary coding for 3D photography. In Proceedings of vision, modeling, and visualization conference (VMV).

  • Schechner, Y., Nayar, S., & Belhumeur, P. (2003). A theory of multiplexed illumination. In Proceedings of IEEE international conference computer vision (ICCV), (Vol. 2, pp. 808–815).

  • Shroff, N., Taguchi, Y., Tuzel, O., Veeraraghavan, A., Ramalingam, S., & Okuda, H. (2011). Finding a needle in a specular haystack. In Proceedings of IEEE international conference robotics automation (ICRA), (pp. 5963–5970).

  • Taguchi, Y. (2012). Rainbow flash camera: Depth edge extraction using complementary colors. In Proceedings of European conference computer vision (ECCV), (Vol. 6, pp. 513–527).

  • Vaquero, D. A., Feris, R. S., Turk, M., & Raskar, R. (2008). Characterizing the shadow space of camera-light pairs. In Proceedings of IEEE conference computer vision and pattern recognition (CVPR).

  • Vaquero, D. A., Raskar, R., Feris, R. S., & Turk, M. (2009). A projector-camera setup for geometry-invariant frequency demultiplexing. In Proceedings of IEEE conference computer vision and pattern recognition (CVPR), (pp. 2082–2089).

  • Wan, G., Horowitz, M., & Levoy, M. (2012). Applications of multi-bucket sensors to computational photography. Tech. Rep. 2012–2, Stanford Computer Graphics Laboratory.

  • Woodham, R. J. (1980). Photometric method for determining surface orientation from multiple images. Optical Engineering, 19(1), 139–144.



Acknowledgments

The author thanks Jay Thornton for many valuable discussions and for naming the proposed camera. The author also thanks Ramesh Raskar, Amit Agrawal, Oncel Tuzel, Srikumar Ramalingam, Tim K. Marks, Ming-Yu Liu, Makito Seki, and Yukiyasu Domae for their feedback and support, and the anonymous reviewers of ECCV 2012 and this journal submission for their helpful comments. Special thanks to John Barnwell, William Yerazunis, and Abraham Goldsmith for their help in developing the camera prototype.

Author information


Corresponding author

Correspondence to Yuichi Taguchi.

Additional information

Communicated by Dr. Srinivas Narasimhan, Dr. Frédo Durand and Dr. Wolfgang Heidrich.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (wmv 24418 KB)


About this article


Cite this article

Taguchi, Y. Rainbow Flash Camera: Depth Edge Extraction Using Complementary Colors. Int J Comput Vis 110, 156–171 (2014). https://doi.org/10.1007/s11263-014-0726-4

