Abstract
We present a novel color multiplexing method for extracting depth edges in a scene. Casting shadows from different light positions has been shown to provide a simple yet robust cue for extracting depth edges. Instead of flashing a single light source at a time as in conventional methods, our method flashes all light sources simultaneously to reduce the number of captured images. We use a ring light source around a camera and arrange colors on the ring so that they form a hue circle. Since complementary colors are placed at antipodal positions on the ring, shadow regions, where half of the hue circle is occluded, are colorized according to the orientations of depth edges, while non-shadow regions, where all the hues are mixed, have a neutral color in the captured image. Thus, in an ideal situation, the colored shadows in a single image directly provide depth edges and their orientations. We present an algorithm that extracts depth edges from a single image by analyzing the colored shadows. We also present a more robust depth edge extraction algorithm that uses an additional image captured by rotating the hue circle by \(180^\circ \) to compensate for scene textures and ambient lights. We compare our approach with conventional methods on various scenes using a camera prototype consisting of a standard camera and eight color LEDs. We also demonstrate a bin-picking system using the camera prototype mounted on a robot arm.
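The core idea in the abstract, complementary colors at antipodal ring positions so that a shadow's hue encodes which half of the ring is occluded, can be illustrated with a minimal sketch. The function names and the exact hue-to-orientation mapping below are hypothetical (the paper's geometry may differ in sign or offset); the sketch only demonstrates that equally spaced hues on a ring of eight LEDs form complementary antipodal pairs that mix to a neutral color.

```python
import colorsys

NUM_LEDS = 8  # matches the prototype's eight color LEDs

def led_colors(n=NUM_LEDS):
    """Hues equally spaced on the hue circle; LED i and LED i + n/2
    sit at antipodal positions and thus carry complementary colors."""
    return [colorsys.hsv_to_rgb(i / n, 1.0, 1.0) for i in range(n)]

def shadow_hue_to_edge_orientation(hue_deg):
    """Illustrative inverse mapping: a shadow colored with hue h is cast
    by the lights on the half of the ring centered at the complementary
    hue, so the depth edge orientation (modulo 180 degrees) follows from
    that direction. This mapping is an assumption for illustration, not
    the paper's exact formula."""
    occluded_center = (hue_deg + 180.0) % 360.0  # complementary direction
    return occluded_center % 180.0
```

Summing any antipodal LED pair gives white, which is why fully lit (non-shadow) regions appear neutral when all lights flash at once.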
Notes
For curved depth edges, we consider the tangent line at each depth edge point.
Vaquero et al. (2008) showed that three light positions are sufficient to cast shadows for all depth edges in general scenes. However, such a configuration distinguishes only three depth edge orientations.
We observed that \(D_\mathrm{HS} (M_1, W)\) is not sensitive to the scale factor \(t\) as long as the pixels in \(W\) are neither saturated nor too dark. This is expected because \(t\) affects only the brightness (value) component of the image. In our experiments we set \(t = 4\).
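The brightness insensitivity noted above can be sketched with a plausible hue-saturation distance: projecting each pixel into the hue-saturation plane (a polar-to-Cartesian mapping, ignoring value) makes the distance invariant to uniform brightness scaling. This is an illustrative stand-in for the paper's \(D_\mathrm{HS}\), not its exact definition.

```python
import colorsys
import math

def hs_distance(rgb_a, rgb_b):
    """Euclidean distance between two RGB pixels in the hue-saturation
    plane. Brightness (value) is discarded, so scaling an RGB triple by
    a factor t leaves the distance unchanged, since HSV saturation and
    hue are invariant under uniform scaling."""
    def hs_point(rgb):
        h, s, _v = colorsys.rgb_to_hsv(*rgb)
        return (s * math.cos(2 * math.pi * h),
                s * math.sin(2 * math.pi * h))
    ax, ay = hs_point(rgb_a)
    bx, by = hs_point(rgb_b)
    return math.hypot(ax - bx, ay - by)
```

Saturated pixels break this invariance because clipping at 1.0 changes the hue and saturation, consistent with the caveat in the note above.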
References
Agrawal, A., Sun, Y., Barnwell, J., & Raskar, R. (2010). Vision-guided robot system for picking objects by casting shadows. The International Journal of Robotics Research, 29(2–3), 155–173.
Canny, J. (1986). A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6), 679–698.
Chen, C., Vaquero, D., & Turk, M. (2011). Illumination demultiplexing from a single image. In Proceedings of IEEE international conference computer vision (ICCV).
Crispell, D., Lanman, D., Sibley, P. G., Zhao, Y., & Taubin, G. (2006). Beyond silhouettes: Surface reconstruction using multi-flash photography. In Proceedings of international symposium 3D data processing, visualization, and transmission (3DPVT), (pp. 405–412).
De Decker, B., Kautz, J., Mertens, T., & Bekaert, P. (2009). Capturing multiple illumination conditions using time and color multiplexing. In Proceedings of IEEE conference computer vision and pattern recognition (CVPR), (pp. 2536–2543).
Feris, R., Raskar, R., Chen, L., Tan, K. H., & Turk, M. (2008). Multiflash stereopsis: Depth-edge-preserving stereo with small baseline illumination. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(1), 147–159.
Feris, R., Raskar, R., Tan, K. H., & Turk, M. (2004). Specular reflection reduction with multi-flash imaging. In Proceedings of Brazilian symposium computer graphics and image processing (SIBGRAPI), (pp. 316–321).
Feris, R., Turk, M., & Raskar, R. (2006). Dealing with multi-scale depth changes and motion in depth edge detection. In Proceedings of Brazilian symposium computer graphics and image processing (SIBGRAPI), (pp. 3–10).
Fyffe, G., Yu, X., & Debevec, P. (2011). Single-shot photometric stereo by spectral multiplexing. In Proceedings of IEEE international conference computational photography (ICCP), (pp. 1–6).
Hernandez, C., Vogiatzis, G., Brostow, G. J., Stenger, B., & Cipolla, R. (2007). Non-rigid photometric stereo with colored lights. In Proceedings of IEEE international conference computer vision (ICCV), (pp. 1–8).
Liu, M. Y., Tuzel, O., & Taguchi, Y. (2013). Joint geodesic upsampling of depth images. In Proceedings of IEEE conference computer vision and pattern recognition (CVPR).
Liu, M. Y., Tuzel, O., Veeraraghavan, A., Taguchi, Y., Marks, T. K., & Chellappa, R. (2012). Fast object localization and pose estimation in heavy clutter for robotic bin picking. The International Journal of Robotics Research, 31(8), 951–973.
MacEvoy, B. (2008). Color vision. http://www.handprint.com/LS/CVS/color.html.
Minomo, Y., Kakehi, Y., Iida, M., & Naemura, T. (2006). Transforming your shadow into colorful visual media: Multiprojection of complementary colors. Computers in Entertainment, 4(3).
Park, J. I., Lee, M. H., Grossberg, M. D., & Nayar, S. K. (2007). Multispectral imaging using multiplexed illumination. In Proceedings of IEEE international conference computer vision (ICCV).
Raskar, R., Tan, K. H., Feris, R., Yu, J., & Turk, M. (2004). Non-photorealistic camera: Depth edge detection and stylized rendering using multi-flash imaging. ACM Transactions on Graphics, 23(3), 679–688.
Sá, A. M., Carvalho, P. C. P., & Velho, L. (2002). \((b, s)\)-BCSL: Structured light color boundary coding for 3D photography. In Proceedings of vision, modeling, and visualization conference (VMV).
Schechner, Y., Nayar, S., & Belhumeur, P. (2003). A theory of multiplexed illumination. In Proceedings of IEEE international conference computer vision (ICCV), (Vol. 2, pp. 808–815).
Shroff, N., Taguchi, Y., Tuzel, O., Veeraraghavan, A., Ramalingam, S., & Okuda, H. (2011). Finding a needle in a specular haystack. In Proceedings of IEEE international conference robotics automation (ICRA), (pp. 5963–5970).
Taguchi, Y. (2012). Rainbow flash camera: Depth edge extraction using complementary colors. In Proceedings of European conference computer vision (ECCV), (Vol. 6, pp. 513–527).
Vaquero, D. A., Feris, R. S., Turk, M., & Raskar, R. (2008). Characterizing the shadow space of camera-light pairs. In Proceedings of IEEE conference computer vision and pattern recognition (CVPR).
Vaquero, D. A., Raskar, R., Feris, R. S., & Turk, M. (2009). A projector-camera setup for geometry-invariant frequency demultiplexing. In Proceedings of IEEE conference computer vision and pattern recognition (CVPR), (pp. 2082–2089).
Wan, G., Horowitz, M., & Levoy, M. (2012). Applications of multi-bucket sensors to computational photography. Tech. Rep. 2012–2, Stanford Computer Graphics Laboratory.
Woodham, R. J. (1980). Photometric method for determining surface orientation from multiple images. Optical Engineering, 19(1), 139–144.
Acknowledgments
The author thanks Jay Thornton for many valuable discussions and for naming the proposed camera. The author also thanks Ramesh Raskar, Amit Agrawal, Oncel Tuzel, Srikumar Ramalingam, Tim K. Marks, Ming-Yu Liu, Makito Seki, and Yukiyasu Domae for their feedback and support, and the anonymous reviewers of ECCV 2012 and this journal submission for their helpful comments. Special thanks to John Barnwell, William Yerazunis, and Abraham Goldsmith for their help in developing the camera prototype.
Additional information
Communicated by Dr. Srinivas Narasimhan, Dr. Frédo Durand and Dr. Wolfgang Heidrich.
Electronic supplementary material
Below is the link to the electronic supplementary material.
Supplementary material 1 (wmv 24418 KB)
Cite this article
Taguchi, Y. Rainbow Flash Camera: Depth Edge Extraction Using Complementary Colors. Int J Comput Vis 110, 156–171 (2014). https://doi.org/10.1007/s11263-014-0726-4