Abstract
We propose a framework, transient imaging, for image formation and scene understanding through impulse illumination and time images. Using time-of-flight cameras and multi-path analysis of global light transport, we develop new algorithms and systems that accomplish tasks well beyond the reach of existing imaging technology: for example, inferring the geometry of not only the visible but also the hidden parts of a scene, enabling us to look around corners. Whereas a traditional camera estimates a single intensity per pixel, I(x,y), our transient imaging camera uses an ultra-short pulse laser for illumination and captures a 3D time-image I(x,y,t) that records, at picosecond resolution, the time profile of the irradiance incident at each sensor pixel. Emerging technologies are beginning to support such per-pixel temporal profiles, allowing us to capture ultra-high-speed time-images. We corroborated our theory experimentally with free-space hardware experiments using a femtosecond laser and a picosecond-accurate sensing device. The ability to infer the structure of hidden scene elements, unobservable by both the camera and the illumination source, will create a range of new computer vision opportunities.
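The core measurement principle, time of flight over multi-bounce light paths, can be illustrated with a toy inversion. The sketch below (plain NumPy; the coordinates, wall-patch positions, and the brute-force grid solver are hypothetical illustrations, not the paper's actual reconstruction algorithm) forward-models the third-bounce arrival time laser → wall patch → hidden point → wall patch → camera, then recovers the hidden point from the measured onset times alone:

```python
import numpy as np

C_LIGHT = 0.3  # speed of light, metres per nanosecond

def bounce_time(laser, patch, hidden, camera):
    """Arrival time (ns) of the third-bounce path
    laser -> patch -> hidden -> patch -> camera."""
    path = (np.linalg.norm(patch - laser)
            + 2.0 * np.linalg.norm(hidden - patch)
            + np.linalg.norm(camera - patch))
    return path / C_LIGHT

# Hypothetical known geometry (metres): laser, camera, and three
# non-collinear visible wall patches the laser illuminates in turn.
laser  = np.array([0.0, 0.0, 0.0])
camera = np.array([0.2, 0.0, 0.0])
patches = [np.array([1.0, 0.0, 0.4]),
           np.array([1.0, 0.5, 0.8]),
           np.array([1.0, 0.9, 0.3])]

# Ground-truth hidden point (unknown to the solver) and its onset times.
hidden_true = np.array([1.5, 0.3, 1.2])
times = [bounce_time(laser, w, hidden_true, camera) for w in patches]

# Invert by brute-force search over a voxel grid of candidate points.
grid = np.mgrid[1.0:2.0:21j, 0.0:1.0:21j, 0.8:1.6:17j].reshape(3, -1).T

def residual(p):
    return sum((bounce_time(laser, w, p, camera) - t) ** 2
               for w, t in zip(patches, times))

best = min(grid, key=residual)
print(best)  # recovers a point close to hidden_true
```

Each onset time constrains the hidden point to a sphere around the illuminated patch (after subtracting the known laser-to-patch and patch-to-camera legs), and a few non-degenerate patches intersect at a point. The real system works with full transient profiles I(x,y,t) and noisy, superposed multi-path returns rather than clean onset times; this toy only shows why picosecond timing makes hidden geometry recoverable in principle.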
Additional information
The Marr Prize is awarded to the best paper(s) at the biennial flagship vision conference, the IEEE International Conference on Computer Vision (ICCV). This paper is an extended and re-reviewed journal version of the conference paper that received Honorable Mention in 2009.
Cite this article
Kirmani, A., Hutchison, T., Davis, J. et al. Looking Around the Corner using Ultrafast Transient Imaging. Int J Comput Vis 95, 13–28 (2011). https://doi.org/10.1007/s11263-011-0470-y