
Baseline Detection and Localization for Invisible Omnidirectional Cameras

  • Published in: International Journal of Computer Vision

Abstract

Two key problems for camera networks that observe wide areas with many distributed cameras are self-localization and camera identification. Although there are many methods for localizing the cameras, one of the easiest and most desirable is to estimate camera positions by having the cameras observe each other; hence the term self-localization. If the cameras have a wide viewing field, e.g. an omnidirectional camera, and can observe each other, the baseline distances between pairs of cameras and their relative locations can be determined. However, if the projection of a camera is relatively small on the images of the other cameras and is not readily visible, the baselines cannot be detected. In this paper, a method is proposed to determine the baselines and relative locations of these “invisible” cameras. The method consists of two processes executed simultaneously: (a) statistically detecting the baselines among the cameras, and (b) localizing the cameras by using the information from (a) and propagating triangle constraints. Process (b) handles localization when the cameras can observe each other, and it does not require complete observation among the cameras. However, it does not work when many cameras cannot observe each other because of poor image resolution. The baseline detection of process (a) solves this problem. The methodology is described in detail and results are provided for several scenarios.
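To make process (b) concrete, below is a minimal sketch of triangle-constraint propagation, not the implementation used in the paper. It assumes each detected baseline already yields a bearing (azimuth) from one camera toward another expressed in a common frame, and that two seed cameras are fixed to define the global position and scale; the names `triangulate`, `propagate_triangles`, `bearings`, and `seed` are illustrative, not taken from the paper.

```python
import numpy as np

def triangulate(p_a, p_b, bearing_a, bearing_b):
    """Intersect two 2-D bearing rays cast from known positions p_a and p_b.

    bearing_a and bearing_b are common-frame azimuths (radians) from each
    known camera toward the same unknown camera.
    """
    d_a = np.array([np.cos(bearing_a), np.sin(bearing_a)])
    d_b = np.array([np.cos(bearing_b), np.sin(bearing_b)])
    # Solve p_a + t * d_a = p_b + s * d_b for the ray parameters t and s.
    A = np.column_stack((d_a, -d_b))
    t, _ = np.linalg.solve(A, np.asarray(p_b, float) - np.asarray(p_a, float))
    return np.asarray(p_a, float) + t * d_a

def propagate_triangles(bearings, seed):
    """Greedily localize every camera seen by two already-localized cameras.

    bearings[(i, j)] is the azimuth from camera i to camera j; seed maps
    two camera ids to fixed positions that define the frame and the scale.
    """
    positions = dict(seed)
    changed = True
    while changed:
        changed = False
        unknown = {j for _, j in bearings if j not in positions}
        for j in unknown:
            observers = [i for i in positions if (i, j) in bearings]
            if len(observers) >= 2:
                a, b = observers[:2]
                positions[j] = triangulate(positions[a], positions[b],
                                           bearings[(a, j)], bearings[(b, j)])
                changed = True
    return positions

# Toy example: three cameras at the corners of a 3-4-5 right triangle.
truth = {0: np.array([0.0, 0.0]), 1: np.array([4.0, 0.0]), 2: np.array([0.0, 3.0])}
bearings = {(i, j): float(np.arctan2(*(truth[j] - truth[i])[::-1]))
            for i in truth for j in truth if i != j}
print(propagate_triangles(bearings, {0: truth[0], 1: truth[1]}))  # camera 2 -> [0. 3.]
```

In the paper's setting the bearings come from mutual observations between omnidirectional cameras, and process (a) supplies baselines that are too small to see directly; the greedy loop above only illustrates how one localized triangle lets further cameras be localized in turn.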




About this article

Cite this article

Ishiguro, H., Sogo, T. & Barth, M. Baseline Detection and Localization for Invisible Omnidirectional Cameras. International Journal of Computer Vision 58, 209–226 (2004). https://doi.org/10.1023/B:VISI.0000019687.45792.f0
