
An occlusion metric for selecting robust camera configurations

  • Original Paper
  • Published in: Machine Vision and Applications

Abstract

Vision-based tracking systems for surveillance and motion capture rely on a set of cameras to sense the environment. The exact placement, or configuration, of these cameras can have a profound effect on the quality of tracking that is achievable. Although several factors contribute, occlusion due to moving objects within the scene itself is often the dominant source of tracking error. This work introduces a configuration quality metric based on the likelihood of dynamic occlusion. Since the exact geometry of occluders cannot be known a priori, we use a probabilistic model of occlusion. This model is evaluated extensively in experiments using hundreds of different camera configurations and is found to correlate closely with the actual probability of feature occlusion.
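The abstract does not give the model's equations, but the core idea — scoring a camera configuration by how likely a tracked feature is to survive dynamic occlusion — can be illustrated with a small Monte Carlo sketch. Everything below is an illustrative assumption, not the paper's actual model: occluders are random 2-D discs, the arena size and counts are arbitrary, and `occlusion_metric` is a hypothetical name. The estimated quantity is the probability that at least one camera has an unblocked line of sight to the target.

```python
import random


def ray_blocked(cam, target, occluders, radius):
    """Return True if the 2-D segment cam->target passes within `radius`
    of any occluder centre (occluders modelled as discs, an assumption)."""
    (cx, cy), (tx, ty) = cam, target
    dx, dy = tx - cx, ty - cy
    seg_len2 = dx * dx + dy * dy
    for (ox, oy) in occluders:
        # Project the occluder centre onto the segment, clamped to [0, 1].
        t = max(0.0, min(1.0, ((ox - cx) * dx + (oy - cy) * dy) / seg_len2))
        px, py = cx + t * dx, cy + t * dy
        if (px - ox) ** 2 + (py - oy) ** 2 < radius * radius:
            return True
    return False


def occlusion_metric(cameras, target, n_trials=2000, n_occluders=3,
                     occluder_radius=0.3, arena=5.0, seed=0):
    """Monte Carlo estimate of P(target visible from >= 1 camera) under
    uniformly random disc occluders (a toy stand-in for the paper's
    probabilistic occlusion model)."""
    rng = random.Random(seed)
    visible = 0
    for _ in range(n_trials):
        occluders = [(rng.uniform(-arena, arena), rng.uniform(-arena, arena))
                     for _ in range(n_occluders)]
        if any(not ray_blocked(c, target, occluders, occluder_radius)
               for c in cameras):
            visible += 1
    return visible / n_trials


# A wide-baseline pair rarely has both views blocked by the same occluder,
# so it should score higher than two nearly coincident cameras.
wide = occlusion_metric([(-5.0, 0.0), (5.0, 0.0)], (0.0, 0.0))
narrow = occlusion_metric([(-5.0, 0.0), (-5.0, 0.5)], (0.0, 0.0))
```

Under this toy model the metric already captures the intuition the paper formalizes: configurations whose viewing rays are nearly parallel fail together under occlusion, while well-spread cameras provide redundant, weakly correlated lines of sight.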



Author information


Corresponding author

Correspondence to James Davis.

Additional information

Authors X. Chen and J. Davis were in the Computer Graphics Lab at Stanford University at the time of this research.


About this article

Cite this article

Chen, X., Davis, J. An occlusion metric for selecting robust camera configurations. Machine Vision and Applications 19, 217–222 (2008). https://doi.org/10.1007/s00138-007-0094-y

