
Optimal Camera Placement for Automated Surveillance Tasks

Published in: Journal of Intelligent and Robotic Systems

Abstract

Camera placement has an enormous impact on the performance of vision systems, but the best placement to maximize performance depends on the purpose of the system. This paper therefore focuses on the problem of task-specific camera placement. We propose a new camera placement method that optimizes views to provide the highest-resolution images of the objects and motions in the scene that are critical to the performance of a specified task (e.g., motion recognition, visual metrology, or part identification). A general analytical formulation of the observation problem is developed in terms of the motion statistics of a scene and the resolution of observed actions, resulting in an aggregate observability measure. The goal of this system is to optimize, across multiple cameras, the aggregate observability of the set of actions performed in a defined area. The method considers dynamic and unpredictable environments, where the subject of interest changes over time. It does not attempt to measure or reconstruct surfaces or objects, and does not use an internal model of the subjects for reference. As a result, this method differs significantly in its core formulation from camera placement solutions applied to problems such as inspection, reconstruction, or the Art Gallery class of problems. We present tests of the system’s optimized camera placement solutions using real-world data in both indoor and outdoor settings, as well as robot-based experimentation with an ATRV-Jr (All-Terrain Robot Vehicle) in an indoor setting.
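The optimization described in the abstract can be illustrated with a minimal sketch: score each candidate camera by the apparent resolution of sampled motions (here a simple proxy combining foreshortening and inverse squared distance), aggregate the best per-camera score over the motion statistics, and search over a discrete set of placements. All names, the scoring proxy, and the sample trajectories below are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch only (not the paper's formulation): aggregate
# observability of sampled motion paths under candidate camera poses.
import math
import itertools

def observability(cam_pos, cam_dir, point, motion_dir):
    """Score how well one camera observes motion at a point.

    Assumed proxy: apparent resolution falls off with squared distance,
    and foreshortening scales it by how perpendicular the motion is to
    the viewing ray.
    """
    dx, dy = point[0] - cam_pos[0], point[1] - cam_pos[1]
    dist2 = dx * dx + dy * dy
    if dist2 == 0:
        return 0.0
    ray = (dx / math.sqrt(dist2), dy / math.sqrt(dist2))
    # Point must lie in front of the camera (within a 120-degree FOV).
    if ray[0] * cam_dir[0] + ray[1] * cam_dir[1] < math.cos(math.radians(60)):
        return 0.0
    # Motion perpendicular to the viewing ray projects at full length.
    cross = ray[0] * motion_dir[1] - ray[1] * motion_dir[0]
    return abs(cross) / dist2

def aggregate_observability(cam_poses, samples):
    """Mean over motion samples of the best single-camera score."""
    return sum(
        max(observability(p, d, pt, m) for p, d in cam_poses)
        for pt, m in samples
    ) / len(samples)

# Stand-in motion statistics: points on two straight walking paths,
# each paired with its unit motion direction.
samples = [((float(x), 2.0), (1.0, 0.0)) for x in range(8)] + \
          [((4.0, float(y)), (0.0, 1.0)) for y in range(8)]

# Exhaustive search over a small discrete set of two-camera placements
# (position, viewing direction) at the corners of the monitored area.
candidates = [((0.0, 0.0), (0.707, 0.707)),
              ((8.0, 0.0), (-0.707, 0.707)),
              ((0.0, 8.0), (0.707, -0.707)),
              ((8.0, 8.0), (-0.707, -0.707))]
best = max(itertools.combinations(candidates, 2),
           key=lambda pair: aggregate_observability(pair, samples))
print("best pair:", best)
```

A real system would replace the grid of candidate poses with continuous optimization and replace the proxy score with a task-specific resolution model, but the aggregate-then-maximize structure is the same.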



Author information

Corresponding author

Correspondence to Nikolaos Papanikolopoulos.


About this article

Cite this article

Bodor, R., Drenner, A., Schrater, P. et al. Optimal Camera Placement for Automated Surveillance Tasks. J Intell Robot Syst 50, 257–295 (2007). https://doi.org/10.1007/s10846-007-9164-7
