
Utility based decision support engine for camera view selection in multimedia surveillance systems

Published in: Multimedia Tools and Applications

Abstract

Design and implementation of an effective surveillance system is a challenging task. In practice, a large number of CCTV cameras are installed to deter illegal and unacceptable activities, and a human operator observes the different camera views to identify alarming situations. However, relying on the human operator for real-time response can be costly, as the operator may be unable to pay full attention to all camera views at the same time. Moreover, the complexity of a situation may not be easily perceivable by the operator, who might therefore require additional support to respond to an adverse situation. In this paper, we present a Decision Support Engine (DSE) that selects and schedules the most appropriate camera views to help the operator make an informed decision. For this purpose, we devise a utility-based approach in which the utility value changes based on automatically detected events in different surveillance zones, event correlation, and operator feedback. In addition to the selected camera views, we propose to synthetically embed extra information around the camera views, such as an event summary and a suggested action plan, to help the operator perceive the current situation globally. The experimental results show the usefulness of the proposed decision support system.
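To make the utility-driven view selection concrete, the sketch below (not part of the original article) shows one way a scoring and scheduling step of this kind could be structured in Python. It is a minimal illustration under stated assumptions, not the paper's actual formulation: the event weights, the zone-correlation boost, the feedback term, and names such as view_utility and select_views are all introduced here for clarity.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical event severities; the paper's actual utility model is not reproduced here.
EVENT_WEIGHTS = {"intrusion": 1.0, "loitering": 0.6, "crowding": 0.4}


@dataclass
class CameraView:
    camera_id: str
    zone: str
    events: List[str] = field(default_factory=list)  # events detected in this view
    feedback: float = 0.0                             # operator feedback in [-1, 1]


def view_utility(view: CameraView, zone_correlation: Dict[str, float]) -> float:
    """Combine event severity, cross-zone event correlation, and operator
    feedback into a single utility score (illustrative weighting only)."""
    event_score = sum(EVENT_WEIGHTS.get(e, 0.1) for e in view.events)
    correlation_boost = zone_correlation.get(view.zone, 0.0)
    return event_score * (1.0 + correlation_boost) + 0.5 * view.feedback


def select_views(views: List[CameraView],
                 zone_correlation: Dict[str, float],
                 k: int = 4) -> List[CameraView]:
    """Schedule the k highest-utility camera views for the operator's display."""
    return sorted(views, key=lambda v: view_utility(v, zone_correlation),
                  reverse=True)[:k]


if __name__ == "__main__":
    views = [
        CameraView("cam-1", "lobby", ["loitering"]),
        CameraView("cam-2", "parking", ["intrusion"], feedback=0.8),
        CameraView("cam-3", "corridor", []),
    ]
    # Correlated activity in the parking zone raises the utility of its views.
    correlation = {"parking": 0.5}
    for v in select_views(views, correlation, k=2):
        print(v.camera_id, round(view_utility(v, correlation), 2))
```

In the actual system the utility would be recomputed as new events are detected and as the operator gives feedback, so the displayed set of views changes over time.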



Acknowledgements

This research is supported by the NPST program of King Saud University under Project Number 11-INF1830-02.

Author information

Corresponding author

Correspondence to Dewan Tanvir Ahmed.

Cite this article

Ahmed, D.T., Hossain, M.A., Shirmohammadi, S. et al. Utility based decision support engine for camera view selection in multimedia surveillance systems. Multimed Tools Appl 73, 219–240 (2014). https://doi.org/10.1007/s11042-012-1294-7
