
Visual Detection of Events of Interest from Urban Activity

Published in: Wireless Personal Communications

Abstract

Learning patterns of human-related activities in outdoor urban spaces, and utilising them to detect activity outliers that represent events of interest, can have important applications in automatic news generation and security. This paper addresses the problem of detecting both expected and unexpected activities in the visual domain. We use a foreground extraction method to mark people and vehicles in videos from city surveillance cameras as foreground blobs. The extracted foreground blobs are then converted to an activity measure to indicate how crowded the scene is at any given video frame. The activity measure, collected over the period of a day, is used to build an activity feature vector describing that day. Day activity vectors are then clustered into different patterns of activities. Common patterns in the data are not considered important as they represent the everyday norm of urban life in that location. Outliers, on the other hand, are detected and reported as events of interest.
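The pipeline described in the abstract can be sketched in a few lines of Python. This is a minimal, illustrative stand-in and not the authors' implementation: thresholded differencing against a static background model replaces the paper's foreground extraction method, and a greedy leader clustering replaces whatever clustering the paper applies to day vectors; all function names, the foreground threshold, the cluster radius, and the minimum cluster size are assumptions made for the sketch.

```python
import numpy as np

def activity_measure(frames, background, thresh=30):
    """Fraction of foreground pixels per frame (how crowded the scene is).

    Crude stand-in for the paper's foreground extraction: pixels that
    differ from a static background model by more than `thresh` are
    counted as foreground blob area.
    """
    diff = np.abs(frames.astype(int) - background.astype(int))
    return (diff > thresh).mean(axis=(1, 2))  # one value per frame

def day_vector(measures, bins=24):
    """Aggregate per-frame activity over a day into a fixed-length
    feature vector (mean activity per time bin, e.g. per hour)."""
    chunks = np.array_split(np.asarray(measures, float), bins)
    return np.array([c.mean() for c in chunks])

def cluster_days(day_vectors, radius=0.5):
    """Greedy leader clustering of day vectors: a day joins the first
    cluster whose centroid lies within `radius`, else starts a new one."""
    centroids, members = [], []
    for i, v in enumerate(day_vectors):
        for j, c in enumerate(centroids):
            if np.linalg.norm(v - c) <= radius:
                members[j].append(i)
                centroids[j] = np.mean(
                    [day_vectors[m] for m in members[j]], axis=0)
                break
        else:
            centroids.append(np.asarray(v, float))
            members.append([i])
    return members

def outlier_days(day_vectors, radius=0.5, min_size=3):
    """Days in small clusters deviate from the everyday norm and are
    reported as events of interest."""
    out = []
    for grp in cluster_days(day_vectors, radius):
        if len(grp) < min_size:
            out.extend(grp)
    return sorted(out)
```

Common patterns form large clusters and are discarded as the everyday norm of the location; the days left in sparse clusters are the reported events of interest.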




Notes

  1. Search engine for MultimediA enviRonment generated content, http://www.smartfp7.eu/.


Author information

Correspondence to Aristodemos Pnevmatikakis.

Additional information

Part of this work has been carried out in the scope of the EC co-funded project eWALL (FP7-610658).



Cite this article

Astaras, S., Pnevmatikakis, A. & Tan, ZH. Visual Detection of Events of Interest from Urban Activity. Wireless Pers Commun 97, 1877–1888 (2017). https://doi.org/10.1007/s11277-017-4651-z
