Abstract
Learning patterns of human-related activity in outdoor urban spaces, and utilising them to detect activity outliers that represent events of interest, can have important applications in automatic news generation and security. This paper addresses the problem of detecting both expected and unexpected activities in the visual domain. We use a foreground extraction method to mark people and vehicles in videos from city surveillance cameras as foreground blobs. The extracted foreground blobs are then converted into an activity measure indicating how crowded the scene is in any given video frame. This activity measure, collected over the period of a day, is used to build an activity feature vector describing that day. Day activity vectors are then clustered into distinct activity patterns. Common patterns are not considered important, as they represent the everyday norm of urban life at that location. Outliers, on the other hand, are detected and reported as events of interest.
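The abstract outlines a pipeline of foreground extraction, per-frame activity measurement, per-day feature vectors, clustering and outlier reporting. The following Python sketch illustrates one way such a pipeline could be assembled; it is not the authors' implementation. It assumes OpenCV's MOG2 background subtractor as the foreground-extraction step and k-means over day vectors, and the function names, bin count, cluster count and outlier threshold are illustrative choices.

# Illustrative sketch only: foreground fraction per frame as an activity measure,
# day vectors from time-of-day bins, k-means clustering and distance-based outliers.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def day_activity_vector(video_path, bins_per_day=96):
    """Foreground fraction per frame, averaged into fixed time-of-day bins."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    per_frame = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Activity measure: fraction of pixels labelled as foreground (255);
        # shadow pixels (127) are excluded.
        per_frame.append(np.count_nonzero(mask == 255) / mask.size)
    cap.release()
    # Average consecutive frames into equal segments covering the day.
    chunks = np.array_split(np.asarray(per_frame), bins_per_day)
    return np.array([c.mean() if c.size else 0.0 for c in chunks])

def detect_outlier_days(day_vectors, n_clusters=3, z_thresh=2.5):
    """Cluster day vectors and flag days unusually far from their centroid."""
    X = np.vstack(day_vectors)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    z = (dists - dists.mean()) / (dists.std() + 1e-12)
    return np.where(z > z_thresh)[0]  # indices of candidate events of interest

Flagging days by their distance to the nearest cluster centroid is just one simple outlier criterion; the paper's actual clustering and outlier-detection choices may differ.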






Additional information
Part of this work has been carried out in the scope of the EC co-funded project eWALL (FP7-610658).
Cite this article
Astaras, S., Pnevmatikakis, A. & Tan, ZH. Visual Detection of Events of Interest from Urban Activity. Wireless Pers Commun 97, 1877–1888 (2017). https://doi.org/10.1007/s11277-017-4651-z