Human activity recognition algorithm in video sequences based on the fusion of multiple features for realistic and multi-view environment

Published in: Multimedia Tools and Applications

Abstract

Video-based human activity recognition (HAR) is an active and challenging research area in computer vision. Camera motion, irregular human motion, varying illumination conditions, complex backgrounds, and variations in the shape and size of human objects across video clips of the same activity category make recognition difficult. To overcome these challenges, we introduce a novel feature representation technique for human activity recognition based on the fusion of multiple features. This paper presents a robust, view-invariant feature descriptor that combines motion information with the local appearance of human objects for video-based human activity recognition in realistic and multi-view environments. First, we compute the dynamic pattern of motion information using a combination of Optical Flow (OF) and Histogram of Oriented Gradients (HOG). Then, we compute shape information by combining the Local Ternary Pattern (LTP) and Zernike Moment (ZM) descriptors. Finally, a feature fusion strategy integrates the motion and shape information into the final feature vector. Experiments on three publicly available video datasets, IXMAS, CASIA, and TV Human Interaction (TV-HI), achieve classification accuracies of 98.25% on IXMAS, 92.21% on CASIA Single Person, 98.66% on CASIA Interaction, and 96.48% on TV-HI. Results are evaluated in terms of seven performance measures: accuracy, precision, recall, specificity, F-measure, Matthews correlation coefficient (MCC), and computation time. Comparisons with existing state-of-the-art methods demonstrate the effectiveness of the proposed method.
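To make the fusion idea in the abstract concrete, the following is a minimal NumPy-only sketch, not the authors' implementation: a simplified HOG-style orientation histogram of a frame-difference image stands in for the OF + HOG motion descriptor, a toy one-neighbour local ternary pattern histogram stands in for the LTP + ZM shape descriptor, and the two are fused by simple concatenation. All function names, the threshold `t`, and the bin counts are illustrative assumptions.

```python
import numpy as np

def hog_like_histogram(frame, bins=9):
    # Gradient-orientation histogram weighted by gradient magnitude
    # (a whole-frame simplification of HOG).
    gy, gx = np.gradient(frame.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientations in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

def ltp_histogram(frame, t=5):
    # Toy local ternary pattern against the right-hand neighbour only:
    # +1 if the neighbour exceeds the centre by t, -1 if it falls below
    # the centre by t, 0 otherwise; then histogram the three codes.
    c = frame[:, :-1].astype(int)
    n = frame[:, 1:].astype(int)
    codes = np.where(n > c + t, 1, np.where(n < c - t, -1, 0))
    hist = np.array([(codes == v).sum() for v in (-1, 0, 1)], dtype=float)
    return hist / hist.sum()

def fused_feature(prev_frame, frame):
    # Motion part: orientation histogram of the temporal difference image
    # (a stand-in for optical-flow-based motion information).
    motion = hog_like_histogram(np.abs(frame.astype(int) - prev_frame.astype(int)))
    shape = ltp_histogram(frame)          # shape/appearance part
    return np.concatenate([motion, shape])  # concatenation-style fusion

rng = np.random.default_rng(0)
f0 = rng.integers(0, 256, (64, 64), dtype=np.uint8)
f1 = rng.integers(0, 256, (64, 64), dtype=np.uint8)
vec = fused_feature(f0, f1)
print(vec.shape)  # 9 motion bins + 3 LTP bins -> (12,)
```

In the paper's actual pipeline the fused vector would then be fed to a classifier; concatenation is the simplest fusion strategy, chosen here only to show how motion and shape descriptors combine into one feature vector.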


Data availability

Data will be made available upon reasonable request.


Author information


Corresponding author

Correspondence to Ashish Khare.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Kushwaha, A., Khare, A. & Prakash, O. Human activity recognition algorithm in video sequences based on the fusion of multiple features for realistic and multi-view environment. Multimed Tools Appl 83, 22727–22748 (2024). https://doi.org/10.1007/s11042-023-16364-z

