Applied Human Action Recognition Network Based on SNSP Features

Neural Processing Letters

Abstract

Recognition of human action is a daunting challenge given the embodied and dynamic nature of action sequences. Recently developed commodity depth sensors and skeleton estimation algorithms have renewed interest in skeleton-based human action recognition. This paper performs human action recognition using a novel SNSP descriptor that captures rich spatial information among all skeletal joints. In particular, the proposed SNSP combines and unifies joint information with respect to a prominent joint: the neck is proposed as this super-joint, and the SNSP features are calculated from standard normal, slope, and parameter-space measures anchored at it. We evaluate the proposed approach on three challenging action recognition datasets: the UTD Multimodal Human Action Dataset (UTD-MHAD), the Kinect Activity Recognition Dataset (KARD), and the SBU Kinect Interaction Dataset. The experimental results demonstrate that the proposed method outperforms state-of-the-art human action recognition methods.
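The full feature definitions are behind the paywall, so the sketch below is only a plausible reading of the description above: each joint is paired with the neck (the prominent super-joint), and the connecting line is summarized by its slope, its normal direction, and its parameter-space (rho, theta) form. The function name, the 2-D projection, and the exact formulas are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def snsp_features(joints_xy, neck_idx):
    """Hypothetical slope/normal/parameter-space features for one skeleton frame.

    joints_xy : (J, 2) array of 2-D joint coordinates for a single frame.
    neck_idx  : index of the neck joint, used as the prominent "super-joint".

    This is an illustrative reconstruction of the SNSP idea from the abstract,
    not the authors' published implementation.
    """
    neck = joints_xy[neck_idx]
    feats = []
    for j, joint in enumerate(joints_xy):
        if j == neck_idx:
            continue  # skip the super-joint itself
        dx, dy = joint[0] - neck[0], joint[1] - neck[1]
        slope = np.arctan2(dy, dx)        # orientation (slope) of the neck-to-joint line
        normal = slope + np.pi / 2.0      # direction of that line's normal
        # Hough-style parameter space (rho, theta) of the same line:
        theta = normal % np.pi
        rho = neck[0] * np.cos(theta) + neck[1] * np.sin(theta)
        feats.extend([slope, normal, rho])
    return np.asarray(feats)

# Toy example: one frame of a 5-joint skeleton (head, neck, two hands, hip).
frame = np.array([[0.0, 1.8], [0.0, 1.5], [-0.4, 1.0], [0.4, 1.0], [0.0, 0.9]])
print(snsp_features(frame, neck_idx=1).shape)  # -> (12,): 4 joints x 3 features
```

Concatenating such per-frame vectors over a sequence and feeding them to a classifier would be one straightforward way to use the descriptor; the paper's actual pipeline may differ.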


Data availability

Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.


Acknowledgements

This work is supported by the Fundamental Research Funds for the Central Universities (Grant no. WK2350000002).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to M Shujah Islam.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Islam, M.S., Bakhat, K., Khan, R. et al. Applied Human Action Recognition Network Based on SNSP Features. Neural Process Lett 54, 1481–1494 (2022). https://doi.org/10.1007/s11063-021-10585-9

