
Action Recognition for Solo-Militant Based on ResNet and Rule Matching

  • Conference paper

Data Mining and Big Data (DMBD 2022)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1744)

Abstract

To address the low accuracy of solo-militant action recognition on small-sample datasets, this paper proposes a solo-militant behavior analysis method based on ResNet and rule matching. The militant's action is classified in two levels. First, skeleton key points are extracted from the militant's combat video frames by OpenPose. Then, a first-level classification is performed by a ResNet deep network on the RGB images, combined with a skeleton-key-point rule set for militant actions. Next, a second-level classification is performed by a CNN on the skeleton map. Finally, the action class is output according to the two levels of classification. Experimental results show that the proposed method achieves a higher recognition rate for solo-militant actions on a small-sample dataset.
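The abstract does not spell out how the two classification levels are combined, so the following is only a minimal sketch of one plausible decision-fusion scheme: the action names, the skeleton rule, and the confidence threshold are illustrative assumptions, not the authors' actual rule set or network outputs.

```python
def rule_matches(action, keypoints):
    """Toy skeleton-key-point rule: a 'crouch' requires the hip to sit
    close to the shoulder in normalized image coordinates (compressed
    torso). Rules for the other classes are omitted in this sketch."""
    if action == "crouch":
        shoulder_y = keypoints["shoulder"][1]
        hip_y = keypoints["hip"][1]
        return hip_y - shoulder_y < 0.3
    return True  # no rule defined for this class in the sketch


def fuse(level1_probs, level2_probs, keypoints, conf_thresh=0.7):
    """Accept the first-level (RGB ResNet) prediction when it is both
    confident and consistent with the skeleton rule set; otherwise fall
    back to the second-level (skeleton-map CNN) prediction."""
    a1 = max(level1_probs, key=level1_probs.get)
    if level1_probs[a1] >= conf_thresh and rule_matches(a1, keypoints):
        return a1
    return max(level2_probs, key=level2_probs.get)


# Confident, rule-consistent first-level prediction is accepted:
kp = {"shoulder": (0.5, 0.4), "hip": (0.5, 0.6)}
p1 = {"stand": 0.10, "crouch": 0.80, "crawl": 0.05, "shoot": 0.05}
p2 = {"stand": 0.20, "crouch": 0.60, "crawl": 0.10, "shoot": 0.10}
print(fuse(p1, p2, kp))  # crouch

# Rule violated (torso not compressed) -> fall back to the second level:
kp2 = {"shoulder": (0.5, 0.2), "hip": (0.5, 0.7)}
p2b = {"stand": 0.20, "crouch": 0.10, "crawl": 0.60, "shoot": 0.10}
print(fuse(p1, p2b, kp2))  # crawl
```

The fallback design matches the paper's motivation: on small-sample data the RGB branch alone is unreliable, so its output is only trusted when the skeleton rules corroborate it.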




Corresponding author

Correspondence to Kun Liu.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Tong, L., Feng, J., Zhao, H., Liu, K. (2022). Action Recognition for Solo-Militant Based on ResNet and Rule Matching. In: Tan, Y., Shi, Y. (eds) Data Mining and Big Data. DMBD 2022. Communications in Computer and Information Science, vol 1744. Springer, Singapore. https://doi.org/10.1007/978-981-19-9297-1_15

  • DOI: https://doi.org/10.1007/978-981-19-9297-1_15
  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-9296-4

  • Online ISBN: 978-981-19-9297-1

  • eBook Packages: Computer Science, Computer Science (R0)
