Abstract
To address the low accuracy of solo-militant action recognition on small sample datasets, this paper proposes a solo-militant behavior analysis method based on ResNet and rule matching. The militant's action is classified in two stages. First, skeleton key points are extracted from the militant's combat video frames with OpenPose. The first-stage classification is then performed by a ResNet deep network operating on the RGB images, combined with a rule set defined over the skeleton key points of the militant's actions. Next, the second-stage classification is performed by a CNN operating on the skeleton map. Finally, the output of the two stages is combined into the final action label. Experimental results show that the proposed method achieves a higher recognition rate for solo-militant actions on small sample datasets.
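The pipeline described above (rule-based candidate filtering over skeleton key points, followed by fusion of two classifier outputs) can be sketched as follows. This is a minimal illustration only: the key-point rules, the action label set, and the score fusion are hypothetical stand-ins, since the paper's actual rule set and network outputs are not reproduced here, and the ResNet/skeleton-CNN scores are replaced by made-up dictionaries.

```python
from typing import Dict, List, Tuple

# Hypothetical action labels; the paper's exact label set is not given here.
ACTIONS = ["stand", "crouch", "crawl", "shoot"]

def rule_match(keypoints: Dict[str, Tuple[float, float]]) -> List[str]:
    """First level: keep only actions whose skeleton-key-point rules fire.

    The rules below are illustrative (e.g. hips near knee height suggest a
    low posture); they are not the paper's actual rule set.
    Coordinates are normalized, with image y growing downward.
    """
    head_y = keypoints["head"][1]
    hip_y = keypoints["hip"][1]
    knee_y = keypoints["knee"][1]
    candidates: List[str] = []
    if head_y < hip_y < knee_y:          # upright posture
        candidates += ["stand", "shoot"]
    if abs(hip_y - knee_y) < 0.1:        # hips near knees -> low posture
        candidates += ["crouch", "crawl"]
    return candidates or ACTIONS          # fall back to all classes if no rule fires

def fuse(rgb_scores: Dict[str, float],
         skeleton_scores: Dict[str, float],
         candidates: List[str]) -> str:
    """Second level: combine the RGB-network and skeleton-network scores
    over the rule-filtered candidate set and output the final class."""
    return max(candidates, key=lambda a: rgb_scores[a] + skeleton_scores[a])

# Usage with made-up scores standing in for the ResNet / skeleton-CNN outputs.
kp = {"head": (0.5, 0.1), "hip": (0.5, 0.5), "knee": (0.5, 0.8)}
cands = rule_match(kp)                    # upright rule fires
rgb = {"stand": 0.7, "crouch": 0.1, "crawl": 0.05, "shoot": 0.6}
skel = {"stand": 0.3, "crouch": 0.2, "crawl": 0.1, "shoot": 0.65}
print(fuse(rgb, skel, cands))             # -> shoot
```

The design point this sketch captures is that the rule set prunes the candidate space before the learned classifiers vote, which is what makes the two-level scheme more robust on small training sets.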
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Tong, L., Feng, J., Zhao, H., Liu, K. (2022). Action Recognition for Solo-Militant Based on ResNet and Rule Matching. In: Tan, Y., Shi, Y. (eds) Data Mining and Big Data. DMBD 2022. Communications in Computer and Information Science, vol 1744. Springer, Singapore. https://doi.org/10.1007/978-981-19-9297-1_15
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-9296-4
Online ISBN: 978-981-19-9297-1
eBook Packages: Computer Science (R0)