DOI: 10.1145/3443467.3443809

Skeleton-Based Action Recognition with a Triple-Stream Graph Convolutional Network

Published: 01 February 2021

ABSTRACT

Skeleton data is widely used in action recognition because it remains robust under dynamic environments and complex backgrounds. In existing methods, two kinds of information, joints and bone segments, have proven very useful. However, the implicit motion information between bones is usually ignored. How to combine these three kinds of information, how to fully exploit joint information together with the relationships between bone segments and bone motion, and how to represent bone motion remain open problems. This paper designs a triple-stream undirected graph neural network, named 3s-AGCN, to extract features of joints, bone segments, and bone motion for action recognition. The final model is evaluated on the large-scale NTU RGB+D dataset and achieves accuracy on par with current state-of-the-art methods.
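The abstract does not spell out how the three streams are constructed. A common recipe in multi-stream skeleton GCNs derives bones as vector differences between connected joints and bone motion as frame-to-frame differences of those vectors, with per-stream classification scores fused at the end. The sketch below illustrates that recipe under assumed conventions; the `EDGES` list, the (C, T, V) tensor layout, and the fusion weights are illustrative assumptions, not the paper's actual 3s-AGCN code.

```python
import numpy as np

# Assumed layout: joints has shape (C, T, V) = (coordinates, frames, joints).
# EDGES is an illustrative (parent, child) list, not the real NTU RGB+D skeleton.
EDGES = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5)]

def bone_stream(joints: np.ndarray) -> np.ndarray:
    """Bone vectors: each child joint minus its parent joint."""
    bones = np.zeros_like(joints)
    for parent, child in EDGES:
        bones[:, :, child] = joints[:, :, child] - joints[:, :, parent]
    return bones

def motion_stream(x: np.ndarray) -> np.ndarray:
    """Temporal motion: difference between consecutive frames (last frame zero-padded)."""
    motion = np.zeros_like(x)
    motion[:, :-1, :] = x[:, 1:, :] - x[:, :-1, :]
    return motion

def fuse_scores(s_joint, s_bone, s_motion, w=(1.0, 1.0, 1.0)):
    """Late fusion: weighted sum of per-stream class scores (weights assumed)."""
    return w[0] * s_joint + w[1] * s_bone + w[2] * s_motion
```

Each stream would then feed its own graph-convolutional backbone over the skeleton graph, and the fused scores give the final prediction; the bone-motion stream is the one the abstract singles out as usually ignored.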


Published in

EITCE '20: Proceedings of the 2020 4th International Conference on Electronic Information Technology and Computer Engineering
November 2020, 1202 pages
ISBN: 978-1-4503-8781-1
DOI: 10.1145/3443467

Copyright © 2020 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States


      Qualifiers

      • research-article
      • Research
      • Refereed limited

      Acceptance Rates

EITCE '20 paper acceptance rate: 214 of 441 submissions, 49%. Overall acceptance rate: 508 of 972 submissions, 52%.
