DOI: 10.1145/3633624.3633635
Research article

Integrating Dual-Stream Cross Fusion and Ambiguous Exclude Contrastive Learning for Enhanced Human Action Recognition

Published: 29 January 2024

ABSTRACT

In semi-supervised human action recognition, effectively exploiting both labeled and unlabeled data remains a central challenge. To address it, we present DSCF-AEC, a framework that combines a Dual-Stream Cross Fusion (DSCF) network with an Ambiguous Exclude Contrastive Learning (AEC) module. The DSCF network uses ST-GCN as its encoder, independently encoding two augmented versions of the joint and bone streams, which are then cross-fused to obtain an enhanced representation. To further improve performance, we design the AEC module: it maintains a memory bank that separates reliable positive and negative samples and excludes ambiguous ones, so that contrastive learning trains the model only on meaningful, trustworthy samples. Extensive experiments on the NTU RGB+D and NW-UCLA datasets validate the effectiveness of our approach; the results indicate that the proposed method significantly outperforms existing methods.
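To make the dual-stream design concrete, here is a minimal PyTorch sketch of the cross-fusion step, not the authors' implementation. The abstract does not specify the fusion operator, so the concatenate-and-project layers below (`fuse_j`, `fuse_b`) are assumptions, and a small stand-in MLP replaces the actual ST-GCN encoder.

```python
import torch
import torch.nn as nn

class StandInEncoder(nn.Module):
    """Placeholder for the ST-GCN encoder used in the paper: flattens a
    skeleton clip of shape (N, C, T, V) and maps it to a feature vector."""
    def __init__(self, in_dim, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(in_dim, 256),
            nn.ReLU(), nn.Linear(256, feat_dim))

    def forward(self, x):
        return self.net(x)

class DualStreamCrossFusion(nn.Module):
    """Encodes augmented joint and bone streams independently, then
    cross-fuses them. The concatenate-and-project fusion here is an
    assumption; the abstract does not specify the operator."""
    def __init__(self, in_dim, feat_dim=128):
        super().__init__()
        self.joint_enc = StandInEncoder(in_dim, feat_dim)
        self.bone_enc = StandInEncoder(in_dim, feat_dim)
        self.fuse_j = nn.Linear(2 * feat_dim, feat_dim)
        self.fuse_b = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, joints, bones):
        zj = self.joint_enc(joints)   # joint-stream feature
        zb = self.bone_enc(bones)     # bone-stream feature
        # cross fusion: each stream's representation is refined with
        # information from the other stream
        fj = self.fuse_j(torch.cat([zj, zb], dim=1))
        fb = self.fuse_b(torch.cat([zb, zj], dim=1))
        return fj, fb

# toy usage: batch of 4 clips, 3 channels, 16 frames, 25 joints
N, C, T, V = 4, 3, 16, 25
model = DualStreamCrossFusion(in_dim=C * T * V)
joints = torch.randn(N, C, T, V)
bones = torch.randn(N, C, T, V)  # in practice, bones are joint differences
fj, fb = model(joints, bones)
print(fj.shape, fb.shape)         # torch.Size([4, 128]) for each stream
```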
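The AEC module's sample selection can be sketched in the same spirit. The loss below partitions memory-bank entries by cosine similarity to the query: entries above `pos_thresh` are treated as reliable positives, entries below `neg_thresh` as reliable negatives, and everything in between is excluded as ambiguous. The thresholds and this similarity-based selection rule are assumptions; the paper's actual criterion may differ.

```python
import torch
import torch.nn.functional as F

def aec_contrastive_loss(query, key, bank, tau=0.07,
                         pos_thresh=0.8, neg_thresh=0.2):
    """Ambiguous-exclude contrastive loss (sketch).

    query, key : (N, D) L2-normalised embeddings of two augmented views.
    bank       : (K, D) L2-normalised memory-bank embeddings.
    Bank entries whose similarity to the query lies strictly between
    neg_thresh and pos_thresh are ambiguous and excluded (assumed rule).
    """
    sim = query @ bank.t()                        # (N, K) cosine similarities
    pos_mask = sim >= pos_thresh                  # reliable bank positives
    neg_mask = sim <= neg_thresh                  # reliable bank negatives
    keep = pos_mask | neg_mask                    # ambiguous entries dropped

    logits_key = (query * key).sum(1, keepdim=True) / tau        # paired view
    logits_bank = (sim / tau).masked_fill(~keep, float('-inf'))  # exclude

    log_prob = F.log_softmax(torch.cat([logits_key, logits_bank], 1), dim=1)
    # positives: the paired view plus the reliable bank positives
    pos_all = torch.cat([torch.ones_like(pos_mask[:, :1]), pos_mask], dim=1)
    loss = -(log_prob.masked_fill(~pos_all, 0.0)).sum(1) / pos_all.sum(1)
    return loss.mean()

# toy usage with random normalised embeddings
N, K, D = 8, 64, 128
q = F.normalize(torch.randn(N, D), dim=1)
k = F.normalize(torch.randn(N, D), dim=1)
bank = F.normalize(torch.randn(K, D), dim=1)
print(aec_contrastive_loss(q, k, bank).item())
```

Excluding the ambiguous middle band from the softmax denominator is what keeps the contrastive gradient driven only by samples whose positive or negative status is trustworthy, which matches the abstract's stated goal.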


Published in
BDSIC '23: Proceedings of the 2023 5th International Conference on Big-data Service and Intelligent Computation
October 2023, 101 pages
ISBN: 9798400708923
DOI: 10.1145/3633624
Copyright © 2023 ACM

Publisher: Association for Computing Machinery, New York, NY, United States

