
Learning Complementary Instance Representation with Parallel Adaptive Graph-Based Network for Action Detection

  • Conference paper
MultiMedia Modeling (MMM 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14555)

Abstract

Temporal action detection (TAD) aims to locate action boundaries in untrimmed videos. Video sequences contain multiple actions of widely varying durations, which makes accurate boundary localization challenging, and many existing methods handle such multi-scale variation poorly in complex scenes. Moreover, local information is vital for sharp boundaries, yet the limited receptive field of conventional convolution restricts local modeling and lacks frame-level attention. To address these problems, we propose the Parallel Adaptive Graph-Based Network (PAGN), which builds a multi-branch parallel subnetwork that retains multiple temporal resolutions and enables information exchange between levels. The resulting features capture precise location information and rich semantics simultaneously, making the network more efficient and adaptable to changes in action scale. We further propose a novel Complementary Graph Module (CGM) that assigns differentiated attention to neighbor sets of different sizes around the current timestamp. Extensive experiments on the challenging ActivityNet-1.3 and THUMOS-14 datasets show that PAGN consistently achieves significantly better performance.

Author information

Corresponding author

Correspondence to Wenzhu Yang.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Jiao, Y., Yang, W., Xing, W. (2024). Learning Complementary Instance Representation with Parallel Adaptive Graph-Based Network for Action Detection. In: Rudinac, S., et al. MultiMedia Modeling. MMM 2024. Lecture Notes in Computer Science, vol 14555. Springer, Cham. https://doi.org/10.1007/978-3-031-53308-2_34

  • DOI: https://doi.org/10.1007/978-3-031-53308-2_34

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-53307-5

  • Online ISBN: 978-3-031-53308-2

  • eBook Packages: Computer Science, Computer Science (R0)
