
Dual-Evidential Learning for Weakly-supervised Temporal Action Localization

  • Conference paper

Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13664)

Abstract

Weakly-supervised temporal action localization (WS-TAL) aims to localize action instances and recognize their categories with only video-level labels. Despite great progress, existing methods suffer from severe action-background ambiguity, which mainly comes from background noise introduced by aggregation operations and from large intra-action variations caused by the task gap between classification and localization. To address this issue, we propose a generalized evidential deep learning (EDL) framework for WS-TAL, called Dual-Evidential Learning for Uncertainty modeling (DELU), which extends the traditional EDL paradigm to the weakly-supervised multi-label classification setting. Specifically, to adaptively exclude undesirable background snippets, we use the video-level uncertainty to measure how much background noise interferes with the video-level prediction. The snippet-level uncertainty is then derived for progressive learning, which gradually focuses on entire action instances in an "easy-to-hard" manner. Extensive experiments show that DELU achieves state-of-the-art performance on the THUMOS14 and ActivityNet1.2 benchmarks. Our code is available at github.com/MengyuanChen21/ECCV2022-DELU.
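The uncertainty modeling the abstract builds on follows the standard evidential deep learning recipe (Sensoy et al., 2018): a network outputs non-negative evidence per class, which parameterizes a Dirichlet distribution, and the "vacuity" uncertainty falls as total evidence grows. The following is a minimal sketch of that standard computation for illustration only; it is not the authors' DELU implementation, and the ReLU evidence function and toy logits are assumptions.

```python
import numpy as np

def evidential_uncertainty(logits):
    """Standard EDL head: logits -> (expected probabilities, uncertainty).

    evidence e_k = max(logit_k, 0); Dirichlet parameters alpha_k = e_k + 1;
    strength S = sum_k alpha_k; uncertainty u = K / S, so u -> 0 as
    evidence accumulates and u = 1 when there is no evidence at all.
    """
    evidence = np.maximum(logits, 0.0)        # non-negative evidence (ReLU assumed)
    alpha = evidence + 1.0                    # Dirichlet parameters
    S = alpha.sum(axis=-1, keepdims=True)     # total Dirichlet strength
    probs = alpha / S                         # expected class probabilities
    K = alpha.shape[-1]                       # number of classes
    u = K / S.squeeze(-1)                     # vacuity uncertainty in (0, 1]
    return probs, u

# Two snippet-level logit vectors for a 3-class toy example:
probs, u = evidential_uncertainty(np.array([[9.0, 0.0, 0.0],    # confident snippet
                                            [0.2, 0.1, 0.0]]))  # ambiguous snippet
# The confident snippet receives low uncertainty (u = 0.25),
# the near-evidence-free one high uncertainty (u ≈ 0.91).
```

In DELU's setting, a video-level variant of this uncertainty is used to down-weight background noise in the aggregated prediction, while snippet-level values order snippets from easy to hard during progressive learning.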


Notes

  1. In WS-TAL, multiple types of action may appear simultaneously in a video.


Acknowledgements

This work was supported by the National Key Research & Development Plan of China under Grant 2020AAA0106200, in part by the National Natural Science Foundation of China under Grants 62036012, U21B2044, 61721004, 62102415, 62072286, 61720106006, 61832002, 62072455, 62002355, and U1836220, in part by the Beijing Natural Science Foundation (L201001), in part by the Open Research Projects of Zhejiang Lab (No. 2022RC0AB02), and in part by the CCF-Hikvision Open Fund (20210004).

Author information

Correspondence to Mengyuan Chen.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Chen, M., Gao, J., Yang, S., Xu, C. (2022). Dual-Evidential Learning for Weakly-supervised Temporal Action Localization. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13664. Springer, Cham. https://doi.org/10.1007/978-3-031-19772-7_12


  • DOI: https://doi.org/10.1007/978-3-031-19772-7_12


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19771-0

  • Online ISBN: 978-3-031-19772-7

