Rotation Augmented Distillation for Exemplar-Free Class Incremental Learning with Detailed Analysis

  • Conference paper
Pattern Recognition and Computer Vision (PRCV 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14428)

Abstract

Class incremental learning (CIL) aims to recognize both old and new classes across a sequence of incremental tasks. Deep neural networks in CIL suffer from catastrophic forgetting, and some approaches alleviate it by saving exemplars from previous tasks, known as the exemplar-based setting. In contrast, this paper focuses on the exemplar-free setting, in which no samples of old classes are preserved. Balancing plasticity and stability in deep feature learning is more challenging when the only supervision comes from new classes. Most existing exemplar-free CIL methods report only overall performance and lack further analysis. In this work, different methods are examined in greater detail with complementary metrics. Moreover, we propose a simple CIL method, Rotation Augmented Distillation (RAD), which achieves one of the top-tier performances under the exemplar-free setting. Detailed analysis shows that RAD benefits from a superior balance between plasticity and stability. Finally, more challenging exemplar-free settings with fewer initial classes are used for further demonstration and comparison among state-of-the-art methods.
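
The abstract describes RAD only at a high level: rotation-based augmentation combined with distillation under the exemplar-free setting. As a rough illustration of that idea, and not the authors' released implementation, the following minimal PyTorch sketch rotates each new-task image by 0/90/180/270 degrees with a 4x-expanded label space (in the spirit of self-supervised label augmentation) and adds a feature-distillation term against the frozen previous-task model. All names here (`model.backbone`, `model.head`, `old_model`, `lambda_kd`) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def rad_step_loss(model, old_model, x, y, lambda_kd=1.0):
    """Training loss on one new-task batch (x: NCHW images, y: class ids).

    Assumed interface: model.backbone(x) -> features, model.head(features)
    -> logits over the 4x rotation-expanded label space. old_model is the
    frozen copy saved after the previous task (None on the first task).
    """
    # Rotation augmentation: four rotated copies of every image; each
    # (class, rotation) pair is treated as its own classification target.
    xs = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)], dim=0)
    ys = torch.cat([4 * y + k for k in range(4)], dim=0)

    feats = model.backbone(xs)
    loss = F.cross_entropy(model.head(feats), ys)

    if old_model is not None:
        # Distillation: pull current features toward the frozen old model's
        # features so old-class knowledge is retained without stored exemplars.
        with torch.no_grad():
            old_feats = old_model.backbone(xs)
        loss = loss + lambda_kd * F.mse_loss(feats, old_feats)
    return loss
```

At inference one would typically score only the unrotated input and collapse the expanded label space back to the original classes; the exact evaluation protocol and loss weighting are specified in the paper itself.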

Acknowledgement

This research is supported by the National Science Foundation for Young Scientists of China (No. 62106289).

Author information

Corresponding author

Correspondence to Xiaobin Chang.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Chen, X., Chang, X. (2024). Rotation Augmented Distillation for Exemplar-Free Class Incremental Learning with Detailed Analysis. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14428. Springer, Singapore. https://doi.org/10.1007/978-981-99-8462-6_3

  • DOI: https://doi.org/10.1007/978-981-99-8462-6_3

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8461-9

  • Online ISBN: 978-981-99-8462-6

  • eBook Packages: Computer Science, Computer Science (R0)
