Abstract
Class incremental learning (CIL) aims to recognize both old and new classes across a sequence of incremental tasks. Deep neural networks in CIL suffer from catastrophic forgetting, and many approaches alleviate this by saving exemplars from previous tasks, known as the exemplar-based setting. In contrast, this paper focuses on the exemplar-free setting, in which no samples of old classes are preserved. Balancing plasticity and stability in deep feature learning with supervision from new classes only is considerably more challenging. Most existing exemplar-free CIL methods report only overall performance and lack further analysis. In this work, we examine different methods in greater detail with complementary metrics. Moreover, we propose a simple CIL method, Rotation Augmented Distillation (RAD), which achieves one of the top performances under the exemplar-free setting. Detailed analysis shows that RAD benefits from a superior balance between plasticity and stability. Finally, we evaluate state-of-the-art methods under more challenging exemplar-free settings with fewer initial classes for further demonstration and comparison.
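To make the idea behind RAD concrete, below is a minimal PyTorch sketch of a rotation-augmented distillation objective. It combines the standard self-supervised rotation label augmentation (each image is rotated by 0/90/180/270 degrees and the label space is expanded fourfold, as in Lee et al., ICML 2020) with logit distillation against the frozen model from the previous task. The function names (`rotate_batch`, `rad_step`), the joint-label encoding, and the exact loss composition are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def rotate_batch(x):
    # Rotate every image by 0/90/180/270 degrees, quadrupling the batch.
    # x: (N, C, H, W). Returns (4N, C, H, W) and a rotation id in {0,1,2,3}.
    views = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    rot_ids = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
    return torch.cat(views, dim=0), rot_ids


def rad_step(model, old_model, x, y, n_old, T=2.0, alpha=1.0):
    # Hypothetical RAD-style objective: joint (class, rotation) classification
    # on rotation-augmented inputs plus distillation from the frozen old model.
    x_aug, rot = rotate_batch(x)
    # Joint labels: class c rotated by r maps to target 4 * c + r,
    # so the classifier head has 4 * num_classes outputs.
    y_joint = 4 * y.repeat(4) + rot
    logits = model(x_aug)
    loss_ce = F.cross_entropy(logits, y_joint)

    # Distillation on the old-class logits: the new head extends the old one,
    # so its first 4 * n_old outputs are comparable to the old model's.
    with torch.no_grad():
        old_logits = old_model(x_aug)[:, : 4 * n_old]
    loss_kd = F.kl_div(
        F.log_softmax(logits[:, : 4 * n_old] / T, dim=1),
        F.softmax(old_logits / T, dim=1),
        reduction="batchmean",
    ) * (T ** 2)
    return loss_ce + alpha * loss_kd
```

In this sketch, `old_model` would be the frozen snapshot taken after the previous task, and at test time only the rotation-0 logits of the real classes would be used for prediction, following the usual label-augmentation scheme.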
Acknowledgement
This research is supported by the National Science Foundation for Young Scientists of China (No. 62106289).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Chen, X., Chang, X. (2024). Rotation Augmented Distillation for Exemplar-Free Class Incremental Learning with Detailed Analysis. In: Liu, Q., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol. 14428. Springer, Singapore. https://doi.org/10.1007/978-981-99-8462-6_3