
Class-incremental learning via prototype similarity replay and similarity-adjusted regularization

Published in: Applied Intelligence

Abstract

The task of incremental learning is to enable machine learning models to continuously learn and adapt to new tasks and data in changing environments while retaining knowledge of prior tasks. Researchers have recently proposed a variety of incremental learning methods, some of which rely on stored exemplars or complex generative models to perform satisfactorily. However, existing approaches typically focus on mitigating catastrophic forgetting, with less emphasis on effectively applying old knowledge to facilitate the learning of new tasks. In this paper, we propose a non-exemplar-based incremental learning approach called Class-Incremental Learning via Prototype Similarity Replay and Similarity-adjusted Regularization (PSSR) to tackle catastrophic forgetting. The essence of PSSR is leveraging knowledge of prior tasks to facilitate the acquisition of new ones. PSSR memorizes a representative prototype for each old class and learns new classes based on the similarity between these prototypes and the new-class samples. The old-class prototypes reshape the feature-space distribution to enhance the model's learning of the new classes. Extensive experiments on three benchmark datasets demonstrate the superior incremental performance of PSSR, with classification accuracy improvements of 2.73%, 3.37%, and 4.21% over state-of-the-art methods. Code is available at https://github.com/FutureIAI/PSSR.
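The abstract outlines the general mechanism of prototype-based replay: store one prototype per old class, compare new samples to the stored prototypes, and replay the prototypes so the classifier keeps its old decision boundaries. As a rough illustration only — this is not the authors' PSSR implementation, and the function names, the Gaussian perturbation of prototypes, and all parameters below are assumptions — the idea can be sketched as:

```python
import numpy as np

def class_prototypes(features, labels):
    """Class-mean prototypes: one feature vector per class seen so far."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def similarity_to_prototypes(x, prototypes):
    """Cosine similarity between one sample's features and each stored prototype."""
    sims = {}
    for c, p in prototypes.items():
        sims[c] = float(x @ p / (np.linalg.norm(x) * np.linalg.norm(p) + 1e-12))
    return sims

def replay_batch(prototypes, per_class=4, noise_std=0.1, rng=None):
    """Pseudo-features for old classes: each prototype perturbed by Gaussian
    noise, to be mixed into new-task batches so the classifier is still
    trained against old-class representatives without storing raw exemplars."""
    if rng is None:
        rng = np.random.default_rng(0)
    feats, labs = [], []
    for c, p in prototypes.items():
        feats.append(p + noise_std * rng.standard_normal((per_class, p.size)))
        labs.append(np.full(per_class, c))
    return np.concatenate(feats), np.concatenate(labs)
```

In this sketch, the similarity scores would drive how strongly the new-class loss is regularized toward or away from each old prototype; the paper's actual similarity-adjusted regularization term is defined in the full text.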


Figures 1–6 and the algorithm listing are available in the full article.


Data availability

The data that support the findings of this study are openly available: the CIFAR-100 and Tiny-ImageNet datasets.

References

  1. Turkoglu B, Uymaz SA, Kaya E (2023) Chaos theory in metaheuristics. In: Comprehensive metaheuristics (pp 1–20). Academic, Cambridge

  2. Koçer HG, Türkoğlu B, Uymaz SA (2023) Chaotic golden ratio guided local search for big data optimization. Eng Sci Technol Int J 41:101388


  3. Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L (2009) Imagenet: A large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition. IEEE, pp 248–255

  4. Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. Adv Neural Inf Process Syst 25:1097–1105

  5. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770–778)

  6. Ali A, Zhu Y, Zakarya M (2022) Exploiting dynamic spatio-temporal graph convolutional neural networks for citywide traffic flows prediction. Neural Netw 145:233–247


  7. Zhou DW, Yang Y, Zhan DC (2021) Learning to classify with incremental new class. IEEE Trans Neural Netw Learn Syst 33(6):2429–2443


  8. Zhou DW, Ye HJ, Zhan DC (2021) Learning placeholders for open-set recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp 4401–4410)

  9. Wang L, Zhang X, Su H, Zhu J (2024) A comprehensive survey of continual learning: theory, method and application. IEEE Trans Pattern Anal Mach Intell. https://doi.org/10.1109/TPAMI.2024.3367329

  10. Zhou DW, Wang QW, Qi ZH, Ye HJ, Zhan DC, Liu Z (2023) Deep class-incremental learning: a survey. arXiv preprint arXiv:2302.03648

  11. Bang J, Kim H, Yoo Y, Ha JW, Choi J (2021) Rainbow memory: Continual learning with a memory of diverse samples. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 8218–8227)

  12. Sun W, Li Q, Zhang J, Wang D, Wang W, Geng YA (2023) Exemplar-free class incremental learning via discriminative and comparable parallel one-class classifiers. Pattern Recogn 140:109561


  13. Petit G, Popescu A, Schindler H, Picard D, Delezoide B (2023) Fetril: Feature translation for exemplar-free class-incremental learning. In: Proceedings of the IEEE/CVF winter conference on applications of computer vision (pp 3911–3920)

  14. Wang S, Li X, Sun J, Xu Z (2021) Training networks in null space of feature covariance for continual learning. In: Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition (pp 184–193)

  15. Zhou DW, Wang QW, Ye HJ, Zhan DC (2022) A model or 603 exemplars: towards memory-efficient class-incremental learning. arXiv preprint arXiv:2205.13218

  16. Wang Z, Zhang Z, Lee CY, Zhang H, Sun R, Ren X, Su G, Perot V, Dy J, Pfister T (2022) Learning to prompt for continual learning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp 139–149)

  17. Xie J, Yan S, He X (2022) General incremental learning with domain-aware categorical representations. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp 14351–14360)

  18. Lu Y, Wang M, Deng W (2022) Augmented geometric distillation for data-free incremental person reid. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp 7329–7338)

  19. Gao Q, Zhao C, Ghanem B, Zhang J (2022) R-dfcil: Relation-guided representation learning for data-free class incremental learning. In: European Conference on Computer Vision (pp 423–439). Cham: Springer Nature Switzerland

  20. Zhou DW, Wang FY, Ye HJ, Ma L, Pu S, Zhan DC (2022) Forward compatible few-shot class-incremental learning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp 9046–9056)

  21. Díaz-Rodríguez N, Lomonaco V, Filliat D, Maltoni D (2018) Don’t forget, there is more than forgetting: new metrics for Continual Learning. arXiv preprint arXiv:1810.13166

  22. Lopez-Paz D, Ranzato MA (2017) Gradient episodic memory for continual learning. Adv Neural Inf Process Syst 30:6470–6479

  23. Lei CH, Chen YH, Peng WH, Chiu WC (2020) Class-incremental learning with rectified feature-graph preservation. In: Proceedings of the Asian Conference on Computer Vision

  24. Shi Y, Zhou K, Liang J, Jiang Z, Feng J, Torr PH, … Tan VY (2022) Mimicking the oracle: an initial phase decorrelation approach for class incremental learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp 16722–16731

  25. Hersche M, Karunaratne G, Cherubini G, Benini L, Sebastian A, Rahimi A (2022) Constrained few-shot class-incremental learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp 9057–9067

  26. Zhu K, Zhai W, Cao Y, Luo J, Zha ZJ (2022) Self-sustaining representation expansion for non-exemplar class-incremental learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp 9296–9305

  27. Smith J, Hsu YC, Balloch J, Shen Y, Jin H, Kira Z (2021) Always be dreaming: a new approach for data-free class-incremental learning. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp 9374–9384

  28. Jin H, Kim E (2022) Helpful or harmful: inter-task association in continual learning. In: European Conference on Computer Vision. Springer Nature Switzerland, Cham, pp 519–535

  29. Zhu F, Cheng Z, Zhang XY, Liu CL (2021) Class-incremental learning via dual augmentation. Adv Neural Inf Process Syst 34:14306–14318


  30. Zhu F, Zhang XY, Wang C, Yin F, Liu CL (2021) Prototype augmentation and self-supervision for incremental learning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp 5871–5880)

  31. Shi Y, Shi D, Qiao Z, Wang Z, Zhang Y, Yang S, Qiu C (2023) Multi-granularity knowledge distillation and prototype consistency regularization for class-incremental learning. Neural Netw 164:617–630


  32. Zhang W, Gu X (2023) Few shot class incremental learning via efficient prototype replay and calibration. Entropy 25(5):776


  33. Mao K, Luo Y, Ren Y, Wang R (2023) Prototype representation expansion in incremental learning. Neural Process Lett 55(6):8401–8417


  34. Asadi N, Davari M, Mudur S, Aljundi R, Belilovsky E (2023) Prototype-sample relation distillation: towards replay-free continual learning. In: International Conference on Machine Learning. PMLR, pp 1093–1106

  35. Shen M, Chen D, Hu S, Xu G (2023) Class incremental learning of remote sensing images based on class similarity distillation. PeerJ Comput Sci 9:e1583


  36. Bansal G, Nushi B, Kamar E, Weld DS, Lasecki WS, Horvitz E (2019) Updates in human-ai teams: Understanding and addressing the performance/compatibility tradeoff. In: Proceedings of the AAAI Conference on Artificial Intelligence (vol 33, No 01, pp 2429–2437)

  37. Srivastava M, Nushi B, Kamar E, Shah S, Horvitz E (2020) An empirical analysis of backward compatibility in machine learning systems. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. pp 3272–3280

  38. Hou S, Pan X, Loy CC, Wang Z, Lin D (2019) Learning a unified classifier incrementally via rebalancing. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp 831–839)

  39. Richie R, Bhatia S (2021) Similarity judgment within and across categories: a comprehensive model comparison. Cogn Sci 45(8):e13030


  40. Krizhevsky A, Hinton G (2009) Learning multiple layers of features from tiny images

  41. Yao L, Miller J (2015) Tiny imagenet classification with convolutional neural networks. CS 231 N(5):82

  42. Cohen G, Afshar S, Tapson J, Van Schaik A (2017) EMNIST: Extending MNIST to handwritten letters. In: 2017 international joint conference on neural networks (IJCNN) (pp 2921–2926). IEEE

  43. Wah C, Branson S, Welinder P, Perona P, Belongie S (2011) The caltech-ucsd birds-200-2011 dataset

  44. Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747

  45. Buzzega P, Boschini M, Porrello A, Abati D, Calderara S (2020) Dark experience for general continual learning: a strong, simple baseline. Adv Neural Inf Process Syst 33:15920–15930


  46. Kingma DP, Ba J (2015) Adam: a method for stochastic optimization. In: 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015, Conference track proceedings

  47. Kirkpatrick J, Pascanu R, Rabinowitz N, Veness J, Desjardins G, Rusu AA, Milan K, Quan J, Ramalho T, Grabska-Barwinska A, Hadsell R (2017) Overcoming catastrophic forgetting in neural networks. Proc Natl Acad Sci 114(13):3521–3526


  48. Li Z, Hoiem D (2017) Learning without forgetting. IEEE Trans Pattern Anal Mach Intell 40(12):2935–2947


  49. Liu Y, Parisot S, Slabaugh G, Jia X, Leonardis A, Tuytelaars T (2020) More classifiers, less forgetting: a generic multi-classifier paradigm for incremental learning. In: Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVI 16. Springer International Publishing, pp 699–716

  50. Rebuffi SA, Kolesnikov A, Sperl G, Lampert CH (2017) icarl: Incremental classifier and representation learning. In: Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. pp 2001–2010

  51. Castro FM, Marín-Jiménez MJ, Guil N, Schmid C, Alahari K (2018) End-to-end incremental learning. In: Proceedings of the European Conference on Computer Vision (ECCV). pp 233–248


Acknowledgements

We extend our special thanks to Dr. Zhu, the author of [30], for generously sharing the experimental comparative data, which greatly enriched our study.

Funding

This paper was supported by the Sichuan International Hong Kong Macao Taiwan Science and Technology Innovation Cooperation Project [No. 2023YFH0029] and the Sichuan Province Science and Technology Support Project [No. 2022YFG0198].

Author information

Authors and Affiliations

Authors

Contributions

Runji Chen: Methodology, Writing - original draft preparation.

Guangzhu Chen: Conceptualization, Supervision, Project administration, Writing - review and editing, Funding acquisition.

Xiaojuan Liao: Supervision, Writing - review and editing, Project administration.

Wenjie Xiong: Experiments.

Corresponding authors

Correspondence to Guangzhu Chen or Xiaojuan Liao.

Ethics declarations

Ethical and informed consent for data used

This study involved no human participants other than the authors, and informed consent was obtained from all authors.

Competing interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Chen, R., Chen, G., Liao, X. et al. Class-incremental learning via prototype similarity replay and similarity-adjusted regularization. Appl Intell 54, 9971–9986 (2024). https://doi.org/10.1007/s10489-024-05695-5
