
Non-Exemplar Class-Incremental Learning via Adaptive Old Class Reconstruction

Published: 27 October 2023
DOI: 10.1145/3581783.3611926

Abstract

In the Class-Incremental Learning (CIL) task, rehearsal-based approaches have recently received considerable attention. However, storing old class samples is often infeasible in application scenarios where device memory is insufficient or data privacy is a concern. It is therefore necessary to rethink Non-Exemplar Class-Incremental Learning (NECIL). In this paper, we propose a novel NECIL method named POLO with an adaPtive Old cLass recOnstruction mechanism, in which a density-based prototype reinforcement method (DBR), a topology-correction prototype adaptation method (TPA), and an adaptive prototype augmentation method (APA) are designed to reconstruct pseudo features of old classes in new incremental sessions. Specifically, the DBR focuses on low-density features to maintain the model's discriminative ability for old classes. The TPA then adapts old class prototypes to the new feature space as incremental learning proceeds. Finally, the APA further aligns the pseudo feature spaces of old classes with the new feature space. Experimental evaluations on four benchmark datasets demonstrate the effectiveness of our proposed method over state-of-the-art NECIL methods.
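
As a rough illustration of the prototype-based recipe the abstract refers to (class prototypes kept in place of exemplars, prototypes adapted to the drifting feature space, and pseudo old-class features sampled around them), the minimal Python sketch below uses generic stand-ins. Everything here is an assumption for illustration only: the function names (class_prototypes, adapt_prototypes, sample_pseudo_features), the mean-drift shift, and the Gaussian-noise augmentation are common NECIL heuristics, not the paper's DBR, TPA, or APA implementations.

```python
# Hypothetical sketch (not the authors' released code): keep one prototype per old
# class instead of stored exemplars, adapt the prototypes when the backbone drifts,
# and reconstruct pseudo old-class features for rehearsal in new sessions.
import numpy as np

rng = np.random.default_rng(0)

def class_prototypes(features, labels):
    """Store a class-mean feature vector and a scalar spread per class (no exemplars kept)."""
    protos, radii = {}, {}
    for c in np.unique(labels):
        feats_c = features[labels == c]
        protos[int(c)] = feats_c.mean(axis=0)
        radii[int(c)] = float(feats_c.std(axis=0).mean())
    return protos, radii

def adapt_prototypes(protos, sess_feats_old_net, sess_feats_new_net):
    """Shift every stored prototype by the mean drift between the previous and the
    updated backbone, estimated on the current session's samples (a simple
    stand-in for prototype adaptation; not the paper's TPA)."""
    drift = (sess_feats_new_net - sess_feats_old_net).mean(axis=0)
    return {c: p + drift for c, p in protos.items()}

def sample_pseudo_features(protos, radii, n_per_class=64, rng=rng):
    """Reconstruct pseudo old-class features as Gaussian perturbations around each
    (adapted) prototype, to be mixed with real new-class features during training."""
    feats, labels = [], []
    for c, p in protos.items():
        feats.append(p + rng.normal(scale=radii[c], size=(n_per_class, p.shape[0])))
        labels.append(np.full(n_per_class, c))
    return np.concatenate(feats), np.concatenate(labels)

# Toy usage with random 512-d "features": two old classes, then a new session whose
# updated backbone shifts every feature by a constant offset.
old_feats = rng.normal(size=(200, 512))
old_labels = rng.integers(0, 2, size=200)
protos, radii = class_prototypes(old_feats, old_labels)

sess_feats_old_net = rng.normal(size=(100, 512))   # current-session samples, old backbone
sess_feats_new_net = sess_feats_old_net + 0.1      # same samples, updated backbone
protos = adapt_prototypes(protos, sess_feats_old_net, sess_feats_new_net)

pseudo_x, pseudo_y = sample_pseudo_features(protos, radii)
print(pseudo_x.shape, pseudo_y.shape)              # (128, 512) (128,)
```

In a full NECIL pipeline, such pseudo features would be mixed with real new-class features when training the classifier in each incremental session, so that no old images ever need to be stored.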




    Published In

    MM '23: Proceedings of the 31st ACM International Conference on Multimedia
    October 2023
    9913 pages
    ISBN: 9798400701085
    DOI: 10.1145/3581783

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. class-incremental learning
    2. old class reconstruction

    Qualifiers

    • Research-article

    Funding Sources

    • Key Research and Development Program of Shaanxi Province
    • Key Scientific Research Project of the Education Department of Shaanxi Province
    • National Natural Science Foundation of China
    • National Key Research and Development Project of China

    Conference

    MM '23: The 31st ACM International Conference on Multimedia
    October 29 - November 3, 2023
    Ottawa ON, Canada

    Acceptance Rates

    Overall Acceptance Rate 2,145 of 8,556 submissions, 25%


    Cited By

    • (2024) TMM-CLIP: Task-guided Multi-Modal Alignment for Rehearsal-Free Class Incremental Learning. In Proceedings of the 6th ACM International Conference on Multimedia in Asia, 1-7. https://doi.org/10.1145/3696409.3700182
    • (2024) Class-incremental Learning for Time Series: Benchmark and Evaluation. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 5613-5624. https://doi.org/10.1145/3637528.3671581
    • (2024) Beyond Prompt Learning: Continual Adapter for Efficient Rehearsal-Free Continual Learning. In Computer Vision – ECCV 2024, 89-106. https://doi.org/10.1007/978-3-031-73013-9_6
    • (2024) Non-exemplar Domain Incremental Learning via Cross-Domain Concept Integration. In Computer Vision – ECCV 2024, 144-162. https://doi.org/10.1007/978-3-031-72967-6_9
