Impact Statement:
Conventional learning systems typically train on a fixed number of classes in a closed world using precollected datasets. In real-world applications, however, new classes may emerge and need to be learned incrementally. Catastrophic forgetting is one of the issues that can arise from the shift in representations between old and new tasks. Contrastive learning has been suggested as a remedy because it learns more generalized features. However, in the continual learning (CL) setting, the imbalance between new- and old-class samples limits the gains from applying contrastive learning directly. To address this issue, we propose a sampling method based on determinantal point processes (DPPs) that efficiently selects diverse, high-quality samples. Additionally, we propose a loss-function weighting strategy for new and old categories, which can significantly enhance the performance of contrastive learning i...
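To make the sampling step concrete, the following is a minimal sketch of greedy MAP inference for a DPP over candidate exemplars, assuming the kernel decomposes into a per-sample quality score times the cosine similarity of features. The function name, the quality/similarity decomposition, and the stopping threshold are illustrative assumptions, not the paper's exact implementation.

import numpy as np

def greedy_dpp_select(features, qualities, k):
    """Greedy MAP inference for a DPP: pick up to k diverse,
    high-quality exemplars.

    Kernel: L[i, j] = q_i * <phi_i, phi_j> * q_j, where phi are the
    L2-normalized feature vectors and q is a per-sample quality score
    (an illustrative decomposition, not necessarily the paper's kernel).
    """
    phi = features / np.linalg.norm(features, axis=1, keepdims=True)
    L = qualities[:, None] * (phi @ phi.T) * qualities[None, :]
    n = L.shape[0]
    selected = []
    c = np.zeros((k, n))        # incremental Cholesky-style rows
    d2 = np.diag(L).copy()      # marginal log-det gain of each item
    for t in range(k):
        j = int(np.argmax(d2))
        if d2[j] <= 1e-12:      # no diversity gain left; stop early
            break
        selected.append(j)
        # rank-one update of the remaining marginal gains
        e = (L[j] - c[:t].T @ c[:t, j]) / np.sqrt(d2[j])
        c[t] = e
        d2 = d2 - e ** 2
        d2[j] = -np.inf         # never re-pick the same sample
    return selected

# usage: pick 20 exemplars of an old class from 500 candidate embeddings
# feats = np.random.randn(500, 128); quals = np.ones(500)
# idx = greedy_dpp_select(feats, quals, 20)

Each iteration of the greedy update costs O(n·t), so selecting k exemplars takes O(n·k^2) after the kernel is built, which is why this approximation is commonly preferred over exact DPP sampling for exemplar selection.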
Abstract:
Despite their impressive performance, deep learning models suffer from catastrophic forgetting, a significant decline in overall performance when trained with new classes added incrementally. The primary cause of this phenomenon is overlap or confusion between the feature-space representations of old and new classes. In this study, we examine this issue and propose a model that mitigates the problem by learning more transferable features. We employ contrastive learning, a recent breakthrough in deep learning that learns visual representations better than task-specific supervision. Specifically, we introduce an exemplar-based continual learning (CL) method that uses contrastive learning to learn a task-agnostic, continually improving feature representation. However, the class imbalance between old and new samples in CL can degrade the final learned features. To address this issue, we propose two approaches. First, we use a novel exemplar-base...
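As one concrete reading of the loss-weighting idea, below is a minimal sketch of a supervised contrastive loss with per-class anchor weights, written in PyTorch. The weighting scheme (up-weighting anchors from under-represented old classes) and all names here are illustrative assumptions rather than the paper's exact formulation.

import torch
import torch.nn.functional as F

def weighted_supcon_loss(embeddings, labels, class_weights, temperature=0.1):
    """Supervised contrastive loss with per-class anchor weights.

    class_weights maps class id -> weight; giving larger weights to
    under-represented old classes is one illustrative way to keep
    abundant new-class pairs from dominating the loss.
    """
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.T / temperature
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))   # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    pos_counts = pos_mask.sum(1).clamp(min=1)          # avoid divide-by-zero
    # zero out non-positive entries before summing (also removes the -inf diagonal)
    loss_per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_counts
    w = class_weights[labels]                          # weight by anchor's class
    return (w * loss_per_anchor).sum() / w.sum()

# usage: weight old classes 2x in a batch mixing old exemplars and new data
# z = torch.randn(64, 128); y = torch.randint(0, 10, (64,))
# cw = torch.where(torch.arange(10) < 5, 2.0, 1.0)
# loss = weighted_supcon_loss(z, y, cw)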
Published in: IEEE Transactions on Artificial Intelligence (Volume: 5, Issue: 7, July 2024)