
Online Continual Learning Benefits From Large Number of Task Splits


Impact Statement:
In the realm of online continual learning (OCL), particularly within the class-incremental learning paradigm, existing methodologies frequently exhibit a marked deterioration in performance as the number of task splits escalates. This degradation is pronounced enough that peer-reviewed methodologies predominantly report mean accuracy only for scenarios with no more than 20 task splits, beyond which performance declines to negligible levels. This limitation starkly contrasts with real-world applications, where continual learning models are expected to operate over extended periods and handle a significantly larger number of task splits. Addressing this discrepancy, our proposed method not only accommodates but thrives on increased task splits, demonstrating enhanced performance with 50 or even 100 splits. Contrary to the conventional trend, our findings reveal that a greater number of splits can lead to improved performance. This improvement is attributed to our innovative framework, which leverages kernel density estimation (KDE) f...

Abstract:

This work tackles the significant challenges inherent in online continual learning (OCL), a domain characterized by its handling of numerous tasks over extended periods. OCL is designed to adapt to evolving data distributions and previously unseen classes through a single-pass analysis of a data stream, mirroring the dynamic nature of real-world applications. Despite its promising potential, existing OCL methodologies often suffer from catastrophic forgetting (CF) when confronted with a large array of tasks, compounded by substantial computational demands that limit their practical utility. At the heart of our proposed solution is the adoption of a kernel density estimation (KDE) learning framework, aimed at resolving the task prediction (TP) dilemma and ensuring the separability of all tasks. This is achieved through the incorporation of a linear projection head and a probability density function (PDF) for each task, while a shared backbone is maintained across tasks to provide raw features...
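
The architecture described above (shared backbone, per-task linear projection head, per-task PDF estimated by KDE) can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the authors' implementation: a fixed random linear map stands in for the shared backbone, random projection heads stand in for the learned per-task heads, and SciPy's gaussian_kde stands in for the paper's density estimator; all names are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

D_IN, D_FEAT, D_PROJ = 32, 16, 4

# Shared "backbone": a fixed random linear map standing in for a
# frozen feature extractor shared across all tasks (assumption).
W_BACKBONE = rng.normal(size=(D_IN, D_FEAT)) / np.sqrt(D_IN)

def backbone(x: np.ndarray) -> np.ndarray:
    return x @ W_BACKBONE

# task_id -> (per-task linear projection head, per-task KDE over projections)
tasks: dict[int, tuple[np.ndarray, gaussian_kde]] = {}

def learn_task(task_id: int, X: np.ndarray) -> None:
    """Fit one task: project shared features with a task-specific head,
    then estimate that task's PDF with a Gaussian KDE."""
    # Hypothetical head: a random projection; the paper learns this head
    # so that tasks remain separable in projection space.
    W_head = rng.normal(size=(D_FEAT, D_PROJ)) / np.sqrt(D_FEAT)
    z = backbone(X) @ W_head
    tasks[task_id] = (W_head, gaussian_kde(z.T))  # scipy wants (dims, n)

def predict_task(x: np.ndarray) -> int:
    """Task prediction (TP): pick the task whose PDF assigns the
    highest log-density to the projected sample."""
    feats = backbone(x[None, :])
    return max(
        tasks,
        key=lambda t: tasks[t][1].logpdf((feats @ tasks[t][0]).T)[0],
    )

# Toy stream: two "tasks" drawn from shifted input distributions.
X1 = rng.normal(loc=0.0, size=(200, D_IN))
X2 = rng.normal(loc=3.0, size=(200, D_IN))
learn_task(0, X1)
learn_task(1, X2)
print(predict_task(X1[0]), predict_task(X2[0]))  # typically prints: 0 1
```

The design point the sketch illustrates is that task prediction reduces to comparing per-task densities: each new task split adds only one small head and one PDF while the backbone stays shared, so per-task state remains cheap even at 50 or 100 splits.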
Published in: IEEE Transactions on Artificial Intelligence (Volume: 5, Issue: 11, November 2024)
Page(s): 5746-5759
Date of Publication: 27 May 2024
Electronic ISSN: 2691-4581
