
TexCIL: Text-Guided Continual Learning of Disease with Vision-Language Model


Abstract:

Current intelligent diagnostic systems often catastrophically forget old knowledge when learning new diseases from only the new diseases' training data. Inspired by how humans learn visual classes with the effective help of language, we propose a continual learning framework based on a pre-trained vision-language model (VLM) that stores no images of previously learned diseases. In this framework, textual prior knowledge of each new disease is obtained with the frozen VLM's text encoder and then used to guide visual learning of the new disease. The framework innovatively uses the textual prior knowledge of all previously learned diseases as out-of-distribution (OOD) information to help differentiate the diseases currently being learned from the others. Extensive empirical evaluations on both medical and natural image datasets confirm the superiority of the proposed method over existing state-of-the-art methods in continual learning of new visual classes. The source code is available at https://openi.pcl.ac.cn/OpenMedIA/TexCIL.
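
The sketch below illustrates the mechanism the abstract describes, not the authors' implementation: a frozen CLIP text encoder supplies a textual prior for each disease name, and the priors of previously learned diseases serve as out-of-distribution (OOD) anchors when scoring the diseases currently being learned. The checkpoint, prompt template, and class names are illustrative assumptions.

```python
# Minimal sketch, assuming a Hugging Face CLIP checkpoint; not the TexCIL code.
import torch
from transformers import CLIPModel, CLIPTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def text_priors(class_names):
    """Encode prompts for each disease name with the frozen text encoder."""
    prompts = [f"a photo of a {name}" for name in class_names]  # assumed template
    tokens = tokenizer(prompts, padding=True, return_tensors="pt").to(device)
    feats = model.get_text_features(**tokens)
    return torch.nn.functional.normalize(feats, dim=-1)

# Hypothetical task split: previously learned vs. currently learned diseases.
old_classes = ["pneumonia", "tuberculosis"]
new_classes = ["covid-19", "lung nodule"]

old_priors = text_priors(old_classes)  # OOD anchors from earlier tasks
new_priors = text_priors(new_classes)  # textual guidance for the current task

@torch.no_grad()
def classify(pixel_values):
    """Score images against current classes, with old-class priors as OOD logits."""
    img = torch.nn.functional.normalize(
        model.get_image_features(pixel_values=pixel_values.to(device)), dim=-1
    )
    new_logits = img @ new_priors.T  # similarity to diseases being learned now
    ood_logits = img @ old_priors.T  # similarity to previously learned diseases
    # An image is assigned a new class only if it matches a new-class prior more
    # strongly than every old-class (OOD) prior.
    return new_logits, ood_logits
```
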
Date of Conference: 03-06 December 2024
Date Added to IEEE Xplore: 10 January 2025

Conference Location: Lisbon, Portugal

