Abstract:
Class incremental learning (CIL) aims to address catastrophic forgetting while continually learning new tasks. Recently, prompt tuning techniques based on vision transformers (ViT) have achieved promising results in rehearsal-free CIL. To alleviate forgetting, representative methods use a query-key mechanism to generate prompts and attach them to the frozen pre-trained ViT. However, these methods neglect the effect of the query, and the learning capacity of the model is limited by unsuitable prompts. In this paper, we propose a new approach called Prompting to Prompt (P2P). Instead of using a task-independent query function, we learn sample queries together with the prompts, adapting to the shifting data distribution in CIL. P2P separates classes across tasks more effectively because the generated prompts yield more discriminative sample features. Moreover, the whole training process is end-to-end, and the queries are determined by the prompts themselves, avoiding additional parameters. P2P improves the plasticity of the model while maintaining strong resistance to forgetting over long task sequences. Experiments show that our approach achieves state-of-the-art results with even fewer parameters.
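For context, the query-key mechanism the abstract contrasts against can be illustrated with a minimal PyTorch sketch. Everything below is an illustrative assumption rather than the paper's code: the class name (PromptPool), the shapes, the hyperparameters (pool_size, prompt_len, top_k), and the cosine-similarity lookup are chosen to reflect representative rehearsal-free prompt-tuning baselines.

```python
# Minimal sketch of a query-key prompt pool, as used by representative
# rehearsal-free CIL baselines. Illustrative only; not the P2P implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptPool(nn.Module):
    def __init__(self, pool_size=10, prompt_len=5, dim=768, top_k=5):
        super().__init__()
        # Learnable keys, one per prompt in the pool.
        self.keys = nn.Parameter(torch.randn(pool_size, dim))
        # Learnable prompt tokens to prepend to the ViT input sequence.
        self.prompts = nn.Parameter(torch.randn(pool_size, prompt_len, dim))
        self.top_k = top_k

    def forward(self, query):
        # query: (B, dim). In the baselines the abstract describes, this comes
        # from a frozen, task-independent encoder pass, so it receives no
        # gradient; the abstract argues this limits the model's plasticity.
        sim = F.cosine_similarity(query.unsqueeze(1),
                                  self.keys.unsqueeze(0), dim=-1)   # (B, pool_size)
        idx = sim.topk(self.top_k, dim=1).indices                   # (B, top_k)
        selected = self.prompts[idx]                                # (B, top_k, prompt_len, dim)
        return selected.flatten(1, 2)                               # tokens for the frozen ViT
```

In such baselines the query is typically the [CLS] feature of the frozen pre-trained ViT. P2P's stated departure is to learn sample queries jointly with the prompts end-to-end, with the queries decided by the prompts themselves; how that coupling is parameterized is not specified in the abstract, so the sketch above only shows the baseline being improved upon.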
Published in: ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Date of Conference: 14-19 April 2024
Date Added to IEEE Xplore: 18 March 2024