Abstract:
Deep Neural Networks (DNNs) suffer from the long-known phenomenon of catastrophic forgetting when trained on a sequence of tasks without appropriate countermeasures. Overcoming this is of great interest as it would enable DNNs to accumulate knowledge over a potentially long sequence of tasks without forgetting. In this paper, we study the commonly used method of rehearsal for mitigating catastrophic forgetting and show that it can cause an unwanted distribution shift that negatively affects performance. Building on recently introduced Dirichlet Prior Networks (DPNs), we propose a novel method that incorporates prior knowledge of known distribution shifts into its predictions in order to reduce their negative influence. The proposed method is evaluated on commonly used benchmark datasets and compared to related methods.
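To make the rehearsal setting referenced above concrete, the following is a minimal, generic sketch of experience replay in continual learning, not the paper's proposed method. The names `Buffer`, `train_step`, and the buffer capacity are illustrative assumptions; the point is that only a small stored subset of past-task data is mixed into new-task batches, which is the kind of shift away from the original task distribution that the abstract alludes to.

```python
# Generic rehearsal (experience replay) sketch -- illustrative only,
# not the method proposed in this paper.
import random

class Buffer:
    """Reservoir-style memory holding a small subset of past-task samples."""
    def __init__(self, capacity=200):  # capacity is an assumed example value
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, sample):
        # Reservoir sampling keeps a uniform subset of all samples seen so far,
        # but the stored subset is far smaller than the original task data.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        elif random.random() < self.capacity / self.seen:
            self.data[random.randrange(self.capacity)] = sample

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def train_step(model, batch):
    # Placeholder for one optimisation step on `batch`.
    pass

def train_task(model, task_loader, buffer, replay_size=16):
    for batch in task_loader:
        # Mix current-task data with replayed samples from earlier tasks.
        mixed = list(batch) + buffer.sample(replay_size)
        train_step(model, mixed)
        for sample in batch:
            buffer.add(sample)
```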
Date of Conference: 18-23 July 2022
Date Added to IEEE Xplore: 30 September 2022