Abstract
Continual learning is a learning setup in which data are introduced sequentially and a model continually learns new tasks. However, the model forgets previously learned knowledge as it learns new classes. One common remedy is to keep a small number of previous samples, but this introduces other problems such as overfitting and class imbalance. In this paper, we propose a method that retrains a network with representations generated from an estimated multivariate Gaussian distribution. The representations are feature vectors produced by a CNN trained with gradient regularization to prevent distribution shift, so that the stored means and covariances can generate realistic representations. The generated vectors cover every class seen so far, which helps prevent forgetting. Our 6-fold cross-validation experiments show that the proposed method outperforms existing continual learning methods by 1.14%p on CIFAR-10 and 4.60%p on CIFAR-100. Moreover, we visualize the generated vectors with t-SNE to confirm that a multivariate Gaussian mixture is a valid estimate of the distribution of the data representations.
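The replay mechanism described above, estimating a class-wise multivariate Gaussian over CNN feature vectors and sampling synthetic representations of all classes seen so far, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the class name `GaussianFeatureMemory`, the ridge term on the covariance, and the usage outline are assumptions.

```python
# Minimal sketch: per-class Gaussian statistics over CNN features, sampled back
# as synthetic representations for replay. Assumed helper, not the paper's code.
import numpy as np

class GaussianFeatureMemory:
    """Stores a mean and covariance per class and samples synthetic features."""

    def __init__(self):
        self.stats = {}  # class_id -> (mean, covariance)

    def update(self, features, labels):
        # features: (N, D) array of CNN representations, labels: (N,) class ids
        for c in np.unique(labels):
            feats_c = features[labels == c]
            mean = feats_c.mean(axis=0)
            # small ridge keeps the covariance positive definite (assumption)
            cov = np.cov(feats_c, rowvar=False) + 1e-4 * np.eye(feats_c.shape[1])
            self.stats[int(c)] = (mean, cov)

    def sample(self, n_per_class, rng=None):
        # Draw synthetic representations for every class seen so far.
        rng = rng if rng is not None else np.random.default_rng()
        xs, ys = [], []
        for c, (mean, cov) in self.stats.items():
            xs.append(rng.multivariate_normal(mean, cov, size=n_per_class))
            ys.append(np.full(n_per_class, c))
        return np.concatenate(xs), np.concatenate(ys)

# Usage sketch: after finishing a task, record class statistics from the feature
# extractor; when learning the next task, mix sampled old-class features with the
# new task's real features to retrain the classifier head.
```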
Acknowledgment
This work was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2020-0-01361, Artificial Intelligence Graduate School Program (Yonsei University); No. 2022-0-00113, Developing a Sustainable Collaborative Multi-modal Lifelong Learning Framework).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Kim, TH., Moon, HJ., Cho, SB. (2022). Gradient Regularization with Multivariate Distribution of Previous Knowledge for Continual Learning. In: Yin, H., Camacho, D., Tino, P. (eds) Intelligent Data Engineering and Automated Learning – IDEAL 2022. IDEAL 2022. Lecture Notes in Computer Science, vol 13756. Springer, Cham. https://doi.org/10.1007/978-3-031-21753-1_35
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-21752-4
Online ISBN: 978-3-031-21753-1
eBook Packages: Computer Science (R0)