Abstract
Lifelong machine learning, or continual learning, models learn incrementally by accumulating knowledge across a sequence of tasks, which lets them learn better and faster on each new task. They are used in intelligent systems that must interact with humans or other dynamic environments. Dynamically expandable networks are continual deep learning models whose architecture grows with the sequence of tasks; by retaining knowledge from previous tasks, they achieve high performance on newer ones. Existing models use Minkowski distance measures to separate the nodes of the current network from those of earlier tasks. These measures behave poorly on high-dimensional sparse vectors, so the separation is weak, performance is sub-optimal, and catastrophic forgetting increases. We propose ang-DEN, a dynamically expandable continual learning architecture that uses an angular distance metric instead. The improved node separation mitigates semantic drift, achieving 97% average accuracy across all tasks on MNIST variant datasets, an improvement of 1.3%.
K. Saadi—Independent Researcher.
This work was supported by Fatima Al-Fihri predoctoral fellowship program (https://fatimafellowship.com/).
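To make the contrast between the two distance measures in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it compares a Minkowski distance (p = 2, i.e., Euclidean) with an angular distance on hypothetical high-dimensional sparse node weight vectors, using only NumPy.

```python
import numpy as np

def minkowski_distance(u, v, p=2):
    """Minkowski distance of order p (p=2 is Euclidean, p=1 is Manhattan)."""
    return np.sum(np.abs(u - v) ** p) ** (1.0 / p)

def angular_distance(u, v, eps=1e-12):
    """Angular distance: arccos of cosine similarity, normalized to [0, 1]."""
    cos_sim = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps)
    return np.arccos(np.clip(cos_sim, -1.0, 1.0)) / np.pi

# Hypothetical sparse node weight vectors (~5% nonzero entries),
# standing in for node representations in an expandable network.
rng = np.random.default_rng(0)
u = rng.random(1000) * (rng.random(1000) < 0.05)
v = rng.random(1000) * (rng.random(1000) < 0.05)

print(minkowski_distance(u, v))  # sensitive to vector norms
print(angular_distance(u, v))    # depends only on vector direction
```

Because the angular distance depends only on direction, it is unaffected by the norm concentration that degrades Minkowski metrics on high-dimensional sparse vectors, which is the intuition behind using it to separate nodes.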
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Saadi, K., Taimoor Khan, M. (2022). Effective Prevention of Semantic Drift in Continual Deep Learning. In: Yin, H., Camacho, D., Tino, P. (eds) Intelligent Data Engineering and Automated Learning – IDEAL 2022. IDEAL 2022. Lecture Notes in Computer Science, vol 13756. Springer, Cham. https://doi.org/10.1007/978-3-031-21753-1_44
Print ISBN: 978-3-031-21752-4
Online ISBN: 978-3-031-21753-1