Abstract
Humans learn new information incrementally while consolidating old information at every stage of a lifelong learning process. While this comes naturally to humans, the same task has proven challenging for learning machines: deep neural networks remain prone to catastrophic forgetting of previously learnt information when presented with data from a sufficiently novel distribution. To address this problem, we present NeoNet, a simple yet effective method motivated by recent findings in computational neuroscience on the process of long-term memory consolidation in humans. The network relies on a pseudorehearsal strategy to model the behaviour of the brain regions associated with long-term memory consolidation. Experiments on benchmark classification tasks achieve state-of-the-art results, demonstrating that the proposed method can incorporate novel information without storing exemplars of past classes.
A. Patra and T. Chakraborti contributed equally to this work.
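For readers unfamiliar with the mechanism named in the abstract, the sketch below illustrates generic pseudorehearsal in PyTorch: pseudo-items are generated by recording a frozen copy of the old network's responses to random probe inputs, and these are replayed alongside new-task data so that no exemplars of past classes need to be stored. This is a minimal sketch of the general technique, not the paper's NeoNet architecture; the function names, the MSE replay loss, and the `alpha` weighting are all illustrative assumptions.

```python
# Minimal sketch of generic pseudorehearsal (illustrative, not the
# authors' NeoNet): replay the old network's responses to random probe
# inputs instead of stored exemplars of past classes.
import copy

import torch
import torch.nn.functional as F


def make_pseudo_items(old_model, n_items, input_dim):
    """Probe a frozen copy of the old network with random inputs
    and record its outputs as pseudo-items."""
    frozen = copy.deepcopy(old_model).eval()
    with torch.no_grad():
        x_pseudo = torch.rand(n_items, input_dim)  # random probe inputs
        y_pseudo = frozen(x_pseudo)                # old network's responses
    return x_pseudo, y_pseudo


def rehearsal_step(model, optimizer, x_new, y_new, x_pseudo, y_pseudo,
                   alpha=0.5):
    """One update that interleaves new data with replayed pseudo-items."""
    optimizer.zero_grad()
    # Learn the new classes from real data.
    loss_new = F.cross_entropy(model(x_new), y_new)
    # Stay close to the old network's recorded outputs on the pseudo-items,
    # which discourages catastrophic forgetting of earlier classes.
    loss_old = F.mse_loss(model(x_pseudo), y_pseudo)
    loss = loss_new + alpha * loss_old
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the paper's brain-inspired setting, the rehearsed items would come from a learned consolidation pathway rather than from uniform noise; the uniform probes above are used purely for concreteness.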
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Patra, A., Chakraborti, T. (2021). Learn More, Forget Less: Cues from Human Brain. In: Ishikawa, H., Liu, C.L., Pajdla, T., Shi, J. (eds.) Computer Vision – ACCV 2020. Lecture Notes in Computer Science, vol. 12625. Springer, Cham. https://doi.org/10.1007/978-3-030-69538-5_12
Print ISBN: 978-3-030-69537-8
Online ISBN: 978-3-030-69538-5