Abstract
This paper extends prior work on knowledge consolidation and the stability-plasticity problem in the context of a Lifelong Machine Learning (LML) system. A context-sensitive multiple task learning (csMTL) neural network is used as a consolidated store of domain knowledge. Prior work has demonstrated that a csMTL network, combined with task rehearsal, can retain previous task knowledge while consolidating a sequence of up to ten tasks from a domain. However, subsequent experimentation has shown that the method scales poorly as the learning sequence grows: prior task accuracy is lost, and the computational cost of rehearsing prior tasks with ever-larger training sets increases. A solution to both problems is presented that uses a sweep method of rehearsal, requiring only a small number of rehearsal examples (as few as one) per prior task per training iteration to maintain prior task accuracy.
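The abstract does not give implementation details, but the core idea can be sketched. Below is a minimal, hypothetical Python illustration of sweep rehearsal over a csMTL-style network: the `CsMTLNet` class, the `virtual_examples` and `consolidate` helpers, the network sizes, and the toy data are all assumptions made for illustration, not the authors' implementation. The key point it demonstrates is that each training iteration mixes the new task's examples with just one rehearsed (virtual) example per prior task, sweeping through each prior task's stored set over successive iterations.

```python
# A minimal sketch of sweep task rehearsal over a csMTL-style network.
# All names, sizes, and data here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class CsMTLNet:
    """One-hidden-layer net whose input is [task context ; primary features],
    with a single output, in the spirit of csMTL (hypothetical architecture)."""
    def __init__(self, n_context, n_features, n_hidden=10, lr=0.1):
        n_in = n_context + n_features
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.1, (n_hidden, 1))
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.W1)
        return sigmoid(self.h @ self.W2)

    def train_step(self, x, y):
        """One SGD step on squared error for a batch (x: [B, n_in], y: [B, 1])."""
        out = self.forward(x)
        d_out = (out - y) * out * (1 - out)            # error signal at output
        d_h = (d_out @ self.W2.T) * (1 - self.h ** 2)  # backprop through tanh
        self.W2 -= self.lr * (self.h.T @ d_out)
        self.W1 -= self.lr * (x.T @ d_h)

def virtual_examples(prev_net, inputs):
    """Task rehearsal: targets are the *previous* network's own outputs,
    so rehearsing them preserves prior functionality."""
    return inputs, prev_net.forward(inputs)

def consolidate(net, new_x, new_y, rehearsal_sets, n_iters=1000):
    """Train on the new task while sweeping ONE virtual example per prior
    task into each iteration's batch (the 'sweep' in sweep rehearsal)."""
    for it in range(n_iters):
        batch_x, batch_y = [new_x], [new_y]
        for rx, ry in rehearsal_sets:   # one example per prior task
            i = it % len(rx)            # sweep through that task's stored set
            batch_x.append(rx[i:i + 1])
            batch_y.append(ry[i:i + 1])
        net.train_step(np.vstack(batch_x), np.vstack(batch_y))

# Hypothetical usage: five prior tasks already consolidated, now add a sixth.
n_tasks, n_feat = 6, 8
net = CsMTLNet(n_context=n_tasks, n_features=n_feat)

def with_context(task, x):
    ctx = np.zeros((len(x), n_tasks))
    ctx[:, task] = 1.0
    return np.hstack([ctx, x])

rehearsal_sets = []
for t in range(5):  # virtual examples generated before new-task training
    stored = with_context(t, rng.normal(size=(20, n_feat)))
    rehearsal_sets.append(virtual_examples(net, stored))

new_x = with_context(5, rng.normal(size=(30, n_feat)))
new_y = (new_x[:, n_tasks] > 0).astype(float)[:, None]  # toy target
consolidate(net, new_x, new_y, rehearsal_sets)
```

Under these assumptions, the per-iteration rehearsal cost grows only linearly in the number of prior tasks (one example each) rather than with the size of their training sets, which is the scaling property the abstract claims for the sweep method.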
© 2015 Springer International Publishing Switzerland
Cite this paper
Silver, D.L., Mason, G., Eljabu, L. (2015). Consolidation Using Sweep Task Rehearsal: Overcoming the Stability-Plasticity Problem. In: Barbosa, D., Milios, E. (eds.) Advances in Artificial Intelligence. Canadian AI 2015. Lecture Notes in Computer Science, vol. 9091. Springer, Cham. https://doi.org/10.1007/978-3-319-18356-5_27