Abstract
A fundamental problem in life-long machine learning is how to consolidate the knowledge of a learned task into a long-term memory structure (domain knowledge) without loss of prior knowledge. Consolidated domain knowledge uses memory more efficiently and can be used for more efficient and effective transfer of knowledge when learning future tasks. Relevant background material on knowledge-based inductive learning and the transfer of task knowledge using multiple task learning (MTL) neural networks is reviewed. A theory of task knowledge consolidation is presented that uses a large MTL network as the long-term memory structure and task rehearsal to overcome the stability-plasticity problem and the loss of prior knowledge. The theory is tested on a synthetic domain of diverse tasks, and it is shown that, under the proper conditions, task knowledge can be sequentially consolidated within an MTL network without loss of prior knowledge. In fact, a steady increase in the accuracy of consolidated domain knowledge is observed.
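To make the rehearsal-based consolidation mechanism concrete, below is a minimal sketch in Python/PyTorch. It is not the authors' implementation: the names (MTLNet, consolidate), the single shared hidden layer, the use of uniform random inputs as virtual examples, and all hyperparameters are illustrative assumptions. The idea follows the abstract: a large MTL network acts as long-term memory, and before a new task is trained, random inputs are labelled with the current network's outputs for previously consolidated tasks; training on these virtual (rehearsal) examples alongside the new task's real examples preserves prior knowledge while the new task is integrated.

import torch
import torch.nn as nn

class MTLNet(nn.Module):
    """Hypothetical MTL long-term memory: shared hidden layer, one sigmoid output per task."""
    def __init__(self, n_inputs, n_hidden, n_tasks):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.Sigmoid())
        self.heads = nn.ModuleList([nn.Linear(n_hidden, 1) for _ in range(n_tasks)])

    def forward(self, x):
        h = self.shared(x)
        return torch.cat([torch.sigmoid(head(h)) for head in self.heads], dim=1)

def consolidate(net, x_new, y_new, new_task, n_virtual=500, epochs=200, lr=0.1):
    """Train output `new_task` while rehearsing tasks 0..new_task-1 on virtual examples."""
    # Task rehearsal: label random inputs (assumed to lie in [0, 1]^d) with the
    # *current* network's outputs for all previously consolidated tasks.
    x_virtual = torch.rand(n_virtual, x_new.shape[1])
    with torch.no_grad():
        y_virtual = net(x_virtual)[:, :new_task]

    opt = torch.optim.SGD(net.parameters(), lr=lr)
    bce = nn.BCELoss()
    for _ in range(epochs):
        opt.zero_grad()
        out = net(torch.cat([x_new, x_virtual]))
        # Plasticity: fit the new task on its real examples.
        loss = bce(out[:len(x_new), new_task], y_new)
        # Stability: hold prior tasks to their rehearsed (virtual) targets.
        if new_task > 0:
            loss = loss + bce(out[len(x_new):, :new_task], y_virtual)
        loss.backward()
        opt.step()

# Sequentially consolidate five synthetic tasks into one network.
net = MTLNet(n_inputs=10, n_hidden=20, n_tasks=5)
for t in range(5):
    x_t = torch.rand(100, 10)
    y_t = (x_t.sum(dim=1) > 5).float()  # stand-in for a real task's labels
    consolidate(net, x_t, y_t, new_task=t)

Under this scheme the virtual examples carry the stored tasks' functional knowledge, so no original training data for prior tasks needs to be retained.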
Copyright information
© 2004 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Silver, D.L., Poirier, R. (2004). Sequential Consolidation of Learned Task Knowledge. In: Tawfik, A.Y., Goodwin, S.D. (eds.) Advances in Artificial Intelligence. Canadian AI 2004. Lecture Notes in Computer Science (LNAI), vol. 3060. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24840-8_16
DOI: https://doi.org/10.1007/978-3-540-24840-8_16
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-22004-6
Online ISBN: 978-3-540-24840-8