Abstract
In the inductive inference framework of learning in the limit, a variation of the bounded example memory (Bem) language learning model is considered. Intuitively, the new model constrains the learner’s memory not only in how much data may be retained, but also in how long that data may be retained. More specifically, the model requires that, if a learner commits an example x to memory in some stage of the learning process, then there is some subsequent stage for which x no longer appears in the learner’s memory. This model is called temporary example memory (Tem) learning. In some sense, it captures the idea that memories fade.
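The temporary-memory constraint can be illustrated with a toy sketch (not the paper's formal model): a learner whose memory is a FIFO buffer of capacity k automatically satisfies the Tem condition on any sufficiently long text, because every committed example is evicted once k newer examples arrive. The function name `tem_learner` and the use of set-valued "hypotheses" are illustrative assumptions, not notation from the paper.

```python
from collections import deque

def tem_learner(text, k):
    """Toy sketch of a temporary-example-memory (Tem) constraint.

    The learner retains at most k examples, and any example it commits
    to memory eventually disappears from memory: FIFO eviction drops x
    once k newer examples have been committed. This is an illustrative
    sketch only, not the paper's formal learning model.
    """
    memory = deque(maxlen=k)      # bounded memory: at most k examples
    hypotheses = []
    for x in text:
        memory.append(x)          # commit x; oldest example evicted if full
        # Conjecture based only on the examples currently in memory.
        hypotheses.append(frozenset(memory))
    return hypotheses

# Example with k = 2: after examples 3 and 4 are committed,
# example 1 no longer appears in the learner's memory.
hyps = tem_learner([1, 2, 3, 4], k=2)
```

Here the final hypothesis depends only on {3, 4}; example 1, though memorized at an earlier stage, has faded from memory, which is the behavior the Tem model requires of every memorized example.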
Many interesting results concerning the Tem-learning model are presented. For example, there exists a class of languages that can be identified by memorizing k + 1 examples in the Tem sense, but that cannot be identified by memorizing k examples in the Bem sense. On the other hand, there exists a class of languages that can be identified by memorizing just 1 example in the Bem sense, but that cannot be identified by memorizing any number of examples in the Tem sense. (The proof of this latter result involves an infinitary self-reference argument.) Results are also presented concerning the special cases of: learning indexable classes of languages, and learning (arbitrary) classes of infinite languages.
© 2008 Springer-Verlag Berlin Heidelberg
Lange, S., Moelius, S.E., Zilles, S. (2008). Learning with Temporary Memory. In: Freund, Y., Györfi, L., Turán, G., Zeugmann, T. (eds.) Algorithmic Learning Theory. ALT 2008. Lecture Notes in Computer Science, vol. 5254. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87987-9_36
Print ISBN: 978-3-540-87986-2
Online ISBN: 978-3-540-87987-9