Abstract
We describe a new abstract model for the computational learning of grammars. The model deals with a learning process in which an algorithm is given as input a large set of training sentences that belong to some unknown grammar; the algorithm then tries to infer this grammar. Our model is based on the well-known Minimum Description Length (MDL) Principle. It is quite close to, but more general than, several other existing approaches. We show that one of these approaches, based on n-gram statistics, coincides exactly with a restricted version of our own model. We have used a restricted version of the algorithm implied by the model to find classes of related words in natural language texts. It turns out that for this task, which can be seen as a ‘degenerate’ case of grammar learning, our approach gives quite good results. Unlike many other approaches, it also provides a clear ‘stopping criterion’ indicating at what point the learning process should stop.
Partially supported by the European Union through Neuro-COLT ESPRIT Working Group Nr. 8556, and by NWO through NFI Project AL-ADDIN under Contract number NF 62-376.
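The word-clustering task described in the abstract can be illustrated with a small sketch. The two-part MDL cost below (model bits plus data bits under a class bigram model) and the greedy merge loop are illustrative assumptions of ours, not the paper's actual coding scheme; they only show the general shape of the method and how the MDL score itself supplies the stopping criterion (stop merging classes as soon as no merge shortens the total description).

```python
import math
from collections import Counter
from itertools import combinations

def description_length(corpus, classes):
    """Two-part MDL cost in bits: model cost + data cost.

    Model cost (a crude, assumed prior): log2(#classes) bits to state each
    word's class, plus bits for the class-bigram count table.  Data cost:
    negative log-likelihood of the corpus under an add-one-smoothed class
    bigram model, plus the cost of picking each word within its class.
    """
    vocab = sorted({w for c in classes for w in c})
    word2class = {w: i for i, c in enumerate(classes) for w in c}
    k = len(classes)
    model_bits = len(vocab) * math.log2(k) if k > 1 else 0.0
    model_bits += k * k * math.log2(len(corpus) + 1)
    bigrams = Counter(zip(map(word2class.get, corpus),
                          map(word2class.get, corpus[1:])))
    unigrams = Counter(map(word2class.get, corpus))
    data_bits = 0.0
    for (a, b), n in bigrams.items():
        p = (n + 1) / (unigrams[a] + k)       # add-one smoothing
        data_bits += -n * math.log2(p)
    for w in corpus:                          # word choice within its class
        data_bits += math.log2(len(classes[word2class[w]]))
    return model_bits + data_bits

def greedy_merge(corpus):
    """Start with one class per word; repeatedly merge the pair of classes
    that most reduces the total description length; stop when no merge
    helps (the MDL stopping criterion)."""
    classes = [[w] for w in sorted(set(corpus))]
    cost = description_length(corpus, classes)
    while len(classes) > 1:
        best = None
        for i, j in combinations(range(len(classes)), 2):
            merged = [c for m, c in enumerate(classes) if m not in (i, j)]
            merged.append(classes[i] + classes[j])
            c2 = description_length(corpus, merged)
            if c2 < cost and (best is None or c2 < best[0]):
                best = (c2, merged)
        if best is None:
            break
        cost, classes = best
    return classes
```

Because each merge is accepted only when it shortens the total description, the loop halts on its own, which is the kind of principled stopping criterion the abstract contrasts with other clustering approaches.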
Copyright information
© 1996 Springer-Verlag Berlin Heidelberg
Cite this paper
Grünwald, P. (1996). A minimum description length approach to grammar inference. In: Wermter, S., Riloff, E., Scheler, G. (eds) Connectionist, Statistical and Symbolic Approaches to Learning for Natural Language Processing. IJCAI 1995. Lecture Notes in Computer Science, vol 1040. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-60925-3_48
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-60925-4
Online ISBN: 978-3-540-49738-7