Abstract
Perplexity is a widely used criterion for comparing language models without any task assumptions. However, its main drawback is that it presupposes probability distributions and hence cannot compare heterogeneous models. As an evaluation framework, we propose in this article to abandon perplexity and to extend Shannon's entropy idea, which is based on model prediction performance, using rank-based statistics. Our methodology is able to predict joint word sequences independently of the task or model assumptions. Experiments are carried out on the English language with different kinds of language models. We show that long-term prediction language models are not more effective than standard n-gram models. Ranking distributions follow exponential laws, as already observed in predicting letter sequences. These distributions exhibit a second mode not observed with letters, and we propose an interpretation of this mode in this article.
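The rank-based evaluation described above can be illustrated with a minimal sketch: instead of scoring the probability the model assigns to the observed word, we record the rank of that word in the model's ordered list of candidates and study the resulting ranking distribution. The sketch below assumes a toy bigram count model; the function names and data are illustrative and not taken from the paper.

```python
from collections import Counter

def rank_of_true_word(context, true_word, model_counts):
    """Rank candidate next words by model score (most probable first)
    and return the 1-based rank of the word that actually occurred."""
    candidates = model_counts.get(context, Counter())
    ordering = [w for w, _ in candidates.most_common()]
    if true_word in ordering:
        return ordering.index(true_word) + 1
    return len(ordering) + 1  # unseen word: assign the worst possible rank

# Toy bigram "model": counts of next words given the previous word.
corpus = "the cat sat on the mat the cat ate the fish".split()
bigram_counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts.setdefault(prev, Counter())[nxt] += 1

# Ranking distribution over an evaluation sequence (here, the same toy text).
ranks = Counter(
    rank_of_true_word(prev, nxt, bigram_counts)
    for prev, nxt in zip(corpus, corpus[1:])
)
print(sorted(ranks.items()))  # histogram of ranks; the paper analyses its shape
```

Because only the ordering of candidates matters, the same histogram can be computed for any model able to rank words, probabilistic or not, which is what makes heterogeneous models comparable under this framework.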
References
Shannon, C.: Prediction and entropy of printed English. Bell System Technical Journal 30, 50–64 (1951)
Cover, T., King, R.: A convergent gambling estimate of the entropy of English. IEEE Transactions on Information Theory 24, 413–421 (1978)
Bimbot, F., El-Beze, M., Igounet, S., Jardino, M., Smaili, K., Zitouni, I.: An alternative scheme for perplexity estimation and its assessment for the evaluation of language models. Computer Speech and Language 15, 1–13 (2001)
Deligne, S., Bimbot, F.: Language modeling by variable length sequences: theoretical formulation and evaluation of multigrams. In: IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 169–172 (1995)
Chen, S., Goodman, J.: An empirical study of smoothing techniques for language modeling. Computer Speech and Language 13, 359–394 (1999)
Garside, R., Leech, G., Sampson, G.: The computational analysis of English: a corpus-based approach. Longman, London (1987)
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Alain, P., Boëffard, O., Barbot, N. (2006). Evaluating Language Models Within a Predictive Framework: An Analysis of Ranking Distributions. In: Sojka, P., Kopeček, I., Pala, K. (eds) Text, Speech and Dialogue. TSD 2006. Lecture Notes in Computer Science(), vol 4188. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11846406_40
DOI: https://doi.org/10.1007/11846406_40
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-39090-9
Online ISBN: 978-3-540-39091-6