
Another Look at the Data Sparsity Problem

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4188)

Abstract

Performance on a statistical language processing task relies upon accurate information being found in a corpus. However, it is known (and this paper will confirm) that many perfectly valid word sequences do not appear in training corpora. The percentage of n-grams in a test document that are seen in a training corpus is defined as n-gram coverage, and work in the speech processing community [7] has shown a correlation between n-gram coverage and word error rate (WER) on a speech recognition task. Other work (e.g. [1]) has shown that increasing the amount of training data consistently improves the performance of a language processing task. This paper extends that work by examining n-gram coverage for far larger corpora, considering a range of document types which vary in their similarity to the training corpora, and experimenting with a broader range of pruning techniques. The paper shows that large portions of language will not be represented within even very large corpora. It confirms that more data is always better, but how much better depends on several factors: the source of the additional data, the source of the test documents, and how the language model is pruned to account for sampling errors and to make computation reasonable.
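
To make the coverage statistic concrete, the following is a minimal sketch of how n-gram coverage might be computed. It is not code from the paper; the toy training and test strings and the whitespace tokenization are illustrative assumptions.

    from collections import Counter

    def ngrams(tokens, n):
        # Yield successive n-grams (as tuples) from a list of tokens.
        for i in range(len(tokens) - n + 1):
            yield tuple(tokens[i:i + n])

    def ngram_coverage(train_tokens, test_tokens, n):
        # Percentage of n-grams in the test document seen at least once in training.
        train_set = set(ngrams(train_tokens, n))
        test_counts = Counter(ngrams(test_tokens, n))
        total = sum(test_counts.values())
        if total == 0:
            return 0.0
        seen = sum(c for gram, c in test_counts.items() if gram in train_set)
        return 100.0 * seen / total

    # Toy example (hypothetical data, whitespace tokenization assumed):
    train = "the cat sat on the mat".split()
    test = "the cat sat on the rug".split()
    for n in (1, 2, 3):
        print(f"{n}-gram coverage: {ngram_coverage(train, test, n):.1f}%")

On this toy pair the script reports roughly 83%, 80%, and 75% coverage for unigrams, bigrams, and trigrams; on real corpora the same calculation would be run against a full training corpus, and pruning of the language model (as discussed in the paper) would change which training n-grams are retained.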



References

  1. Banko, M., Brill, E.: Mitigating the Paucity of Data Problem. In: Proceedings of the Conference on Human Language Technology (2001)

  2. Chen, S., Goodman, J.: An Empirical Study of Smoothing Techniques for Language Modeling. Technical Report TR-10-98, Harvard University (1998)

  3. Jelinek, F.: Up from Trigrams! In: Proceedings of Eurospeech 1991 (1991)

  4. Manning, C., Schütze, H.: Foundations of Statistical Natural Language Processing. MIT Press, Cambridge (1999)

  5. Moore, R.: There’s No Data Like More Data (But When Will Enough Be Enough?). In: Proceedings of the IEEE International Workshop on Intelligent Signal Processing (2001)

  6. Powell, W.: The Anarchist’s Cookbook. Ozark Press LLC (1970)

  7. Rosenfeld, R.: Optimizing Lexical and N-gram Coverage via Judicious Use of Linguistic Data. In: Proceedings of Eurospeech 1995 (1995)

  8. Klimt, B., Yang, Y.: Introducing the Enron Email Corpus. Carnegie Mellon University (2004)


Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Allison, B., Guthrie, D., Guthrie, L. (2006). Another Look at the Data Sparsity Problem. In: Sojka, P., Kopeček, I., Pala, K. (eds) Text, Speech and Dialogue. TSD 2006. Lecture Notes in Computer Science, vol 4188. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11846406_41


  • DOI: https://doi.org/10.1007/11846406_41

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-39090-9

  • Online ISBN: 978-3-540-39091-6

