ISCA Archive Interspeech 2016

Entropy Based Pruning for Non-Negative Matrix Based Language Models with Contextual Features

Barlas Oğuz, Issac Alphonso, Shuangyu Chang

Non-negative matrix-based language models were recently introduced [1] as a computationally efficient alternative to other feature-based models such as maximum-entropy models. We present a new entropy-based pruning algorithm for this class of language models that is fast and scalable. We report perplexity and word error rate results and compare them against regular n-gram pruning. We also train models with location and personalization features and report results at various pruning thresholds. We demonstrate that contextual features remain helpful over the vanilla model even after pruning to a similar size.
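
For context, the pruning criterion belongs to the same family as classic relative-entropy (Stolcke) pruning for backoff n-gram models: a parameter is dropped when removing it increases model entropy by less than a threshold. The Python sketch below illustrates that general criterion on a toy bigram model with unigram backoff; it is not the paper's NMF-specific algorithm, and the vocabulary, probabilities, and threshold are all illustrative assumptions.

import math

def backoff_weight(explicit, unigram):
    # Mass left for backed-off words, renormalized over the unigram
    # probabilities of words without an explicit bigram entry.
    return (1.0 - sum(explicit.values())) / (1.0 - sum(unigram[w] for w in explicit))

def cond_prob(w, explicit, unigram, alpha):
    # p(w | h): explicit bigram probability, else backoff to the unigram.
    return explicit.get(w, alpha * unigram[w])

def pruning_cost(h_prob, w_star, explicit, unigram):
    # Relative entropy D(p || p') incurred by pruning the bigram (h, w_star),
    # weighted by the history probability P(h): the Stolcke-style criterion.
    alpha = backoff_weight(explicit, unigram)
    pruned = {w: p for w, p in explicit.items() if w != w_star}
    alpha_p = backoff_weight(pruned, unigram)
    kl = 0.0
    for w in unigram:  # exact KL over a small toy vocabulary
        p = cond_prob(w, explicit, unigram, alpha)
        q = cond_prob(w, pruned, unigram, alpha_p)
        kl += p * math.log(p / q)
    return h_prob * kl

# Toy setup: three-word vocabulary, one history "the" with P("the") = 0.1.
unigram = {"cat": 0.5, "dog": 0.3, "fish": 0.2}
explicit = {"cat": 0.7, "dog": 0.2}  # explicit p(w | "the") entries
threshold = 1e-3                     # illustrative pruning threshold
for w in list(explicit):
    cost = pruning_cost(0.1, w, explicit, unigram)
    print(f"('the', {w}): cost = {cost:.6f} -> {'prune' if cost < threshold else 'keep'}")

Here a candidate bigram is pruned when its weighted KL divergence falls below the threshold; Stolcke derives a closed form for this quantity to avoid the explicit sum, whereas the sketch computes it exactly, which is only feasible for a small vocabulary.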


doi: 10.21437/Interspeech.2016-130

Cite as: Oğuz, B., Alphonso, I., Chang, S. (2016) Entropy Based Pruning for Non-Negative Matrix Based Language Models with Contextual Features. Proc. Interspeech 2016, 2328-2332, doi: 10.21437/Interspeech.2016-130

@inproceedings{oguz16_interspeech,
  author={Barlas Oğuz and Issac Alphonso and Shuangyu Chang},
  title={{Entropy Based Pruning for Non-Negative Matrix Based Language Models with Contextual Features}},
  year=2016,
  booktitle={Proc. Interspeech 2016},
  pages={2328--2332},
  doi={10.21437/Interspeech.2016-130}
}