ISCA Archive Interspeech 2020

Rescore in a Flash: Compact, Cache Efficient Hashing Data Structures for n-Gram Language Models

Grant P. Strimel, Ariya Rastrow, Gautam Tiwari, Adrien Piérard, Jon Webb

We introduce DashHashLM, an efficient data structure that stores an n-gram language model compactly while making minimal trade-offs in runtime lookup latency. The data structure implements a finite state transducer with lossless structural compression and outperforms comparable implementations in lookup speed in the small-footprint setting. DashHashLM introduces several optimizations to language model compression, designed to minimize expected memory accesses. We also present variations of DashHashLM suited to scenarios with different memory and latency constraints. We detail the algorithm and justify our design choices with comparative experiments on a speech recognition task. Specifically, we show that, compared to a highly optimized, compressed baseline n-gram representation, the proposed data structure achieves up to a 6× query speedup at the cost of roughly a 10% increase in memory size.
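For readers unfamiliar with hash-based n-gram storage, the sketch below illustrates the general idea the abstract refers to: n-grams are keyed directly in a hash table and queries fall back through backoff weights when a full n-gram is absent. This is a minimal, generic illustration only; the class name HashNGramLM and its methods are hypothetical, and DashHashLM's actual key packing, probing strategy, and quantization are described in the paper, not reproduced here.

    # Generic hash-table n-gram model with standard backoff lookup (illustrative only).
    from typing import Dict, Tuple

    class HashNGramLM:
        def __init__(self, order: int):
            self.order = order
            # maps an n-gram (tuple of word ids) -> (log10 prob, log10 backoff weight)
            self.table: Dict[Tuple[int, ...], Tuple[float, float]] = {}

        def add(self, ngram, logp, backoff=0.0):
            self.table[tuple(ngram)] = (logp, backoff)

        def score(self, history, word) -> float:
            """Return log10 P(word | history) using the usual backoff recursion."""
            history = tuple(history)[-(self.order - 1):]  # truncate to model order
            entry = self.table.get(history + (word,))
            if entry is not None:
                return entry[0]
            if not history:
                return float("-inf")  # OOV unigram; real models reserve <unk>
            # back off: add the history's backoff weight, then shorten the history
            bow = self.table.get(history, (0.0, 0.0))[1]
            return bow + self.score(history[1:], word)

    lm = HashNGramLM(order=3)
    lm.add((7,), -2.0, backoff=-0.5)  # unigram for word id 7
    lm.add((3, 7), -1.2)              # bigram "3 7"
    print(lm.score([5, 3], 7))        # trigram missing, backs off to the bigram: -1.2

The contrast with such a naive table is that each probe here may touch scattered memory; the paper's contribution is arranging the hashed entries so that expected memory accesses, and hence cache misses, are minimized at a small cost in model size.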


doi: 10.21437/Interspeech.2020-1939

Cite as: Strimel, G.P., Rastrow, A., Tiwari, G., Piérard, A., Webb, J. (2020) Rescore in a Flash: Compact, Cache Efficient Hashing Data Structures for n-Gram Language Models. Proc. Interspeech 2020, 3386-3390, doi: 10.21437/Interspeech.2020-1939

@inproceedings{strimel20_interspeech,
  author={Grant P. Strimel and Ariya Rastrow and Gautam Tiwari and Adrien Piérard and Jon Webb},
  title={{Rescore in a Flash: Compact, Cache Efficient Hashing Data Structures for n-Gram Language Models}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={3386--3390},
  doi={10.21437/Interspeech.2020-1939}
}