ISCA Archive Interspeech 2022

From Undercomplete to Sparse Overcomplete Autoencoders to Improve LF-MMI based Speech Recognition

Selen Hande Kabil, Hervé Bourlard

Starting from a strong Lattice-Free Maximum Mutual Information (LF-MMI) baseline system, we explore different autoencoder configurations to enhance Mel-Frequency Cepstral Coefficients (MFCC) features. The autoencoders are expected to generate new MFCC features that can be used in our LF-MMI based baseline system (with or without retraining) to improve speech recognition. Starting from shallow undercomplete autoencoders, and their known equivalence with Principal Component Analysis (PCA), we move to deeper or sparser architectures. In the spirit of kernel-based learning methods, we explore alternatives where the autoencoder first goes overcomplete (i.e., expands the representation space) in a nonlinear way, before being restricted by a subsequent bottleneck layer. Finally, as a third solution, we use sparse overcomplete autoencoders, where a sparsity constraint is imposed on the higher-dimensional encoding layer. Experimental results are provided on the Augmented Multiparty Interaction (AMI) dataset, where we show that all of the aforementioned architectures improve speech recognition performance.
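As a rough illustration of the third configuration described in the abstract, below is a minimal PyTorch sketch of a sparse overcomplete autoencoder: an encoding layer wider than the input, with an L1 penalty on the code to encourage sparsity. The layer sizes, nonlinearity, and sparsity weight here are illustrative assumptions, not the authors' exact recipe or training setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseOvercompleteAE(nn.Module):
    """Overcomplete autoencoder: the code is higher-dimensional than the
    input, and sparsity is encouraged via an L1 penalty on the code."""

    def __init__(self, input_dim=40, code_dim=512):  # dims are assumptions
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, input_dim)

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code


def ae_loss(x, recon, code, l1_weight=1e-3):
    # reconstruction error plus sparsity penalty on the overcomplete code
    return F.mse_loss(recon, x) + l1_weight * code.abs().mean()


# toy usage on random MFCC-like frames (batch of 32 frames, 40 coefficients)
model = SparseOvercompleteAE()
x = torch.randn(32, 40)
recon, code = model(x)
loss = ae_loss(x, recon, code)
loss.backward()
```

In this sketch the reconstructed (or encoded) features would then replace the original MFCCs as input to the LF-MMI acoustic model; an undercomplete variant is obtained simply by choosing a code dimension smaller than the input, and the overcomplete-then-bottleneck variant by stacking a narrow layer after the wide one.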


doi: 10.21437/Interspeech.2022-11390

Cite as: Hande Kabil, S., Bourlard, H. (2022) From Undercomplete to Sparse Overcomplete Autoencoders to Improve LF-MMI based Speech Recognition. Proc. Interspeech 2022, 1061-1065, doi: 10.21437/Interspeech.2022-11390

@inproceedings{handekabil22_interspeech,
  author={Selen {Hande Kabil} and Herve Bourlard},
  title={{From Undercomplete to Sparse Overcomplete Autoencoders to Improve LF-MMI based Speech Recognition}},
  year=2022,
  booktitle={Proc. Interspeech 2022},
  pages={1061--1065},
  doi={10.21437/Interspeech.2022-11390}
}