ISCA Archive Interspeech 2016

Recurrent Models for Auditory Attention in Multi-Microphone Distant Speech Recognition

Suyoun Kim, Ian Lane

Integration of multiple microphone data is one of the key ways to achieve robust speech recognition in noisy environments or when the speaker is located at some distance from the input device. Signal processing techniques such as beamforming are widely used to extract a speech signal of interest from background noise. These techniques, however, are highly dependent on prior spatial information about the microphones and the environment in which the system is being used. In this work, we present a neural attention network that directly combines multi-channel audio to generate phonetic states without requiring any prior knowledge of the microphone layout or any explicit signal preprocessing for speech enhancement. We embed an attention mechanism within a Recurrent Neural Network based acoustic model to automatically tune its attention to a more reliable input source. Unlike traditional multi-channel preprocessing, our system can be optimized towards the desired output in one step. Although attention-based models have recently achieved impressive results on sequence-to-sequence learning, no attention mechanism has previously been applied to learn from multiple potentially asynchronous and non-stationary inputs. We evaluate our neural attention model on the CHiME-3 task, and show that the model achieves performance comparable to beamforming using a purely data-driven method.
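To illustrate the idea described in the abstract, the following is a minimal sketch (not the authors' exact architecture) of channel-wise attention over multi-microphone features: per-frame scores are computed for each channel, softmax-normalized across channels, and used to fuse the channels into a single stream that feeds a recurrent acoustic model producing phonetic-state posteriors. Input shapes, layer sizes, and the scoring function are illustrative assumptions.

import torch
import torch.nn as nn

class ChannelAttentionAM(nn.Module):
    def __init__(self, feat_dim=40, hidden_dim=256, num_states=1000):
        super().__init__()
        # Scores each channel's frame-level features; softmax over channels
        # yields the attention weights used to fuse the channels.
        self.score = nn.Linear(feat_dim, 1)
        # Recurrent acoustic model over the fused single-channel stream.
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_states)

    def forward(self, x):
        # x: (batch, time, channels, feat_dim), e.g. log-mel features per channel
        weights = torch.softmax(self.score(x), dim=2)   # (B, T, C, 1)
        fused = (weights * x).sum(dim=2)                # (B, T, feat_dim)
        h, _ = self.rnn(fused)                          # (B, T, hidden_dim)
        return self.out(h)                              # phonetic-state logits

# Example with 6 microphone channels, as in the CHiME-3 setup.
model = ChannelAttentionAM()
logits = model(torch.randn(2, 100, 6, 40))
print(logits.shape)  # torch.Size([2, 100, 1000])

Because the attention weights are produced inside the acoustic model, the whole system can be trained end-to-end towards the recognition objective, in contrast to a separate beamforming front end.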


doi: 10.21437/Interspeech.2016-326

Cite as: Kim, S., Lane, I. (2016) Recurrent Models for Auditory Attention in Multi-Microphone Distant Speech Recognition. Proc. Interspeech 2016, 3838-3842, doi: 10.21437/Interspeech.2016-326

@inproceedings{kim16d_interspeech,
  author={Suyoun Kim and Ian Lane},
  title={{Recurrent Models for Auditory Attention in Multi-Microphone Distant Speech Recognition}},
  year={2016},
  booktitle={Proc. Interspeech 2016},
  pages={3838--3842},
  doi={10.21437/Interspeech.2016-326}
}