ISCA Archive Interspeech 2016

Open-Domain Audio-Visual Speech Recognition: A Deep Learning Approach

Yajie Miao, Florian Metze

Automatic speech recognition (ASR) on video data naturally has access to two modalities: audio and video. In previous work, audio-visual ASR, which leverages visual features to improve recognition, has been explored only on restricted video domains. This paper extends the idea to open-domain videos, such as videos uploaded to YouTube, by adopting a unified deep learning approach. First, for the visual features, we propose to use segment-level (utterance-level) features instead of the highly restrictive frame-level features. These visual features are extracted with deep learning architectures that have been pre-trained on computer vision tasks, e.g., object recognition and scene labeling. Second, the visual features are incorporated into ASR under deep-learning-based acoustic modeling. In addition to simple feature concatenation, we apply an adaptive training framework that incorporates the visual features in a more flexible way. On a challenging video transcription task, audio-visual ASR with the proposed approach achieves notable reductions in word error rate (WER) compared to ASR using only speech features.
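The simpler of the two fusion variants, feature concatenation, can be pictured as appending one fixed utterance-level visual vector to every acoustic frame before the frames enter the acoustic model. The sketch below is a minimal illustration of that idea, assuming mean-pooled CNN activations as the segment-level visual feature; the function name and dimensions are hypothetical, not the authors' exact pipeline.

```python
import numpy as np

def build_audio_visual_inputs(acoustic_frames, frame_visual_feats):
    """Hypothetical sketch of simple feature concatenation: append one
    utterance-level visual vector to every acoustic frame."""
    # Segment-level visual feature: mean-pool the per-frame CNN
    # activations over the whole utterance (one possible pooling choice).
    utterance_visual = frame_visual_feats.mean(axis=0)
    # Tile the visual vector so each acoustic frame gets a copy,
    # then concatenate along the feature dimension.
    tiled = np.tile(utterance_visual, (acoustic_frames.shape[0], 1))
    return np.concatenate([acoustic_frames, tiled], axis=1)

# Illustrative shapes: 300 frames of 40-dim filterbanks plus a 512-dim
# CNN embedding pooled from video frames sampled at a lower rate.
audio = np.random.randn(300, 40)
visual = np.random.randn(75, 512)
inputs = build_audio_visual_inputs(audio, visual)
print(inputs.shape)  # (300, 552)
```

The adaptive-training variant described in the abstract replaces this static concatenation with a learned, more flexible conditioning of the acoustic model on the visual vector; see the paper for the actual formulation.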


doi: 10.21437/Interspeech.2016-412

Cite as: Miao, Y., Metze, F. (2016) Open-Domain Audio-Visual Speech Recognition: A Deep Learning Approach. Proc. Interspeech 2016, 3414-3418, doi: 10.21437/Interspeech.2016-412

@inproceedings{miao16_interspeech,
  author={Yajie Miao and Florian Metze},
  title={{Open-Domain Audio-Visual Speech Recognition: A Deep Learning Approach}},
  year={2016},
  booktitle={Proc. Interspeech 2016},
  pages={3414--3418},
  doi={10.21437/Interspeech.2016-412}
}