ISCA Archive Interspeech 2016

Feature-Level Decision Fusion for Audio-Visual Word Prominence Detection

Martin Heckmann

Common fusion techniques in audio-visual speech processing operate on the modality level: they either combine the features extracted from the two modalities directly, or derive a decision for each modality separately and then combine these decisions. We investigate the audio-visual processing of linguistic prosody, more precisely the detection of word prominence. In this context, the individual features within each modality can be assumed to be only partially dependent. Hence, we propose to train a separate classifier for each of these features, in both the acoustic and the visual modality, and to combine the resulting decisions. We compare this approach with conventional fusion methods, i.e. feature fusion and decision fusion on the modality level. Our results show that feature-level decision fusion clearly outperforms the other approaches, in particular when we additionally integrate the features resulting from the feature fusion. Compared to detection based only on the full audio stream, audio-visual detection yields relative improvements of 19% for clean audio and up to 50% for noisy audio.
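The fusion scheme sketched in the abstract lends itself to a compact illustration. The following Python snippet is a minimal sketch using scikit-learn, not the paper's implementation: the feature-group names (pitch, energy, lip_motion), the logistic-regression experts, and the second-stage fuser are all assumptions chosen for exposition. It trains one classifier per feature stream, adds a feature-fusion expert over the concatenated streams, and combines all per-expert posteriors with a decision-level classifier:

import numpy as np
from sklearn.linear_model import LogisticRegression

def _posteriors(experts, feature_groups):
    # Stack the positive-class posterior of every per-feature expert,
    # plus the feature-fusion expert, into one matrix (n_samples, n_experts).
    names = sorted(feature_groups)
    cols = [experts[n].predict_proba(feature_groups[n])[:, 1] for n in names]
    X_all = np.hstack([feature_groups[n] for n in names])
    cols.append(experts["feature_fusion"].predict_proba(X_all)[:, 1])
    return np.column_stack(cols)

def train_fusion(feature_groups, y):
    # One expert per feature stream (feature-level decision fusion) ...
    names = sorted(feature_groups)
    experts = {n: LogisticRegression(max_iter=1000).fit(feature_groups[n], y)
               for n in names}
    # ... plus one expert on the concatenated streams (feature fusion),
    # integrated as an additional decision source.
    X_all = np.hstack([feature_groups[n] for n in names])
    experts["feature_fusion"] = LogisticRegression(max_iter=1000).fit(X_all, y)
    # Second stage: fuse all expert posteriors on the decision level.
    fuser = LogisticRegression(max_iter=1000).fit(_posteriors(experts, feature_groups), y)
    return experts, fuser

def predict_fusion(experts, fuser, feature_groups):
    return fuser.predict(_posteriors(experts, feature_groups))

# Toy usage with synthetic, loosely class-dependent feature streams
# (hypothetical group names; real prominence features would differ).
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
groups = {name: rng.normal(size=(200, d)) + y[:, None]
          for name, d in [("pitch", 4), ("energy", 3), ("lip_motion", 5)]}
experts, fuser = train_fusion(groups, y)
predictions = predict_fusion(experts, fuser, groups)

Note that this sketch trains the second-stage fuser on the experts' own training posteriors for brevity; in practice one would derive the fuser's inputs from held-out data (e.g. via cross-validation) to avoid overfitting.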


doi: 10.21437/Interspeech.2016-163

Cite as: Heckmann, M. (2016) Feature-Level Decision Fusion for Audio-Visual Word Prominence Detection. Proc. Interspeech 2016, 575-579, doi: 10.21437/Interspeech.2016-163

@inproceedings{heckmann16_interspeech,
  author={Martin Heckmann},
  title={{Feature-Level Decision Fusion for Audio-Visual Word Prominence Detection}},
  year={2016},
  booktitle={Proc. Interspeech 2016},
  pages={575--579},
  doi={10.21437/Interspeech.2016-163}
}