Abstract:
Articulatory feature models have been proposed in the automatic speech recognition community as an alternative to phone-based models of speech. In this paper, we extend this approach to the visual modality. Specifically, we adapt a recently proposed feature-based model of pronunciation variation to visual speech recognition (VSR) using a set of visually salient features. The model uses a dynamic Bayesian network (DBN) to represent the evolution of the feature streams. A bank of SVM feature classifiers, with outputs converted to likelihoods, provides input to the DBN. We present preliminary experiments on an isolated-word VSR task, comparing feature-based and viseme-based units and studying the effects of modeling inter-feature asynchrony.
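The abstract mentions converting SVM classifier outputs to likelihoods before feeding them to the DBN. The paper does not specify the conversion here, but a common approach is a Platt-style sigmoid mapping of the raw decision score; the sketch below illustrates that idea, with placeholder parameters `a` and `b` that would normally be fit on held-out data.

```python
import math

def svm_score_to_likelihood(score, a=-1.0, b=0.0):
    """Map a raw SVM decision score to a pseudo-likelihood via a
    Platt-style sigmoid: p = 1 / (1 + exp(a*score + b)).

    a and b are illustrative placeholders here; in practice they are
    fit on held-out data so the output approximates P(class | score).
    """
    return 1.0 / (1.0 + math.exp(a * score + b))

# A positive margin maps above 0.5, a negative margin below it,
# and a score of zero sits exactly on the decision boundary.
print(svm_score_to_likelihood(2.0))   # confident positive
print(svm_score_to_likelihood(0.0))   # on the boundary -> 0.5
print(svm_score_to_likelihood(-2.0))  # confident negative
```

In a feature-based VSR system, one such classifier per articulatory feature would produce a likelihood stream, and the DBN combines these streams over time.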
Published in: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), 2005.
Date of Conference: 23 March 2005
Date Added to IEEE Xplore: 09 May 2005
Print ISBN: 0-7803-8874-7