ABSTRACT
This paper presents the development and evaluation of a speaker-independent audio-visual speech recognition (AVSR) system that utilizes a segment-based modeling strategy. To support this research, we have collected a new video corpus, called Audio-Visual TIMIT (AV-TIMIT), which consists of four hours of read speech collected from 223 different speakers. This corpus was used to evaluate the AVSR system, which incorporates a novel audio-visual integration scheme using segment-constrained Hidden Markov Models (HMMs). Preliminary experiments demonstrate improvements in phonetic recognition performance when visual information is incorporated into the speech recognition process.
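The abstract does not detail the segment-constrained integration scheme itself, but the general AVSR setup it describes involves combining an audio feature stream with a lower-rate visual feature stream. As a hedged illustration only (not the paper's method), the sketch below shows a common baseline: early fusion by upsampling visual features to the audio frame rate and concatenating per frame. The frame rates, feature dimensions, and function name are illustrative assumptions.

```python
import numpy as np

def fuse_features(audio_feats, visual_feats):
    """Early-fusion sketch (illustrative, not the paper's segment-based scheme):
    upsample the visual stream to the audio frame rate with nearest-neighbor
    indexing, then concatenate the two streams frame by frame.

    audio_feats:  (T_a, D_a) array, e.g. MFCCs at ~100 frames/s
    visual_feats: (T_v, D_v) array, e.g. lip-region features at ~30 frames/s
    """
    T_a = audio_feats.shape[0]
    T_v = visual_feats.shape[0]
    # Map each audio frame index to the nearest earlier visual frame.
    idx = np.minimum(np.arange(T_a) * T_v // T_a, T_v - 1)
    visual_up = visual_feats[idx]
    # Fused frames have dimension D_a + D_v.
    return np.concatenate([audio_feats, visual_up], axis=1)

# One second of synthetic features at assumed rates/dimensions.
audio = np.random.randn(100, 13)   # 13-dim audio features, 100 fps
video = np.random.randn(30, 20)    # 20-dim visual features, 30 fps
fused = fuse_features(audio, video)
print(fused.shape)  # (100, 33)
```

In practice, AVSR systems (including the segment-constrained approach named here) typically go beyond simple concatenation, e.g. by weighting or constraining the two streams during decoding; this sketch only fixes the alignment step common to such pipelines.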