
Audio-visual speech recognition using depth information from the Kinect in noisy video conditions

Published: 06 June 2012

Abstract

In this paper we build on our recent work, in which we successfully incorporated facial depth data of a speaker, captured by the Microsoft Kinect device, as a third data stream in an audio-visual automatic speech recognizer. In particular, we focus on whether the depth stream provides sufficient speech information to improve system robustness in noisy audio-visual conditions, thus studying system operation beyond the traditional scenarios where noise is applied to the audio signal alone. For this purpose, we consider four realistic visual-modality degradations at various noise levels, and we conduct small-vocabulary recognition experiments on an appropriate, previously collected, audio-visual database. Our results demonstrate improved system performance due to the depth modality, as well as a considerable accuracy increase when using both the visual and depth modalities over audio-only speech recognition.
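A common way to combine audio, visual, and depth streams in multi-stream HMM recognizers of this kind is to sum per-stream log-likelihoods under exponent weights, which can then be shifted toward the more reliable streams under noise. The sketch below illustrates that decision rule only; the scalar features, Gaussian stream models, and weight values are invented for the example and are not taken from the paper.

```python
import math

def gauss_loglik(x, mean, var):
    # Log-likelihood of a scalar observation x under a 1-D Gaussian.
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def multistream_loglik(obs, models, weights):
    # Weighted multi-stream combination:
    #   log b(o) = sum_s  w_s * log b_s(o_s)
    # obs: one feature value per stream; models: (mean, var) per stream.
    return sum(w * gauss_loglik(x, m, v)
               for x, (m, v), w in zip(obs, models, weights))

# Hypothetical per-frame scalar features for the three streams
# (audio, visual, depth); values are illustrative only.
obs = (0.2, -0.1, 0.05)
models = ((0.0, 1.0), (0.0, 1.0), (0.0, 1.0))

# Down-weighting the audio stream under audio noise, relying
# more heavily on the visual and depth streams.
weights = (0.2, 0.4, 0.4)
score = multistream_loglik(obs, models, weights)
```

In a full recognizer the weights are typically tuned per noise condition on held-out data, which is how a depth stream can add robustness when either the audio or the video channel degrades.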



Published In

PETRA '12: Proceedings of the 5th International Conference on PErvasive Technologies Related to Assistive Environments
June 2012
307 pages
ISBN:9781450313001
DOI:10.1145/2413097
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Sponsors

  • U of Tex at Arlington

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Microsoft Kinect
  2. audio-visual speech recognition
  3. depth information
  4. video noise

Qualifiers

  • Research-article

Conference

PETRA2012
Sponsor:
  • U of Tex at Arlington


Cited By

  • (2024) Lip-Geometry Feature-based Visual Digit Recognition. Smart Trends in Computing and Communications, pp. 397-406. DOI: 10.1007/978-981-97-1313-4_34. Online publication date: 2-Jun-2024
  • (2023) Review of Various Machine Learning and Deep Learning Techniques for Audio Visual Automatic Speech Recognition. 2023 International Conference on Intelligent Systems, Advanced Computing and Communication (ISACC), pp. 1-10. DOI: 10.1109/ISACC56298.2023.10084209. Online publication date: 3-Feb-2023
  • (2022) RETRACTED ARTICLE: Audio-Visual Automatic Speech Recognition Towards Education for Disabilities. Journal of Autism and Developmental Disorders, 53(9), pp. 3581-3594. DOI: 10.1007/s10803-022-05654-4. Online publication date: 12-Jul-2022
  • (2020) A multimodel keyword spotting system based on lip movement and speech features. Multimedia Tools and Applications, 79(27-28), pp. 20461-20481. DOI: 10.1007/s11042-020-08837-2. Online publication date: 20-Apr-2020
  • (2020) Routine Statistical Framework to Speculate Kannada Lip Reading. Advances in Computational Intelligence, Security and Internet of Things, pp. 26-38. DOI: 10.1007/978-981-15-3666-3_3. Online publication date: 5-Mar-2020
  • (2019) Recov-R. ACM Transactions on Computer-Human Interaction, 26(4), pp. 1-38. DOI: 10.1145/3325280. Online publication date: 16-Jul-2019
  • (2018) Three-Dimensional Joint Geometric-Physiologic Feature for Lip-Reading. 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI), pp. 1007-1012. DOI: 10.1109/ICTAI.2018.00155. Online publication date: Nov-2018
  • (2018) Audio-visual speech recognition integrating 3D lip information obtained from the Kinect. Multimedia Systems, 22(3), pp. 315-323. DOI: 10.1007/s00530-015-0499-9. Online publication date: 27-Dec-2018
  • (2017) Firearms training simulator based on low cost motion tracking sensor. Multimedia Tools and Applications, 76(1), pp. 1403-1418. DOI: 10.5555/3048137.3048221. Online publication date: 1-Jan-2017
  • (2017) Deep Temporal Architecture for Audiovisual Speech Recognition. Computer Vision, pp. 650-661. DOI: 10.1007/978-981-10-7299-4_54. Online publication date: 30-Nov-2017
