
Depth-based features in audio-visual speech recognition


Abstract:

We study the impact of depth-based visual features in systems for visual and audio-visual speech recognition. Instead of reconstruction from multiple views, the depth maps are obtained by the Kinect sensor, which is better suited for real-world applications. We extract several types of visual features from the video and depth channels and evaluate their performance both individually and in cross-channel combination. To show the complementarity of the information in the video-based and depth-based features, we examine the relative importance of each channel when combined via weighted multi-stream Hidden Markov Models. We also introduce novel parametrizations based on the discrete cosine transform and histograms of oriented gradients. The contribution of all presented visual speech features is demonstrated on the task of audio-visual speech recognition under noisy conditions.
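The channel combination described above, in which each stream's observation likelihood is weighted before the streams are fused per HMM state, can be sketched in a few lines. This is a minimal illustration of the general multi-stream idea, not the paper's implementation; the function name, the example log-likelihood values, and the weight of 0.5 are all hypothetical.

```python
import numpy as np

def combine_stream_loglikes(ll_video, ll_depth, w_video=0.5):
    """Fuse per-state log-likelihoods from two visual streams.

    In a weighted multi-stream HMM, each stream weight acts as an
    exponent on that stream's likelihood, which in the log domain
    becomes a simple weighted sum per state. Weights are assumed
    here to sum to one; the value 0.5 is illustrative only.
    """
    w_depth = 1.0 - w_video
    return w_video * np.asarray(ll_video) + w_depth * np.asarray(ll_depth)

# Hypothetical per-state log-likelihoods for a single frame:
ll_v = [-4.0, -2.0, -7.0]   # video stream
ll_d = [-3.0, -5.0, -6.0]   # depth stream
print(combine_stream_loglikes(ll_v, ll_d))  # → [-3.5 -3.5 -6.5]
```

Sweeping `w_video` between 0 and 1 is one way to probe the relative importance of the two channels, as the abstract describes.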
Date of Conference: 27-29 June 2016
Date Added to IEEE Xplore: 01 December 2016
Conference Location: Vienna, Austria
