
Direct conversion from facial myoelectric signals to speech using Deep Neural Networks


Abstract:

This paper presents our first results using Deep Neural Networks for surface electromyographic (EMG) speech synthesis. The proposed approach enables a direct mapping from EMG signals, captured from the articulatory muscle movements, to the acoustic speech signal. Features are extracted from multiple EMG channels and fed into a feed-forward neural network that maps them to the target acoustic speech output. We show that this approach is able to generate speech output from the input EMG signal and compare the results to a prior mapping technique based on Gaussian mixture models. The comparison is conducted via objective Mel-Cepstral distortion scores and subjective listening tests, and it shows that the proposed Deep Neural Network approach yields substantial improvements on both evaluation criteria.
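The abstract gives only a high-level description of the approach, but a minimal sketch of the kind of frame-wise feed-forward EMG-to-acoustic mapping it describes might look as follows. PyTorch is assumed here; the feature dimensions, layer sizes, activations, and training setup are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch: frame-wise regression from stacked multi-channel EMG features
# to acoustic target features (e.g. mel-cepstral coefficients).
# All dimensions and hyperparameters below are assumptions for illustration.
import torch
import torch.nn as nn

class EMGToSpeechDNN(nn.Module):
    """Feed-forward network mapping stacked EMG feature frames to acoustic features."""

    def __init__(self, emg_dim: int = 6 * 32, acoustic_dim: int = 25, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emg_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, acoustic_dim),  # one acoustic feature vector per frame
        )

    def forward(self, emg_features: torch.Tensor) -> torch.Tensor:
        return self.net(emg_features)

# Illustrative training step: mean squared error between predicted and
# reference acoustic feature frames.
model = EMGToSpeechDNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

emg_batch = torch.randn(64, 6 * 32)   # 64 frames, 6 channels x 32 features each (assumed)
target_batch = torch.randn(64, 25)    # 64 frames of 25 mel-cepstral coefficients (assumed)

optimizer.zero_grad()
loss = loss_fn(model(emg_batch), target_batch)
loss.backward()
optimizer.step()
```

The predicted acoustic frames would then be passed to a vocoder to synthesize a waveform; that step, and the evaluation via Mel-Cepstral distortion and listening tests, is outside this sketch.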
Date of Conference: 12-17 July 2015
Date Added to IEEE Xplore: 01 October 2015
Conference Location: Killarney, Ireland
