ISCA Archive Interspeech 2022

CAUSE: Crossmodal Action Unit Sequence Estimation from Speech

Hirokazu Kameoka, Takuhiro Kaneko, Shogo Seki, Kou Tanaka

This paper proposes a task and method for estimating a sequence of facial action units (AUs) solely from speech. AUs were introduced in the facial action coding system to objectively describe facial muscle activations. Our motivation is that AUs can be useful continuous quantities for representing a speaker's subtle emotional states, attitudes, and moods in a variety of applications such as expressive speech synthesis and emotional voice conversion. We hypothesize that information about the speaker's facial muscle movements is expressed in the generated speech and can somehow be predicted from speech alone. To verify this, we devise a neural network model that predicts an AU sequence from the mel-spectrogram of input speech and train it using a large-scale audio-visual dataset consisting of many speaking face-tracks. We call our method and model "crossmodal AU sequence estimation/estimator (CAUSE)". We implemented several of the most basic architectures for CAUSE and quantitatively confirmed that the fully convolutional architecture performed best. Furthermore, by combining CAUSE with an AU-conditioned image-to-image translation method, we implemented a system that animates a given still face image from speech. Using this system, we confirmed the potential usefulness of AUs as a representation of non-linguistic features via subjective evaluations.
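As a rough illustration of the kind of fully convolutional mapping the abstract describes (mel-spectrogram in, framewise AU sequence out), the following is a minimal sketch in PyTorch. It is not the authors' implementation: the layer count, channel widths, kernel sizes, the 80-band mel input, and the choice of 17 target AUs are all illustrative assumptions, not values taken from the paper.

# Minimal sketch (assumed architecture, not the paper's): a 1-D fully
# convolutional network mapping a mel-spectrogram to framewise AU activations.
import torch
import torch.nn as nn


class CauseFCN(nn.Module):
    """Mel-spectrogram (batch, n_mels, frames) -> AU sequence (batch, n_aus, frames)."""

    def __init__(self, n_mels: int = 80, n_aus: int = 17, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, hidden, kernel_size=5, padding=2),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.Conv1d(hidden, n_aus, kernel_size=5, padding=2),
            nn.Sigmoid(),  # squash outputs to [0, 1] as normalized AU intensities
        )

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # Convolutions act only along the time axis, so the output AU sequence
        # has the same number of frames as the input mel-spectrogram.
        return self.net(mel)


if __name__ == "__main__":
    model = CauseFCN()
    mel = torch.randn(2, 80, 200)   # two utterances, 200 mel frames each
    aus = model(mel)                # shape (2, 17, 200): framewise AU estimates
    print(aus.shape)

Because the network is fully convolutional along time, it accepts utterances of arbitrary length and preserves the frame-level alignment between the input speech and the predicted AU sequence, which is consistent with the per-frame estimation task the abstract describes.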


doi: 10.21437/Interspeech.2022-11232

Cite as: Kameoka, H., Kaneko, T., Seki, S., Tanaka, K. (2022) CAUSE: Crossmodal Action Unit Sequence Estimation from Speech. Proc. Interspeech 2022, 506-510, doi: 10.21437/Interspeech.2022-11232

@inproceedings{kameoka22_interspeech,
  author={Hirokazu Kameoka and Takuhiro Kaneko and Shogo Seki and Kou Tanaka},
  title={{CAUSE: Crossmodal Action Unit Sequence Estimation from Speech}},
  year=2022,
  booktitle={Proc. Interspeech 2022},
  pages={506--510},
  doi={10.21437/Interspeech.2022-11232}
}