Abstract:
Both external and internal articulators are crucial to generating avatars in VR. Compared to the traditional talking head, which models only outward appearance, we extend it to a 3D singing head that takes a music signal as input and covers the entire head. For the results to look natural, a complete head model is first obtained by integrating matched multi-view visible images, a face prior, and generic internal articulatory meshes. Then, keyframes of the song animation are generated from real articulation data and physiology. Finally, the synchronization of the articulators with the song is learned from parallel audio-visual data using a deep neural network. Experiments show that our system produces realistic animation.
Date of Conference: 23-27 March 2019
Date Added to IEEE Xplore: 15 August 2019