
Multimodal Inputs Driven Talking Face Generation With Spatial–Temporal Dependency



Abstract:

Given an arbitrary speech clip or text as input, the proposed work aims to generate a talking face video with accurate lip synchronization. Existing works mainly suffer from three limitations. (1) Single-modal learning is adopted with either audio or text as input, so the complementarity of multimodal inputs is lost. (2) Each frame is generated independently, so the temporal dependency between consecutive frames is ignored. (3) Each face image is generated by a traditional convolutional neural network (CNN) with a local receptive field, which cannot effectively capture the spatial dependency within internal representations of face images. To overcome these problems, we decompose the talking face generation task into two steps: mouth landmark prediction and video synthesis. First, a multimodal learning method is proposed to generate accurate mouth landmarks from multimodal inputs (both text and audio). Second, a network named Face2Vid is proposed to generate video frames conditioned on the predicted mouth landmarks. In Face2Vid, optical flow is employed to model the temporal dependency between frames, while a self-attention mechanism is introduced to model the spatial dependency across image regions. Extensive experiments demonstrate that our approach can generate photo-realistic video frames with background, and show its superiority in accurate synchronization of lip movements and smooth transition of facial movements.
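The abstract describes, but does not specify, the self-attention mechanism used in Face2Vid to model spatial dependency across image regions. The sketch below is a minimal, generic SAGAN-style self-attention layer of the kind such a description suggests; the class name SelfAttention2d, the channel-reduction factor of 8, and the learned residual weight are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """Self-attention over the spatial positions of a CNN feature map.

    Illustrative sketch only: Face2Vid's exact layer is not given in the
    abstract, so this follows the common non-local / SAGAN formulation.
    """

    def __init__(self, in_channels: int):
        super().__init__()
        # 1x1 convolutions produce query/key/value projections per position.
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        # Learned residual weight, initialized to zero so the layer starts
        # as an identity mapping.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, h*w, c//8)
        k = self.key(x).flatten(2)                    # (b, c//8, h*w)
        v = self.value(x).flatten(2)                  # (b, c, h*w)
        # Attention weights between every pair of spatial positions,
        # so each region can attend to distant regions of the face.
        attn = F.softmax(torch.bmm(q, k), dim=-1)     # (b, h*w, h*w)
        out = torch.bmm(v, attn.transpose(1, 2))      # (b, c, h*w)
        out = out.view(b, c, h, w)
        return self.gamma * out + x                   # residual connection
```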
Page(s): 203 - 216
Date of Publication: 12 February 2020
