Abstract:
The goal of our work is to complete missing areas of images of talking faces, exploiting information from both the visual and audio modalities. Existing image inpainting methods rely solely on visual content, which does not always provide sufficient information for the task. To counter this, we propose a neural network that employs an encoder-decoder architecture with a bimodal fusion mechanism, thus taking into account both visual and audio content. Our proposed method demonstrates consistently superior performance over a baseline visual-only model, achieving, for example, up to a 17% relative improvement in mean absolute error. The presented model is applicable to practical video editing tasks, such as object and overlay-text removal from talking faces, where existing lip and face generation methods are not applicable because they require clean input.
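To make the described architecture concrete, the following is a minimal, hypothetical PyTorch sketch of an encoder-decoder with a bimodal fusion step: a convolutional encoder for the masked face image, a small audio-feature encoder, concatenation-based fusion over the spatial grid, and a transposed-convolution decoder. All layer sizes, the 128-dimensional audio feature, and the concatenation fusion are illustrative assumptions, not the authors' published design.

    # Hypothetical sketch of a bimodal (audio-visual) encoder-decoder for
    # face inpainting. Layer sizes and the concatenation-based fusion are
    # assumptions for illustration, not the paper's exact architecture.
    import torch
    import torch.nn as nn

    class AudioVisualInpainter(nn.Module):
        def __init__(self, audio_dim=128, fused_ch=256):
            super().__init__()
            # Visual encoder: masked RGB face image -> spatial feature map.
            self.vis_enc = nn.Sequential(
                nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            )
            # Audio encoder: a per-clip audio feature vector -> embedding.
            self.aud_enc = nn.Sequential(nn.Linear(audio_dim, 128), nn.ReLU())
            # Fusion: broadcast the audio embedding over the spatial grid
            # and concatenate with visual features along the channel axis.
            self.fuse = nn.Conv2d(128 + 128, fused_ch, 1)
            # Decoder: upsample fused features back to a full RGB image.
            self.dec = nn.Sequential(
                nn.ConvTranspose2d(fused_ch, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, masked_img, audio_feat):
            v = self.vis_enc(masked_img)   # (B, 128, H/4, W/4)
            a = self.aud_enc(audio_feat)   # (B, 128)
            a = a[:, :, None, None].expand(-1, -1, v.size(2), v.size(3))
            f = torch.relu(self.fuse(torch.cat([v, a], dim=1)))
            return self.dec(f)             # completed RGB image

    # Usage: a 64x64 masked face crop plus a 128-dim audio feature per sample.
    model = AudioVisualInpainter()
    out = model(torch.randn(2, 3, 64, 64), torch.randn(2, 128))
    print(out.shape)  # torch.Size([2, 3, 64, 64])

A visual-only baseline of the kind the abstract compares against would simply omit the audio branch and decode from the visual features alone; the fusion step is what lets audio cues (e.g., speech content) inform the reconstruction of occluded mouth regions.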
Published in: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Date of Conference: 04-08 May 2020
Date Added to IEEE Xplore: 09 April 2020