Abstract:
We tackle audio-visual inpainting, the problem of completing an image in such a way as to be consistent with the sound associated with the scene. To this end, we propose a multimodal, audio-visual inpainting method (AVIN), and show how to leverage sound to reconstruct semantically consistent images. AVIN is a two-stage algorithm: it first learns the scene semantics and reconstructs low-resolution images based on a conditional probability distribution over pixels conditioned on audio, and then refines this result with a GAN-based network to increase the resolution of the reconstructed image. We show that AVIN is able to recover the original content, especially in the hard cases where the missing area heavily degrades the scene semantics: it can perform cross-modal generation whenever no visual context is observed at all, reconstructing visual data from sound only. Code will be made available upon acceptance.
Published in: ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Date of Conference: 04-10 June 2023
Date Added to IEEE Xplore: 05 May 2023