Adversarial nets with perceptual losses for text-to-image synthesis


Abstract:

Recent approaches in generative adversarial networks (GANs) can automatically synthesize realistic images from descriptive text. Despite overall fair quality, the generated images often exhibit visible flaws and lack structural definition for the object of interest. In this paper, we aim to extend the state of the art in GAN-based text-to-image synthesis by improving the perceptual quality of the generated images. Unlike previous work, our image generator is optimized with perceptual loss functions that measure pixel, feature-activation, and texture differences against a natural image. Compared with some of the most prominent existing work, we present visually more compelling synthetic images of birds and flowers generated from text descriptions.
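The three loss components named in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the feature maps would in practice come from a pretrained CNN (the abstract does not specify which network or layers), and the combination weights here are placeholders.

```python
import numpy as np

def pixel_loss(gen, real):
    # Mean squared error directly in pixel space.
    return np.mean((gen - real) ** 2)

def feature_loss(feat_gen, feat_real):
    # MSE between feature activations, e.g. from a pretrained CNN layer.
    return np.mean((feat_gen - feat_real) ** 2)

def gram_matrix(feat):
    # feat: (channels, height, width) activation map.
    # Channel-by-channel inner products summarize texture statistics.
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def texture_loss(feat_gen, feat_real):
    # Difference of Gram matrices (style-loss style texture comparison).
    return np.mean((gram_matrix(feat_gen) - gram_matrix(feat_real)) ** 2)

def perceptual_loss(gen, real, feat_gen, feat_real, w=(1.0, 1.0, 1.0)):
    # Weighted sum of the three components; weights are illustrative only.
    return (w[0] * pixel_loss(gen, real)
            + w[1] * feature_loss(feat_gen, feat_real)
            + w[2] * texture_loss(feat_gen, feat_real))
```

Each component is zero when the generated image (or its features) matches the natural reference exactly, and positive otherwise, so minimizing the weighted sum pushes the generator toward pixel-level, feature-level, and texture-level agreement at once.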
Date of Conference: 25-28 September 2017
Date Added to IEEE Xplore: 07 December 2017
Conference Location: Tokyo, Japan

