Abstract:
Generating images via a generative adversarial network (GAN) has attracted much attention recently. However, most of the existing GAN-based methods can only produce low-resolution images of limited quality. Directly generating high-resolution images using GANs is nontrivial, and often produces problematic images with incomplete objects. To address this issue, we develop a novel GAN called auto-embedding generative adversarial network, which simultaneously encodes the global structure features and captures the fine-grained details. In our network, we use an autoencoder to learn the intrinsic high-level structure of real images and design a novel denoiser network to provide photo-realistic details for the generated images. In the experiments, we are able to produce 512 × 512 images of promising quality directly from the input noise. The resultant images exhibit better perceptual photo-realism, that is, with sharper structure and richer details, than other baselines on several datasets, including Oxford-102 Flowers, Caltech-UCSD Birds (CUB), High-Quality Large-scale CelebFaces Attributes (CelebA-HQ), Large-scale Scene Understanding (LSUN), and ImageNet.
Published in: IEEE Transactions on Multimedia (Volume: 21, Issue: 11, November 2019)
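
The abstract describes a generator built from an autoencoder that captures global structure plus a denoiser network that restores fine detail. The sketch below is a minimal conceptual illustration of that idea, not the authors' implementation: it assumes a PyTorch setting, toy 32 × 32 resolution rather than 512 × 512, and hypothetical module names (Encoder, Decoder, NoiseToLatent, Denoiser) and a hypothetical latent size.

```python
# Conceptual sketch only; architecture details, losses, and names are assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 128  # hypothetical latent size

class Encoder(nn.Module):
    """Autoencoder half: maps a real image to a compact structure code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, LATENT_DIM),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Autoencoder half: reconstructs a coarse image (global structure) from the code."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT_DIM, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, z):
        h = self.fc(z).view(-1, 128, 8, 8)
        return self.net(h)

class NoiseToLatent(nn.Module):
    """Generator front-end: embeds input noise into the learned latent space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM),
        )
    def forward(self, noise):
        return self.net(noise)

class Denoiser(nn.Module):
    """Refinement network: adds residual fine-grained detail to the coarse output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )
    def forward(self, x):
        return torch.tanh(x + self.net(x))  # residual detail injection

encoder, decoder = Encoder(), Decoder()
noise_mapper, denoiser = NoiseToLatent(), Denoiser()

# Autoencoding path on real images (used to learn the structure latent space).
real = torch.randn(4, 3, 32, 32)
recon = decoder(encoder(real))

# Generation path: noise -> latent -> coarse structure -> refined image.
noise = torch.randn(4, LATENT_DIM)
coarse = decoder(noise_mapper(noise))
refined = denoiser(coarse)
print(recon.shape, coarse.shape, refined.shape)  # all torch.Size([4, 3, 32, 32]) here
```

In practice such a pipeline would be trained adversarially with a discriminator on the refined outputs and a reconstruction loss on the autoencoder branch; those training details are omitted here and would follow the paper rather than this sketch.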