Abstract
Unsupervised image-to-image translation techniques are able to map local texture between two domains, but they are typically unsuccessful when the domains require larger shape change. Inspired by semantic segmentation, we introduce a discriminator with dilated convolutions that is able to use information from across the entire image to train a more context-aware generator. This is coupled with a multi-scale perceptual loss that is better able to represent error in the underlying shape of objects. We demonstrate that this design is more capable of representing shape deformation in a challenging toy dataset, as well as in complex mappings with significant dataset variation between humans, dolls, and anime faces, and between cats and dogs.
1 Introduction
Unsupervised image-to-image translation is the process of learning an arbitrary mapping between image domains without labels or pairings. This can be accomplished via deep learning with generative adversarial networks (GANs), through the use of a discriminator network to provide instance-specific generator training, and the use of a cyclic loss to overcome the lack of supervised pairing. Prior works such as DiscoGAN [19] and CycleGAN [43] are able to transfer sophisticated local texture appearance between image domains, such as translating between paintings and photographs. However, these methods often have difficulty with objects that have both related appearance and shape changes; for instance, when translating between cats and dogs.
Coping with shape deformation in image translation tasks requires the ability to use spatial information from across the image. For instance, we cannot expect to transform a cat into a dog by simply changing the animals’ local texture. From our experiments, networks with fully connected discriminators, such as DiscoGAN, are able to represent larger shape changes given sufficient network capacity, but train much more slowly [17] and have trouble resolving smaller details. Patch-based discriminators, as used in CycleGAN, work well at resolving high frequency information and train relatively quickly [17], but have a limited ‘receptive field’ for each patch that only allows the network to consider spatially local content. These limitations reduce the amount of information received by the generator. Further, the functions used to maintain the cyclic loss prior in both networks retain high frequency information in the cyclic reconstruction, which is often detrimental to shape change tasks.
We propose an image-to-image translation system, designated GANimorph, to address shortcomings present in current techniques. To allow for patch-based discriminators to use more image context, we use dilated convolutions in our discriminator architecture [39]. This allows us to treat discrimination as a semantic segmentation problem: the discriminator outputs per-pixel real-vs.-fake decisions, each informed by global context. This per-pixel discriminator output facilitates more fine-grained information flow from the discriminator to the generator. We also use a multi-scale structure similarity perceptual reconstruction loss to help represent error over image areas rather than just over pixels. We demonstrate that our approach is more successful on a challenging shape deformation toy dataset than previous approaches. We also demonstrate example translations involving both appearance and shape variation by mapping human faces to dolls and anime characters, and mapping cats to dogs (Fig. 1).
The source code to our GANimorph system and all datasets are online: https://github.com/brownvc/ganimorph/.
2 Related Work
Image-to-Image Translation. Image analogies provides one of the earliest examples of image-to-image translation [14]. The approach relies on non-parametric texture synthesis and can handle transformations such as seasonal scene shifts [20], color and texture transformation, and painterly style transfer. Despite its ability to learn texture transfer, however, the model cannot alter the shape of objects. Recent research has extended the model to perform visual attribute transfer using neural networks [13, 23], but even with these improvements, deep image analogies are unable to achieve shape deformation.
Neural Style Transfer. These techniques show transfer of more complex artistic styles than image analogies [10]. They combine the style of one image with the content of another by matching the Gram matrix statistics of early-layer feature maps from neural networks trained on general supervised image recognition tasks. Further, Dumoulin et al. [8] extended Gatys et al.’s technique to allow interpolation between pre-trained styles, and Huang et al. [15] enabled real-time transfer. Despite this promise, these techniques have difficulty adapting to shape deformation, and empirical results have shown that these networks only capture low-level texture information [2]. Reference images can affect brush strokes, color palette, and local geometry, but larger changes such as anime-style combined appearance and shape transformations do not propagate.
Generative Adversarial Networks. Generative adversarial networks (GANs) have produced promising results in image editing [22], image translation [17], and image synthesis [11]. These networks learn an adversarial loss function to distinguish between real and generated samples. Isola et al. [17] demonstrated with Pix2Pix that GANs are capable of learning texture mappings between complex domains. However, this technique requires a large number of explicitly-paired samples. Some such datasets are naturally available, e.g., registered map and satellite photos, or image colorization tasks. We show in our supplemental material that our approach is also able to solve these limited-shape-change problems.
Unsupervised Image Translation GANs. Pix2Pix-like architectures have been extended to work with unsupervised pairs [19, 43]. Given image domains X and Y, these approaches work by learning a cyclic mapping from \(\mathrm{X}\rightarrow \mathrm{Y} \rightarrow \mathrm{X}\) and \(\mathrm{Y}\rightarrow \mathrm{X} \rightarrow \mathrm{Y}\). This creates a bijective mapping that prevents mode collapse in the unsupervised case. We build upon the DiscoGAN [19] and CycleGAN [43] architectures, which themselves extend Coupled GANs for style transfer [25]. We seek to overcome their shape change limitations through more efficient learning and expanded discriminator context via dilated convolutions, and by using a cyclic loss function that considers multi-scale frequency information (Table 1).
Other works tackle complementary problems. Yi et al. [38] focus on improving high frequency features over CycleGAN in image translation tasks, such as texture transfer and segmentation. Ma et al. [27] examine adapting CycleGAN to wider variety in the domains—so-called instance-level translation. Liu et al. [24] use two autoencoders to create a cyclic loss through a shared latent space with additional constraints. Several layers are shared between the two generators and an identity loss ensures that both domains resolve to the same latent vector. This produces some shape transformation in faces; however, the network does not improve the discriminator architecture to provide greater context awareness.
One qualitatively different approach is to introduce object-level segmentation maps into the training set. Liang et al.’s ContrastGAN [22] has demonstrated shape change by learning segmentation maps and combining multiple conditional cyclic generative adversarial networks. However, this additional input is often unavailable and time-consuming to produce.
3 Our Approach
Crucial to the success of translation under shape deformation is the ability to maintain consistency over global shapes as well as local texture. Our algorithm adopts the cyclic image translation framework [19, 43] and achieves the required consistency by incorporating a new dilated discriminator, a generator with residual blocks and skip connections, and a multi-scale perceptual cyclic loss.
3.1 Dilated Discriminator
Initial approaches used a global discriminator with a fully connected layer [19]. Such a discriminator collapses an image to a single scalar value for determining image veracity. Later approaches [22, 43] used a patch-based DCGAN [32] discriminator, initially developed for style transfer and texture synthesis [21]. In this type of discriminator, each image patch is evaluated to determine a fake or real score. The patch-based approach allows for fast generator convergence by operating on each local patch independently. This approach has proven effective for texture transfer, segmentation, and similar tasks. However, this patch-based view limits the networks’ awareness of global spatial information, which limits the generator’s ability to perform coherent global shape change.
Reframing Discrimination as Semantic Segmentation. To solve this issue, we reframe the discrimination problem from determining real/fake images or subimages into the more general problem of finding real or fake regions of the image, i.e., a semantic segmentation task. Since the discriminator outputs a higher-resolution segmentation map, the information flow between the generator and discriminator increases. This allows for faster convergence than using a fully connected discriminator, such as in DiscoGAN.
Current state-of-the-art networks for segmentation use dilated convolutions, and have been shown to require far fewer parameters than conventional convolutional networks to achieve similar levels of accuracy [39]. Dilated convolutions provide advantages over both global and patch-based discriminator architectures. For the same parameter budget, they allow the prediction to incorporate data from a larger surrounding region. This increases the information flow between the generator and discriminator: by knowing which regions of the image contribute to making the image unrealistic, the generator can focus on that region of the image. An alternative way to think about dilated convolutions is that they allow the discriminator to implicitly learn context. While multi-scale discriminators have been shown to improve results and stability for high resolution image synthesis tasks [35], we will show that incorporating information from farther away in the image is useful in translation tasks as the discriminator can determine where a region should fit into an image based on surrounding data. For example, this increased spatial context helps localize the face of a dog relative to its body, which is difficult to learn from small patches or patches learned in isolation from their neighbors. Figure 2 (right) illustrates our discriminator architecture.
Fig. 2. (Left) Generators from different unsupervised image translation models. The skip connections and residual blocks are combined via concatenation as opposed to addition. (Right) Our discriminator network architecture is a fully-convolutional segmentation network. Each colored block represents a convolution layer; block labels indicate filter size. In addition to global context from the dilations, the skip connection bypassing the dilated convolution blocks preserves the network’s view of local context. (Color figure online)
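To make the pattern concrete, the following minimal PyTorch sketch shows a fully-convolutional discriminator whose middle blocks use dilated convolutions and whose output is a per-pixel real/fake map. The channel counts, dilation rates, and layer counts are illustrative assumptions of this sketch, not the exact architecture of Fig. 2 or of our released implementation.

```python
import torch
import torch.nn as nn

class DilatedDiscriminator(nn.Module):
    """Fully-convolutional discriminator that outputs a per-pixel real/fake
    segmentation map. Channel counts and dilation rates are illustrative."""
    def __init__(self, in_channels=3, base=64):
        super().__init__()
        # Strided convolutions reduce spatial resolution and capture local texture.
        self.local = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Dilated convolutions enlarge the receptive field without further
        # downsampling, so each output location sees far-away image content.
        self.context = nn.Sequential(
            nn.Conv2d(base * 2, base * 4, 3, padding=2, dilation=2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 4, base * 4, 3, padding=4, dilation=4),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 4, base * 2, 3, padding=8, dilation=8),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # 1x1 convolution produces one real/fake logit per spatial location.
        self.head = nn.Conv2d(base * 2 + base * 2, 1, 1)

    def forward(self, x):
        local = self.local(x)
        context = self.context(local)
        # Skip connection around the dilated blocks preserves local detail.
        return self.head(torch.cat([local, context], dim=1))
```

Each spatial location of the output is then trained against a dense real/fake target map, which is the segmentation-style discrimination described above.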
3.2 Generator
Our generator architecture builds on those of DiscoGAN and CycleGAN. DiscoGAN uses a standard encoder-decoder architecture (Fig. 2, top left). However, its narrow bottleneck layer can lead to output images that do not preserve all the important visual details from the input image. Furthermore, due to the low capacity of the network, the approach remains limited to low resolution images of size \(64\times 64\). The CycleGAN architecture seeks to increase capacity over DiscoGAN by using a residual block to learn the image translation function [12]. Residual blocks have been shown to work in extremely deep networks, and they are able to represent low frequency information [2, 40].
However, using residual blocks at a single scale limits the information that can pass through the bottleneck and thus the functions that the network can learn. Our generator includes residual blocks at multiple layers of both the decoder and encoder, allowing the network to learn multi-scale transformations that work on both higher and lower spatial resolution features (Fig. 2, bottom left).
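As an illustration of this design, one decoder stage might look like the sketch below: upsample, concatenate the encoder skip connection (concatenation rather than addition, as in Fig. 2), and then refine with residual blocks at that scale. The block counts and channel widths here are illustrative, not those of our network.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block operating at a single spatial scale."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class DecoderStage(nn.Module):
    """Upsample, fuse the encoder skip connection by concatenation,
    then refine with residual blocks at this scale."""
    def __init__(self, in_ch, skip_ch, out_ch, n_res=2):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1)
        self.fuse = nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1)
        self.res = nn.Sequential(*[ResBlock(out_ch) for _ in range(n_res)])

    def forward(self, x, skip):
        x = self.up(x)                              # double spatial resolution
        x = self.fuse(torch.cat([x, skip], dim=1))  # concatenate, not add
        return self.res(x)                          # refine at this scale
```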
3.3 Objective Function
Perceptual Cyclic Loss. As per prior unsupervised image-to-image translation work [19, 22, 24, 38, 43], we use a cyclic loss to learn a bijective mapping between two image domains. However, not all image translation functions can be perfectly bijective, e.g., when one domain has smaller appearance variation, like human face photos vs. anime drawings. When all information in the input image cannot be preserved in the translation, the cyclic loss term should aim to preserve the most important information. Since the network should focus on image attributes of importance to human viewers, we should choose a perceptual loss that emphasizes shape and appearance similarity between the generated and target images.
Defining an explicit shape loss is difficult, as any explicit term requires known image correspondences between domains. These do not exist for our examples and our unsupervised setting. Further, including a more-complex perceptual neural network into the loss calculation imparts a significant computational and memory overhead. While using pretrained image classification networks as a perceptual loss can speed up style transfer [18], these do not work on shape changes as the pretrained networks tend only to capture low-level texture information [2].
Instead, we use multi-scale structure similarity loss (MS-SSIM) [36]. This loss better preserves features visible to humans instead of noisy high frequency information. MS-SSIM can also better cope with shape change since it can recognize geometric differences through area statistics. However, MS-SSIM alone can ignore smaller details, and does not capture color similarity well. Recent work has shown that mixing MS-SSIM with L1 or L2 losses is effective for super resolution and segmentation tasks [41]. Thus, we also add a lightly-weighted L1 loss term, which helps increase the clarity of generated images.
Feature Matching Loss. To increase the stability of the model, our objective function uses a feature matching loss [33]:
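In the spirit of Salimans et al. [33], such a term can be written as follows; this form is a sketch consistent with the notation below, for a generator \(G: X\rightarrow Y\) and a discriminator D on domain Y:

\[ \mathcal{L}_{\text{FM}}(G, D, X, Y) \;=\; \frac{1}{n}\sum_{i=1}^{n}\Big\Vert\, \mathbb{E}_{y\sim Y}\big[f_i(y)\big] \;-\; \mathbb{E}_{x\sim X}\big[f_i(G(x))\big] \,\Big\Vert_2^2 \qquad (1) \]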
where \(f_i(x)\) denotes the raw activation potentials of the \(i^{th}\) layer of the discriminator D, and n is the number of discriminator layers. This term encourages real and fake samples to produce similar activations in the discriminator, and so encourages the generator to create images that look more similar to the target domain. We found that this loss term helps prevent generator mode collapse, to which GANs are often susceptible [19, 33, 35].
Scheduled Loss Normalization (SLN). In a multi-part loss function, linear weights are often used to normalize the terms with respect to one another, with previous works often optimizing a single set of weights. However, finding appropriately-balanced weights can prove difficult without ground truth. Further, often a single set of weights is inappropriate because the magnitude of the loss terms changes over the course of training. Instead, we create a procedure to periodically renormalize each loss term and so control their relative values. This lets the user intuitively provide weights that sum to 1 to balance the loss terms in the model, without having knowledge of how their magnitudes will change over training.
Let \(\mathcal {L}\) be a loss function, and let \(\mathcal {X}_n = \{x_t\}_{t=1}^{bn}\) be a sequence of n batches of training inputs, each b images large, such that \(\mathcal {L}(x_t)\) is the training loss at iteration t. We compute an exponentially-weighted moving average of the loss:
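A standard form of this average, written here as a sketch consistent with the surrounding text, is

\[ \bar{\mathcal{L}}_t \;=\; \beta\, \bar{\mathcal{L}}_{t-1} \;+\; (1-\beta)\, \mathcal{L}(x_t), \]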
where \(\beta \) is the decay rate. We can renormalize the loss function by dividing it by this moving average. If we do this on every training iteration, however, the loss stays at its normalized average and no training progress is made. Instead, we schedule the loss normalization:
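One scheduled formulation consistent with this description (a sketch rather than the exact published form) is

\[ \text{SLN}(\mathcal{L}, x_t) \;=\; \frac{\mathcal{L}(x_t)}{c_t + \epsilon}, \qquad c_t \;=\; \begin{cases} \bar{\mathcal{L}}_t & \text{if } t \equiv 0 \pmod{s},\\ c_{t-1} & \text{otherwise.} \end{cases} \]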
Here, s is the scheduling parameter such that we apply normalization every s training iterations. For all experiments, we use \(\beta = 0.99\), \(\epsilon = 10^{-10}\), and \(s=200\).
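For concreteness, a minimal training-loop helper implementing this scheme could look as follows; the class name and structure are illustrative rather than taken from our released code.

```python
class ScheduledLossNorm:
    """Sketch of scheduled loss normalization (SLN): track an exponentially-
    weighted moving average of a loss term and divide by a snapshot of that
    average, refreshed every `s` training iterations."""
    def __init__(self, beta=0.99, eps=1e-10, s=200):
        self.beta, self.eps, self.s = beta, eps, s
        self.avg = None    # running average of the raw loss value
        self.scale = 1.0   # normalization constant, refreshed every s steps
        self.t = 0

    def __call__(self, loss):
        value = loss.item() if hasattr(loss, "item") else float(loss)
        self.avg = value if self.avg is None else \
            self.beta * self.avg + (1.0 - self.beta) * value
        if self.t % self.s == 0:
            self.scale = self.avg + self.eps
        self.t += 1
        return loss / self.scale

# One normalizer per objective term, e.g.:
# sln_gan, sln_fm, sln_cyc = ScheduledLossNorm(), ScheduledLossNorm(), ScheduledLossNorm()
# total = 0.49 * sln_gan(l_gan) + 0.21 * sln_fm(l_fm) + 0.30 * sln_cyc(l_cyc)
```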
One other difference concerns normalization: DiscoGAN uses batch normalization [16], while CycleGAN uses instance normalization [15]. We found that batch normalization caused excessive over-fitting to the training data, and so we use instance normalization.
Final Objective. Our final objective comprises three loss-normalized terms: a standard GAN loss, a feature matching loss, and a cyclic reconstruction loss built from two parts. Given image domains X and Y, let \(G: X\rightarrow Y\) map from X to Y and \(F:Y\rightarrow X\) map from Y to X. \(D_{X}\) and \(D_{Y}\) denote the discriminators for domains X and Y, respectively.
For GAN loss, we combine normal GAN loss terms from Goodfellow et al. [11]:
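A sketch of this combined term, applying the standard GAN loss in both translation directions (with the discriminators maximizing and the generators minimizing it), is

\[ \mathcal{L}_{\text{GAN}} \;=\; \mathbb{E}_{y\sim Y}\big[\log D_Y(y)\big] + \mathbb{E}_{x\sim X}\big[\log\big(1 - D_Y(G(x))\big)\big] + \mathbb{E}_{x\sim X}\big[\log D_X(x)\big] + \mathbb{E}_{y\sim Y}\big[\log\big(1 - D_X(F(y))\big)\big]. \]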
For feature matching loss, we use Eq. 1 for each domain:
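That is, as a sketch in the notation of Eq. 1,

\[ \mathcal{L}_{\text{FM}} \;=\; \mathcal{L}_{\text{FM}}(G, D_Y, X, Y) \;+\; \mathcal{L}_{\text{FM}}(F, D_X, Y, X). \]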
For the two cyclic reconstruction losses, we consider structural similarity [36] and an \(\mathbb {L}_1\) loss. Let \(X'= F(G(X))\) and \(Y'= G(F(Y))\) be the cyclically-reconstructed input images. Then:
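One way to write the two terms, as a sketch consistent with the weighting used in the final objective, is

\[ \mathcal{L}_{\text{SS}} \;=\; \big(1 - \text{MS-SSIM}(X', X)\big) + \big(1 - \text{MS-SSIM}(Y', Y)\big), \qquad \mathcal{L}_{\text{L1}} \;=\; \Vert X' - X\Vert_1 + \Vert Y' - Y\Vert_1, \]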
where we compute MS-SSIM without discorrelation.
Our total objective function with scheduled loss normalization (SLN) is:
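in sketch form,

\[ \mathcal{L}_{\text{total}} \;=\; \lambda_{\text{GAN}}\, \text{SLN}\big(\mathcal{L}_{\text{GAN}}\big) \;+\; \lambda_{\text{FM}}\, \text{SLN}\big(\mathcal{L}_{\text{FM}}\big) \;+\; \lambda_{\text{CYC}}\, \text{SLN}\big(\lambda_{\text{SS}}\, \mathcal{L}_{\text{SS}} + \lambda_{\text{L1}}\, \mathcal{L}_{\text{L1}}\big), \]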
with \(\lambda _{\text {GAN}} + \lambda _{\text {FM}} + \lambda _{\text {CYC}} = 1\), \(\lambda _{\text {SS}} + \lambda _{\text {L1}} = 1\), and all coefficients \(\ge 0\). We set \(\lambda _{\text {GAN}}=0.49\), \(\lambda _{\text {FM}}=0.21\), and \(\lambda _{\text {CYC}}=0.3\), with \(\lambda _{\text {SS}}=0.7\) and \(\lambda _{\text {L1}}=0.3\). Empirically, these values reduced mode collapse and worked across all datasets. For all training details, we refer the reader to our supplemental material.
4 Experiments
4.1 Toy Problem: Learning 2D Dot and Polygon Deformations
We created a challenging toy problem to evaluate the ability of our network design to learn shape- and texture-consistent deformation. We define two domains: the regular polygon domain X and its deformed equivalent Y (Fig. 3). Each example \(X_{s, h, d}\in X\) contains a centered regular polygon with \(s\in \{3\ldots 7\}\) sides, plus a deformed matrix of dots overlaid. The dot matrix is computed by taking a unit dot grid and transforming it via h, a random \(2\times 2\) matrix with Gaussian-distributed entries, and a displacement vector d, a Gaussian random vector in \(\mathbb {R}^2\). The corresponding example in Y is \(Y_{s, h, d}\), in which instead the polygon is transformed by h and the dot matrix remains regular. This construction forms a bijection from X to Y, and so the translation problem is well-posed.
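For reference, the sketch below generates one example pair in the spirit of this construction; the sampling scales, grid resolution, colors, and rendering choices are illustrative rather than the exact dataset parameters.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

def regular_polygon(sides, radius=0.25, center=(0.5, 0.5)):
    """Vertices of a centered regular polygon in the unit square."""
    angles = np.linspace(0.0, 2.0 * np.pi, sides, endpoint=False)
    return np.stack([np.cos(angles), np.sin(angles)], axis=1) * radius + center

def sample_pair(sides, rng):
    """One (X, Y) example: X deforms the dot grid, Y deforms the polygon."""
    h = np.eye(2) + rng.normal(scale=0.3, size=(2, 2))  # random 2x2 warp (illustrative scale)
    d = rng.normal(scale=0.1, size=2)                   # random displacement (illustrative scale)
    grid = np.stack(np.meshgrid(np.linspace(0.1, 0.9, 9),
                                np.linspace(0.1, 0.9, 9)), axis=-1).reshape(-1, 2)
    poly = regular_polygon(sides)
    warp = lambda p: (p - 0.5) @ h.T + 0.5 + d          # warp about the image centre
    return (warp(grid), poly), (grid, warp(poly))       # domain X, domain Y

def render(dots, poly, path):
    """Rasterize one example: a polygon over a dot matrix."""
    fig, ax = plt.subplots(figsize=(2, 2))
    ax.add_patch(Polygon(poly, closed=True, color="tab:blue"))
    ax.scatter(dots[:, 0], dots[:, 1], s=4, color="black")
    ax.set_xlim(0, 1); ax.set_ylim(0, 1); ax.axis("off")
    fig.savefig(path, bbox_inches="tight"); plt.close(fig)

rng = np.random.default_rng(0)
(x_dots, x_poly), (y_dots, y_poly) = sample_pair(sides=5, rng=rng)
render(x_dots, x_poly, "x.png"); render(y_dots, y_poly, "y.png")
```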
Learning a mapping from X to Y requires the network to use the large-scale cues present in the dot matrix to successfully deform the polygon, as local patches with a fixed image location cannot overcome the added displacement d. Table 2 shows that DiscoGAN is unable to learn the mapping in either direction, and produces an output that is close to the mean of the dataset (off-white). CycleGAN is able to learn only local deformation, which produces hue shifts towards the blue of the polygon when mapping from regular to deformed spaces, and which in most cases produces an undeformed dot matrix when mapping from deformed to regular spaces. In contrast, our approach is significantly more successful at learning the deformation, as the dilated discriminator is able to incorporate information from across the image.
Quantitative Comparison. As our output is a highly-deformed image, we estimate the learned transform parameters by sampling. We compute a Hausdorff distance between 500 point samples on the ground truth polygon and on the image of the generated polygon after translation: for finite sets of points X and Y, \(d(X, Y) = \max _{y\in Y}\min _{x\in X} ||x - y||\). We hand annotate 220 generated polygon boundaries for our network, sampled uniformly at random along the boundary. Samples exist in a unit square with bottom left corner at (0, 0).
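The distance itself is straightforward to compute; a minimal sketch (with placeholder point sets standing in for the boundary samples and annotations) follows.

```python
import numpy as np

def directed_hausdorff(X, Y):
    """d(X, Y) = max over y in Y of the distance from y to its nearest x in X,
    for finite point sets X of shape (n, 2) and Y of shape (m, 2)."""
    dists = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)  # (n, m) pairwise distances
    return dists.min(axis=0).max()

# Placeholder inputs: samples on the ground-truth boundary (X) and on the
# annotated generated-polygon boundary (Y).
gt_points = np.random.rand(500, 2)
gen_points = np.random.rand(220, 2)
print(directed_hausdorff(gt_points, gen_points))
```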
First, DiscoGAN fails to generate polygons at all, despite being able to reconstruct the original image. Second, for ‘regular to deformed’, CycleGAN fails to produce a polygon, whereas our approach produces average Hausdorff distance of \(0.20\pm 0.01\). Third, for ‘deformed to regular’, CycleGAN produces a polygon with distance of \(0.21\pm 0.04\), whereas our approach has distance of \(0.10\pm 0.03\). In the true dataset, note that regular polygons are centered, but CycleGAN only constructs polygons at the position of the original distorted polygon. Our network constructs a regular polygon at the center of the image as desired.
4.2 Real-World Datasets
We evaluate GANimorph on several image datasets. For human faces, we use the aligned version of the CelebFaces Attributes (CelebA) dataset [26], with 202,599 images.
Anime Faces. Previous works have noted that anime images are challenging for style transfer methods, since translating between photoreal and anime faces involves both shape and appearance changes. We create a large 966,777 image anime dataset crowdsourced from Danbooru [1]. The Danbooru dataset has a wide variety of styles from super-deformed chibi-style faces, to realistically-proportioned faces, to rough sketches. Since traditional face detectors yield poor results on drawn datasets, we ran the Animeface filter [29] on both datasets.
When translating humans to anime, we see an improvement in our approach for head pose and accessories such as glasses (Table 3, 3rd row, right), plus a larger degree of shape deformation such as reduced face vertical height. The final line of each group represents a particularly challenging example.
Doll Faces. Translating human faces to dolls provides an informative test case: both domains have similar photorealistic appearance, so the translation task focuses on shape more than texture. Similar to Morishita et al. [28], we extracted 13,336 images from the Flickr100m dataset [30] using specific doll manufacturers as keywords. Then, we extracted local binary patterns [31] using OpenCV [4] and used the Animeface filter for facial alignment [29].
Table 3, bottom, shows that our architecture handles local deformation and global shape change better than CycleGAN and DiscoGAN, while preserving local texture similarity. Either the shape is malformed (DiscoGAN), or the shape shows artifacts from the original image or unnatural skin texture (CycleGAN). Our method matches skintones from the CelebA dataset, while capturing the overall facial structure and hair color of the doll. For the more difficult doll-to-human example in the bottom right-hand corner, our transformation is not realistic, but it still creates more shape change than existing networks.
Pets in the Wild. To demonstrate our network on unaligned data, we evaluate on the Kaggle cat and dog dataset [9]. This contains 12,500 images of each species, across many animal breeds at varying scales, lighting conditions, poses, backgrounds, and occlusion factors.
When translating between cats and dogs (Table 4), the network is able to change both local features, such as the addition and removal of fur and whiskers, and the larger shape deformation required to fool the discriminator, such as growing a snout. Most errors in this domain come from the generator failing to separate the animal from the background, such as omitting the rear or tail of the animal. Sometimes the generator may fail to identify the animal at all.
We also translate between humans and cats. Table 5 demonstrates how our architecture handles large-scale translation between these two variable data distributions. Our failure cases are approximately the same as those of the cat-to-dog translation, with some promising results. Overall, we translate a surprising degree of shape deformation, even where we might not expect this to be possible.



4.3 Quantitative Study
To quantify GANimorph’s translation ability, we consider classification-based metrics to detect class change, e.g., whether a cat was successfully translated into a dog. Since there is no per-pixel ground truth for any of the real-world datasets in this task, we cannot use the fully convolutional network (FCN) score. Using the Inception Score [33] is uninformative, since simply outputting the original image would score highly.
Further, similar to adversarial examples, CycleGAN is able to convince many classification networks that the image is translated even though, to a human, it appears untranslated: all CycleGAN results from supplemental Table 3 convince both ResNet50 [12] and the traditional segmentation network of Zheng et al. [42], even though the images are not successfully translated.
However, semantic segmentation networks that use dilated convolutions can distinguish CycleGAN’s ‘adversarial examples’ from true translations, such as DeepLabV3 [5]. As such, we run each test image through the DeepLabV3 network to generate a segmentation mask. Then, we compute the percent of non-background-labeled pixels per class, and average across the test set (Table 6). Our approach is able to more fully translate the image in the eyes of the classification network, with images also appearing translated to a human (Table 7).
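As an illustration, this evaluation can be scripted with an off-the-shelf DeepLabV3 model; the specific torchvision backbone, preprocessing, and VOC-style class indices (8 for cat, 12 for dog) below are assumptions of this sketch rather than details fixed by the paper.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained DeepLabV3; depending on the torchvision version this may instead
# be deeplabv3_resnet101(pretrained=True).
model = models.segmentation.deeplabv3_resnet101(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def class_pixel_fraction(image_path, class_index):
    """Fraction of pixels DeepLabV3 assigns to `class_index` in one image."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))["out"]  # [1, 21, H, W]
    labels = logits.argmax(dim=1)                            # per-pixel class map
    return (labels == class_index).float().mean().item()

# e.g. average class_pixel_fraction(path, 12) over all cat-to-dog test outputs.
```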
4.4 Ablation Study
We use these quantitative settings for an ablation study (Table 6). First, we removed MS-SSIM to leave only L1 (\(\mathcal {L}_{\text {SS}}\), Eq. 7), which caused our network to mode collapse. Next, we removed the feature matching loss, but this decreased both our segmentation consistency and the stability of the network. Then, we replaced our dilated discriminator with a patch discriminator. However, the patch discriminator cannot use global context, and so the network confuses facial layouts. Finally, we replaced our dilated discriminator with a fully connected discriminator. We see that our generator architecture and loss function allow our network to outperform DiscoGAN even with the same type of discriminator (fully connected).
Qualitative ablation study results are shown in Table 8. The patch-based discriminator translates texture well, but fails to create globally-coherent images. Decreasing the information flow, by using a fully-connected discriminator or by removing the feature matching loss, still yields better results; maximizing the information flow ultimately leads to the best results (last column). Using L1 instead of a perceptual cyclic loss term leads to mode collapse.
5 Discussion
There exists a trade-off in the relative weighting of the cyclic loss. A higher cyclic loss weight \(\lambda _{\text {CYC}}\) prevents significant shape change and weakens the generator’s ability to adapt to the discriminator, while setting it too low causes the network to collapse and prevents any meaningful mapping between the domains; for instance, the network can easily hallucinate objects in the other domain if the reconstruction loss is too weak. As such, an architecture that allowed modifying this weight at test time would prove valuable in giving the user control over how much deformation to allow.
One counter-intuitive result we discovered is that in domains with little variety, the mappings can lose semantic meaning (see supplemental material). One example of a failed mapping was from CelebA to bitmoji faces [34, 37]. Many attributes were lost, including pose, and the mapping fell back to a pseudo-steganographic encoding of the faces [7]. For example, background information would be encoded in the color gradients of hair styles, and minor variations in the width of the eyes were used similarly. As such, the cyclic loss limits the ability of the network to abstract relevant details. Approaches that map the variance within each dataset, similar to Benaim et al. [3], may prove an effective means of ensuring that the variance in either domain is maintained. We found that such a term over-constrained the amount of shape change in the target domain; however, this may be worth further investigation.
Finally, trying to learn each domain simultaneously may also prove an effective way to increase the accuracy of image translation. Doing so allows the discriminator(s) and generator to learn how to better determine and transform regions of interest for either network. Better results might be obtained by mapping between multiple domains using parameter-efficient networks (e.g., StarGAN [6]).
References
Anonymous, Branwen, G., Gokaslan, A.: Danbooru 2017: a large-scale crowdsourced and tagged anime illustration dataset, April 2017. https://www.gwern.net/Danbooru2017
Bau, D., Zhou, B., Khosla, A., Oliva, A., Torralba, A.: Network dissection: quantifying interpretability of deep visual representations. In: Computer Vision and Pattern Recognition (2017)
Benaim, S., Wolf, L.: One-sided unsupervised domain mapping. In: Advances in Neural Information Processing Systems (2017)
Bradski, G.: The OpenCV library. Dr. Dobb’s J. Softw. Tools 120, 122–125 (2000)
Chen, L.C., Papandreou, G., Schroff, F., Adam, H.: Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587 (2017)
Choi, Y., Choi, M., Kim, M., Ha, J.W., Kim, S., Choo, J.: StarGAN: unified generative adversarial networks for multi-domain image-to-image translation. In: Computer Vision and Pattern Recognition (2018)
Chu, C., Zhmoginov, A., Sandler, M.: CycleGAN: a master of steganography. arXiv preprint arXiv:1712.02950 (2017)
Dumoulin, V., Shlens, J., Kudlur, M.: A learned representation for artistic style. In: International Conference on Learning Representations (2017)
Elson, J., Douceur, J., Howell, J., Saul, J.: Asirra: a CAPTCHA that exploits interest-aligned manual image categorization. In: Proceedings of the 14th ACM Conference on Computer and Communications Security, CCS 2007 (2007)
Gatys, L.A., Ecker, A.S., Bethge, M.: Image style transfer using convolutional neural networks. In: Computer Vision and Pattern Recognition (2016)
Goodfellow, I., et al.: Generative adversarial nets. In: Advances in Neural Information Processing Systems (2014)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Computer Vision and Pattern Recognition (2016)
He, M., Liao, J., Yuan, L., Sander, P.V.: Neural color transfer between images. arXiv preprint arXiv:1710.00756 (2017)
Hertzmann, A., Jacobs, C.E., Oliver, N., Curless, B., Salesin, D.H.: Image analogies. In: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. ACM (2001)
Huang, X., Belongie, S.J.: Arbitrary style transfer in real-time with adaptive instance normalization. In: International Conference on Computer Vision (2017)
Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. In: International Conference on Machine Learning (2015)
Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Computer Vision and Pattern Recognition (2017)
Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9906, pp. 694–711. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46475-6_43
Kim, T., Cha, M., Kim, H., Lee, J.K., Kim, J.: Learning to discover cross-domain relations with generative adversarial networks. In: International Conference on Machine Learning (2017)
Laffont, P.Y., Ren, Z., Tao, X., Qian, C., Hays, J.: Transient attributes for high-level understanding and editing of outdoor scenes. ACM Trans. Graph. (TOG) 33(4), 149 (2014)
Li, C., Wand, M.: Precomputed real-time texture synthesis with markovian generative adversarial networks. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9907, pp. 702–716. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46487-9_43
Liang, X., Zhang, H., Xing, E.P.: Generative semantic manipulation with contrasting GAN. arXiv preprint arXiv:1708.00315 (2017)
Liao, J., Yao, Y., Yuan, L., Hua, G., Kang, S.B.: Visual attribute transfer through deep image analogy. ACM Trans. Graph. (2017)
Liu, M.Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. In: Advances in Neural Information Processing Systems (2017)
Liu, M.Y., Tuzel, O.: Coupled generative adversarial networks. In: Advances in Neural Information Processing Systems (2016)
Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: International Conference on Computer Vision (2015)
Ma, S., Fu, J., Chen, C.W., Mei, T.: DA-GAN: instance-level image translation by deep attention generative adversarial networks. In: Conference on Computer Vision and Pattern Recognition (2018)
Morishita, M., Ueno, M., Isahara, H.: Classification of doll image dataset based on human experts and computational methods: a comparative analysis. In: International Conference On Advanced Informatics: Concepts, Theory And Application (ICAICTA) (2016)
Nagadomi: lbpcascade_animeface (2017). https://github.com/nagadomi/lbpcascade_animeface
Ni, K., et al.: Large-scale deep learning on the YFCC100M dataset. arXiv preprint arXiv:1502.03409 (2015)
Ojala, T., Pietikainen, M., Harwood, D.: Performance evaluation of texture measures with classification based on kullback discrimination of distributions. In: International Conference on Pattern Recognition (1994)
Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv e-prints, November 2015
Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X.: Improved techniques for training GANs. In: Advances in Neural Information Processing Systems (2016)
Taigman, Y., Polyak, A., Wolf, L.: Unsupervised cross-domain image generation. arXiv preprint arXiv:1611.02200 (2016)
Wang, T.C., Liu, M.Y., Zhu, J.Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional GANs. In: Computer Vision and Pattern Recognition (2018)
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. Trans. Image Process. 13(4), 600–612 (2004)
Wolf, L., Taigman, Y., Polyak, A.: Unsupervised creation of parameterized avatars. In: International Conference on Computer Vision (2017)
Yi, Z., Zhang, H.R., Tan, P., Gong, M.: DualGAN: unsupervised dual learning for image-to-image translation. In: International Conference on Computer Vision (2017)
Yu, F., Koltun, V.: Multi-scale context aggregation by dilated convolutions. In: International Conference on Learning Representations (2015)
Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8689, pp. 818–833. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10590-1_53
Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 3(1), 47–57 (2017)
Zheng, S., et al.: Conditional random fields as recurrent neural networks. In: International Conference on Computer Vision (2015)
Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: International Conference on Computer Vision (2017)
Acknowledgement
Kwang In Kim thanks RCUK EP/M023281/1.