
InvolutionGAN: lightweight GAN with involution for unsupervised image-to-image translation

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

Unsupervised image-to-image translation aims to learn a mapping that translates images from a source domain to a target domain without paired supervision. Current state-of-the-art generative adversarial network (GAN) models rely on operators that are costly in time and memory to produce impressive translations, and this high computational cost restricts further research on and deployment of such models. To address the problem, we build our GAN around a lightweight operator named involution, which extracts both local features and long-range dependencies across channels. We also observe that previous works pay little attention to the feature-level discrepancy between original and reconstructed images, although such information is crucial to the quality of the synthesized images. We therefore develop a novel loss term that measures a learned perceptual similarity distance to regularize training. Qualitative and quantitative results on several prevailing benchmarks demonstrate that our model, dubbed InvolutionGAN, produces competitive images while reducing computational cost by up to 91.9%. Extensive ablation studies further identify the best-performing architecture and verify that each component we introduce is effective.
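The involution operator adopted above is the one introduced by Li et al. (2021): instead of convolution's spatially shared, channel-specific kernels, it predicts a small kernel at every spatial position from that position's own feature vector and shares it across groups of channels, which is how it captures position-dependent, long-range interactions at low cost. The following is a minimal PyTorch sketch of that idea, not the paper's released code; the defaults (kernel size 7, 16 groups, reduction ratio 4) follow the involution paper, and the class name `Involution2d` is our own.

```python
import torch
import torch.nn as nn


class Involution2d(nn.Module):
    """Minimal involution sketch (after Li et al., 2021).

    A K x K kernel is predicted at every output position from the input
    features there, then shared across C/G-channel groups -- the inverse
    of convolution's spatially shared, channel-specific kernels.
    Requires `groups` to divide `channels` and an odd `kernel_size`.
    """

    def __init__(self, channels, kernel_size=7, stride=1, groups=16, reduction=4):
        super().__init__()
        self.k, self.s, self.g = kernel_size, stride, groups
        # Bottleneck that maps each position's features to K*K weights per group.
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.BatchNorm2d(channels // reduction),
            nn.ReLU(inplace=True),
        )
        self.span = nn.Conv2d(channels // reduction, kernel_size ** 2 * groups, 1)
        self.pool = nn.AvgPool2d(stride, stride) if stride > 1 else nn.Identity()
        self.unfold = nn.Unfold(kernel_size, padding=(kernel_size - 1) // 2, stride=stride)

    def forward(self, x):
        b, c, h, w = x.shape
        h_out, w_out = h // self.s, w // self.s
        # Predict one K*K kernel per output position and per channel group.
        kernel = self.span(self.reduce(self.pool(x)))                 # (B, K*K*G, H', W')
        kernel = kernel.view(b, self.g, self.k ** 2, h_out, w_out).unsqueeze(2)
        # Gather the K*K neighbourhood of every position, split into channel groups.
        patches = self.unfold(x).view(b, self.g, c // self.g, self.k ** 2, h_out, w_out)
        # Multiply-accumulate over each neighbourhood.
        out = (kernel * patches).sum(dim=3)                           # (B, G, C/G, H', W')
        return out.view(b, c, h_out, w_out)
```

The abstract describes the new loss only as a learned perceptual similarity distance between an image and its reconstruction. One plausible instantiation is an LPIPS-style distance over fixed, pretrained VGG-16 features, sketched below; the backbone, the layer set `(3, 8, 15, 22)` (the relu1_2/2_2/3_3/4_3 outputs), and the uniform layer weighting are our assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models


class PerceptualReconstructionLoss(nn.Module):
    """LPIPS-style feature-space distance between an image and its
    reconstruction, computed with a frozen VGG-16 (assumed backbone;
    requires torchvision >= 0.13 for the weights enum)."""

    def __init__(self, layer_ids=(3, 8, 15, 22)):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg, self.layer_ids = vgg, set(layer_ids)
        # ImageNet statistics; inputs are assumed to be RGB in [0, 1].
        self.register_buffer("mean", torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
        self.register_buffer("std", torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))

    def _features(self, x):
        x = (x - self.mean) / self.std
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                # Unit-normalise along channels before comparing, as LPIPS does.
                feats.append(x / (x.norm(dim=1, keepdim=True) + 1e-8))
        return feats

    def forward(self, real, reconstructed):
        loss = real.new_zeros(())
        for f_r, f_rec in zip(self._features(real), self._features(reconstructed)):
            loss = loss + (f_r - f_rec).pow(2).mean()
        return loss
```

In a cycle-consistent pipeline such a term would sit beside the usual pixel-level losses, e.g. L_total = L_adv + λ_cyc · ||x − F(G(x))||₁ + λ_perc · L_perc(x, F(G(x))), with the λ weights chosen by validation.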


Data Availability Statement

All data and results included in this study are available from the corresponding author upon reasonable request.


Author information


Corresponding author

Correspondence to Qiuxia Wu.

Ethics declarations

Conflict of interest

Haipeng Deng, Qiuxia Wu, Han Huang, Xiaowei Yang and Zhiyong Wang declare that they have no conflicts of interest that could have influenced the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Deng, H., Wu, Q., Huang, H. et al. InvolutionGAN: lightweight GAN with involution for unsupervised image-to-image translation. Neural Comput & Applic 35, 16593–16605 (2023). https://doi.org/10.1007/s00521-023-08530-z
