Unpaired image to image transformation via informative coupled generative adversarial networks

  • Research Article
  • Published in: Frontiers of Computer Science

Abstract

We consider image transformation problems whose objective is to translate images from a source domain to a target domain. The problem is challenging because it is difficult both to preserve the key properties of the source images and to make the details of the target images as distinguishable as possible. To solve this problem, we propose an informative coupled generative adversarial network (ICoGAN). For each domain, an adversarial generator-and-discriminator network is constructed. We make an approximately-shared latent space assumption via a mutual information mechanism, which enables the algorithm to learn representations of both domains in an unsupervised setting and to transfer the key properties of images from source to target. To further enhance performance, we combine a weight-sharing constraint between the two subnetworks with perceptual losses at different levels, extracted from the intermediate layers of the networks. With quantitative and visual results presented on the tasks of edge-to-photo transformation, face attribute transfer, and image inpainting, we demonstrate ICoGAN's effectiveness compared with other state-of-the-art algorithms.
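The abstract names several loss terms that are combined: per-domain adversarial losses, a mutual-information term supporting the approximately-shared latent space assumption, and perceptual losses drawn from intermediate layers. As a rough illustration of how such an objective is typically assembled, here is a minimal sketch; the function name, the weighting factors `lambda_mi` and `lambda_p`, and all numeric values are hypothetical illustrations, not the paper's actual formulation or settings.

```python
# Toy sketch of a combined objective in the spirit of the abstract.
# All names and weights here are assumptions for illustration only.

def icogan_objective(adv_loss_a, adv_loss_b, mi_lower_bound,
                     perceptual_losses, lambda_mi=1.0, lambda_p=10.0):
    """Combine the loss terms named in the abstract.

    adv_loss_a, adv_loss_b: adversarial losses of the two
        generator-and-discriminator subnetworks (one per domain).
    mi_lower_bound: a mutual-information lower bound between the two
        latent representations; it is maximized, so it enters with a
        minus sign in a minimized objective.
    perceptual_losses: losses extracted from several intermediate
        layers of the networks.
    """
    adversarial = adv_loss_a + adv_loss_b
    perceptual = sum(perceptual_losses)
    return adversarial - lambda_mi * mi_lower_bound + lambda_p * perceptual
```

In practice each term would be computed from network outputs during training; the sketch only shows how the weighted sum balances adversarial realism, shared-latent informativeness, and multi-level perceptual similarity.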



Acknowledgements

The authors are grateful for the support of the National Key R&D Program of China (2018YFB1600600), the Natural Science Foundation of Liaoning Province (2019MS045), the Open Fund of the Key Laboratory of Electronic Equipment Structure Design (Ministry of Education) at Xidian University (EESD1901), the Fundamental Research Funds for the Central Universities (DUT19JC44), and the Project of the Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education at Jilin University (93K172019K10).

Author information

Correspondence to Liang Sun.

Additional information

Hongwei Ge received BS and MS degrees in mathematics from Jilin University, China, and the PhD degree in computer application technology from Jilin University in 2006. He is currently a professor and a vice dean of the College of Computer Science and Technology, Dalian University of Technology, China. His research interests are machine learning, computational intelligence, optimization and modeling, computer vision, and deep learning. He has published more than 80 papers in these areas. His research has been featured in IEEE Transactions on Cybernetics, IEEE Transactions on Evolutionary Computation, IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans, Pattern Recognition, Information Sciences, etc.

Yuxuan Han received the BS degree from Zhengzhou University, China in 2016, and the MS degree from the College of Computer Science and Technology, Dalian University of Technology, China. Her main research interests lie in computational intelligence and machine learning methods.

Wenjing Kang received the BS degree from Northeast University, China in 2016, and the MS degree from the College of Computer Science and Technology, Dalian University of Technology, China. Her main research interests are deep learning and machine learning applications such as computer vision and large-scale optimization.

Liang Sun received the BE degree in computer science and technology from Xidian University, China, and the MS degree in computer application technology from Jilin University, China, in 2003 and 2006, respectively. During 2006–2009, he was a DE candidate at the College of Computer Science and Technology, Jilin University, China. During 2009–2012, he was a DE candidate at Kochi University of Technology (KUT), Japan, as an international student under a cooperation program between KUT and Jilin University. He received dual PhD degrees from KUT and Jilin University in March 2012 and June 2012, respectively. He is currently with the College of Computer Science and Technology, Dalian University of Technology, Dalian, China. His main research interests lie in machine learning and deep learning.



Cite this article

Ge, H., Han, Y., Kang, W. et al. Unpaired image to image transformation via informative coupled generative adversarial networks. Front. Comput. Sci. 15, 154326 (2021). https://doi.org/10.1007/s11704-020-9002-7
