
Arbitrary Style Transfer with Adaptive Channel Network

  • Conference paper
  • MultiMedia Modeling (MMM 2022)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13141)

Abstract

Arbitrary style transfer aims to produce a new stylized image by applying arbitrary artistic style elements to an original content image. Recent arbitrary style transfer algorithms struggle to recover sufficient content information while preserving good stylization characteristics; balancing style information against content information is the main difficulty. Moreover, these algorithms tend to produce blurry blocks, color spots, and other defects in the image. In this paper, we propose an arbitrary style transfer algorithm based on an adaptive channel network (AdaCNet), which flexibly selects specific channels for style conversion when generating stylized images. We introduce a content reconstruction loss that preserves local structural invariance, and a new style consistency loss that improves both the stylization effect and the style generalization ability. Experimental results show that, compared with other state-of-the-art methods, our algorithm maintains the balance between style and content information, eliminates defects such as blurry blocks, and also performs well on style generalization and on transferring high-resolution images.
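The abstract only sketches the channel-selection mechanism; the snippet below is an illustrative guess at the general idea, not the authors' AdaCNet implementation. It combines Huang and Belongie's adaptive instance normalization (AdaIN) with a hypothetical per-channel gate (`gate`, a name introduced here for illustration) that decides how strongly each feature channel is stylized:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization: align each channel's mean and
    std of the content features to those of the style features.
    content, style: feature maps of shape (C, H, W)."""
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mu) / (c_std + eps) + s_mu

def channel_gated_transfer(content, style, gate):
    """Blend stylized and original features per channel.
    gate: shape (C,), values in [0, 1]; 1 stylizes a channel fully,
    0 leaves the content channel untouched."""
    g = gate[:, None, None]  # broadcast the per-channel gate over H, W
    return g * adain(content, style) + (1.0 - g) * content
```

With all gates at 0 the content features pass through unchanged, and with all gates at 1 this reduces to plain AdaIN; intermediate or learned gates would let specific channels carry style while others preserve content structure, which is the balance the paper targets.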



Author information

Correspondence to Yanlin Geng.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, Y., Geng, Y. (2022). Arbitrary Style Transfer with Adaptive Channel Network. In: Þór Jónsson, B., et al. MultiMedia Modeling. MMM 2022. Lecture Notes in Computer Science, vol 13141. Springer, Cham. https://doi.org/10.1007/978-3-030-98358-1_38


  • DOI: https://doi.org/10.1007/978-3-030-98358-1_38


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-98357-4

  • Online ISBN: 978-3-030-98358-1

  • eBook Packages: Computer Science (R0)
