
A Comparative Study of CNN- and Transformer-Based Visual Style Transfer

  • Regular Paper
  • Journal of Computer Science and Technology

Abstract

Vision Transformers have shown impressive performance on image classification tasks. Observing that most existing visual style transfer (VST) algorithms are based on texture-biased convolutional neural networks (CNNs), we ask whether the shape-biased Vision Transformer can perform style transfer as CNNs do. In this work, we compare and analyze the shape bias of CNN- and transformer-based models from the perspective of VST tasks. For comprehensive comparisons, we propose three kinds of transformer-based visual style transfer (Tr-VST) methods: Tr-NST for optimization-based VST, Tr-WCT for reconstruction-based VST, and Tr-AdaIN for perceptual-based VST. By adapting these three mainstream VST approaches to the transformer pipeline, we show that transformer-based models pre-trained on ImageNet are not well suited to style transfer: owing to their strong shape bias, the Tr-VST methods fail to render style patterns. We further analyze the shape bias by considering the influence of the learned parameters and the structure design. Results show that, with proper style supervision, the transformer can learn texture-biased features similar to those of CNNs. With the reduced shape bias in the transformer encoder, Tr-VST methods can generate higher-quality results than state-of-the-art VST methods.
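
For readers unfamiliar with the perceptual-based branch, the sketch below illustrates the core adaptive instance normalization (AdaIN) operation of Huang and Belongie that Tr-AdaIN adapts to transformer features. This is a minimal illustration, not the authors' implementation: the tensor shape (batch, channels, tokens) and the function name adain are assumptions made for this example, standing in for patch-token features produced by a transformer encoder.

```python
import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """Align per-channel statistics of content features to style features.

    Both inputs are assumed (for this sketch) to have shape
    (batch, channels, tokens), e.g. patch-token features from a
    transformer encoder with the token axis flattened last.
    """
    # Channel-wise first- and second-order statistics over tokens.
    c_mean = content_feat.mean(dim=2, keepdim=True)
    c_std = content_feat.std(dim=2, keepdim=True) + eps
    s_mean = style_feat.mean(dim=2, keepdim=True)
    s_std = style_feat.std(dim=2, keepdim=True) + eps
    # Normalize away the content statistics, then impose the style ones;
    # a decoder would map the result back to image space.
    return s_std * (content_feat - c_mean) / c_std + s_mean
```

Whether such channel statistics carry style information depends on the encoder: the paper's central observation is that with a shape-biased, ImageNet-pre-trained transformer these statistics encode little texture, so the same operation that works on VGG features fails to render style patterns until the encoder is re-trained with style supervision.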



Author information

Corresponding author

Correspondence to Fan Tang.

Supplementary Information

ESM 1 (PDF, 479 KB)


About this article


Cite this article

Wei, HP., Deng, YY., Tang, F. et al. A Comparative Study of CNN- and Transformer-Based Visual Style Transfer. J. Comput. Sci. Technol. 37, 601–614 (2022). https://doi.org/10.1007/s11390-022-2140-7

