
Accelerate neural style transfer with super-resolution

Published in: Multimedia Tools and Applications

Abstract

Style transfer is the task of migrating a style from one image to another. Recently, fully convolutional networks (FCNs) have been adopted to create stylized images, making real-time style transfer possible on advanced GPUs. However, memory usage and processing time remain problematic for high-resolution images. In this work, we analyze the architecture of the style transfer network and divide it into three parts: feature extraction, style transfer, and image reconstruction. We then propose a novel way to accelerate the style transfer operation and reduce run-time memory usage with a super-resolution style transfer network (SRSTN), which generates super-resolution stylized images. Compared with other style transfer networks, SRSTN produces images of competitive quality at higher speed and with lower memory usage.
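The three-stage decomposition described above can be sketched as follows. This is a minimal illustration of the general idea (run the expensive style-transfer stage at reduced resolution, then reconstruct at a higher resolution), not the paper's actual SRSTN layers: the function names, the strided downsampling, the per-pixel "style" transform, and the nearest-neighbor upsampling are all hypothetical stand-ins.

```python
import numpy as np

def extract_features(image):
    # Illustrative stand-in for the feature-extraction stage:
    # strided sampling that halves the spatial resolution.
    return image[::2, ::2, :]

def transfer_style(features):
    # Placeholder for the style-transfer stage: any transform
    # applied at the reduced (and therefore cheap) resolution.
    return 1.0 - features

def reconstruct(features, scale=4):
    # Stand-in for image reconstruction at super-resolution:
    # nearest-neighbor upsampling by `scale` in each spatial axis.
    return features.repeat(scale, axis=0).repeat(scale, axis=1)

# A 256x256 RGB input is stylized at 128x128, then reconstructed
# at 512x512 -- the costly middle stage touches 1/4 of the pixels.
image = np.random.rand(256, 256, 3)
features = extract_features(image)   # (128, 128, 3)
stylized = transfer_style(features)  # (128, 128, 3)
output = reconstruct(stylized)       # (512, 512, 3)
print(output.shape)
```

The point of the arrangement is that the middle stage, which dominates run time and memory in FCN-based style transfer, operates on a downscaled feature map, while the final stage recovers (and here exceeds) the input resolution.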



Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61372177).

Author information

Correspondence to Fuqiang Zhou.



About this article


Cite this article

Li, Z., Zhou, F., Yang, L. et al. Accelerate neural style transfer with super-resolution. Multimed Tools Appl 79, 4347–4364 (2020). https://doi.org/10.1007/s11042-018-6929-x
