Abstract
The general framework for fast universal style transfer consists of an autoencoder and a feature transformation at the bottleneck. We propose a new transformation that iteratively stylizes features with analytical gradient descent (our implementation is open-sourced at https://github.com/chiutaiyin/Iterative-feature-transformation-for-style-transfer). Experiments show this transformation is advantageous in part because it is fast. With control knobs that balance content preservation against the strength of the style effect, we also show this method can switch between artistic and photo-realistic style transfer and reduce distortion and artifacts. Finally, we show it can be used for applications requiring spatial control and multiple-style transfer.
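To make the idea of iterative feature stylization concrete, below is a minimal NumPy sketch of stylizing bottleneck features with a few analytically derived gradient steps: the features are pushed toward the style's second-order (covariance) statistics while staying close to the content features. The specific loss, fixed step size, weights, and the channels-by-pixels feature layout are illustrative assumptions, not the authors' exact update rule.

```python
import numpy as np

def iterative_stylization(Fc, Fs, n_iter=3, step=1e-3, content_weight=1.0):
    """Sketch: move content features Fc (C x N) toward the second-order
    statistics of style features Fs (C x M) with a few analytic gradient steps.
    Illustrative loss: ||F F^T / N - Gs||_F^2 + content_weight * ||F - Fc||_F^2."""
    mu_c = Fc.mean(axis=1, keepdims=True)
    mu_s = Fs.mean(axis=1, keepdims=True)
    Fc0 = Fc - mu_c                          # centered content features
    Fs0 = Fs - mu_s
    Gs = Fs0 @ Fs0.T / Fs0.shape[1]          # style covariance (C x C)
    F, N = Fc0.copy(), Fc0.shape[1]
    for _ in range(n_iter):
        grad_style = (4.0 / N) * (F @ F.T / N - Gs) @ F   # gradient of the covariance-matching term
        grad_content = 2.0 * (F - Fc0)                    # gradient of the content-proximity term
        F -= step * (grad_style + content_weight * grad_content)
    return F + mu_s                          # re-center on the style mean

# Toy usage with random "features" (C = 8 channels, N/M spatial positions).
Fc = np.random.randn(8, 100)
Fs = 2.0 * np.random.randn(8, 120)
F_out = iterative_stylization(Fc, Fs)
```

Raising `content_weight` favors content preservation, while lowering it (or taking more iterations) strengthens the style effect; this is the kind of trade-off the control knobs in the abstract refer to.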
Notes
1. Previous analysis [3] shows that the feature \(\mathbf {F}_{wct}\) obtained by applying WCT to \(\mathbf {F}_{N,c}\) and \(\mathbf {F}_{N,s}\) makes the style loss in Eq. 4 go to zero, and hence could serve as an approximate solution. However, WCT does not consider the balance between the soft proximity loss and the style loss (a minimal sketch of WCT is given after these notes).
2. We found \(n_{iter}=3\) is sufficient for convergence, with little difference from \(n_{iter}=2\).
3.
4. While a fancy autoencoder such as WCT2 [25], which uses wavelet pooling, can further prevent distortion, that is beyond the scope of this paper.
5. Our aim was to capture the distortion level caused by different transformations. While a user study can reflect aesthetics, it is hard for users to notice all distortions in an image, and the score scale can only be coarse-grained (e.g., 0 = no distortion, 5 = worst distortion). This motivated our choice of quantitative metrics.
6. We use the MATLAB implementation, setting the luminance, contrast, and structural exponents to 1 and the regularization constants to \(0.01^2\), \(0.03^2\), and \(0.03^2/2\).
7. We adopt the official implementation with default hyper-parameters, which computes phase congruency with Kovesi’s method and log-Gabor filters, and the gradient magnitude with the Scharr operator.
8. Due to limited space and a similar trend of linear transition from one style to the other, we show results from AdaIN and Avatar-Net in the Supplementary Materials.
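As referenced in Note 1, WCT produces features whose channel covariance matches that of the style features, which drives the style loss to zero. For reference, below is a minimal NumPy sketch of the standard whitening-coloring transform; the eps regularization and the C x N feature layout are assumptions made here for numerical stability and clarity.

```python
import numpy as np

def wct(Fc, Fs, eps=1e-5):
    """Whitening-coloring transform: match the channel covariance of content
    features Fc (C x N) to that of style features Fs (C x M)."""
    mc, ms = Fc.mean(axis=1, keepdims=True), Fs.mean(axis=1, keepdims=True)
    Fc_, Fs_ = Fc - mc, Fs - ms
    # Content covariance and its inverse square root (whitening operator).
    Cc = Fc_ @ Fc_.T / (Fc_.shape[1] - 1) + eps * np.eye(Fc.shape[0])
    dc, Ec = np.linalg.eigh(Cc)
    F_white = Ec @ np.diag(dc ** -0.5) @ Ec.T @ Fc_
    # Style covariance and its square root (coloring operator).
    Cs = Fs_ @ Fs_.T / (Fs_.shape[1] - 1) + eps * np.eye(Fs.shape[0])
    ds, Es = np.linalg.eigh(Cs)
    return Es @ np.diag(ds ** 0.5) @ Es.T @ F_white + ms

# Toy usage: match the covariance of random "content" features to a "style".
F_wct = wct(np.random.randn(8, 100), 3.0 * np.random.randn(8, 120))
```

Because WCT imposes the style statistics exactly, it has no knob for trading off the proximity to the content features, which is the limitation Note 1 points out.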
References
1. Champandard, A.J.: Semantic style transfer and turning two-bit doodles into fine artworks. arXiv preprint arXiv:1603.01768 (2016)
2. Chen, T.Q., Schmidt, M.: Fast patch-based style transfer of arbitrary style. arXiv preprint arXiv:1612.04337 (2016)
3. Chiu, T.Y.: Understanding generalized whitening and coloring transform for universal style transfer. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 4452–4460 (2019)
4. Dumoulin, V., Shlens, J., Kudlur, M.: A learned representation for artistic style. In: 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, 24–26 April 2017, Conference Track Proceedings. OpenReview.net (2017). https://openreview.net/forum?id=BJO-BuT1g
5. Gatys, L.A., Ecker, A.S., Bethge, M.: Image style transfer using convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2414–2423 (2016)
6. Gatys, L.A., Ecker, A.S., Bethge, M., Hertzmann, A., Shechtman, E.: Controlling perceptual factors in neural style transfer. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3985–3993 (2017)
7. Ghiasi, G., Lee, H., Kudlur, M., Dumoulin, V., Shlens, J.: Exploring the structure of a real-time, arbitrary neural artistic stylization network. In: Kim, T.K., Zafeiriou, S., Brostow, G., Mikolajczyk, K. (eds.) Proceedings of the British Machine Vision Conference (BMVC), pp. 114.1–114.12. BMVA Press, September 2017. http://doi.org/10.5244/C.31.114
8. Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017)
9. Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9906, pp. 694–711. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46475-6_43
10. Li, C., Wand, M.: Combining Markov random fields and convolutional neural networks for image synthesis. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2479–2486 (2016)
11. Li, C., Wand, M.: Precomputed real-time texture synthesis with Markovian generative adversarial networks. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9907, pp. 702–716. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46487-9_43
12. Li, P., Zhao, L., Xu, D., Lu, D.: Optimal transport of deep feature for image style transfer. In: Proceedings of the 2019 4th International Conference on Multimedia Systems and Signal Processing, pp. 167–171 (2019)
13. Li, X., Liu, S., Kautz, J., Yang, M.H.: Learning linear transformations for fast image and video style transfer. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3809–3817 (2019)
14. Li, Y., Fang, C., Yang, J., Wang, Z., Lu, X., Yang, M.H.: Diversified texture synthesis with feed-forward networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3920–3928 (2017)
15. Li, Y., Fang, C., Yang, J., Wang, Z., Lu, X., Yang, M.H.: Universal style transfer via feature transforms. In: Guyon, I., et al. (eds.) Advances in Neural Information Processing Systems, pp. 386–396 (2017)
16. Li, Y., Liu, M.-Y., Li, X., Yang, M.-H., Kautz, J.: A closed-form solution to photorealistic image stylization. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11207, pp. 468–483. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01219-9_28
17. Lu, M., Zhao, H., Yao, A., Chen, Y., Xu, F., Zhang, L.: A closed-form solution to universal style transfer. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 5952–5961 (2019)
18. Luan, F., Paris, S., Shechtman, E., Bala, K.: Deep photo style transfer. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4990–4998 (2017)
19. Mroueh, Y.: Wasserstein style transfer. arXiv preprint arXiv:1905.12828 (2019)
20. Risser, E., Wilmot, P., Barnes, C.: Stable and controllable neural texture synthesis and style transfer using histogram losses. arXiv preprint arXiv:1701.08893 (2017)
21. Sheng, L., Lin, Z., Shao, J., Wang, X.: Avatar-Net: multi-scale zero-shot style transfer by feature decoration. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8242–8250 (2018)
22. Ulyanov, D., Lebedev, V., Vedaldi, A., Lempitsky, V.S.: Texture networks: feed-forward synthesis of textures and stylized images. In: Balcan, M.F., Weinberger, K.Q. (eds.) ICML, vol. 1, p. 4 (2016)
23. Ulyanov, D., Vedaldi, A., Lempitsky, V.: Improved texture networks: maximizing quality and diversity in feed-forward stylization and texture synthesis. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6924–6932 (2017)
24. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)
25. Yoo, J., Uh, Y., Chun, S., Kang, B., Ha, J.W.: Photorealistic style transfer via wavelet transforms. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 9036–9045 (2019)
26. Zhang, H., Dana, K.: Multi-style generative network for real-time transfer. In: Leal-Taixé, L., Roth, S. (eds.) ECCV 2018. LNCS, vol. 11132, pp. 349–365. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-11018-5_32
27. Zhang, L., Zhang, L., Mou, X., Zhang, D.: FSIM: a feature similarity index for image quality assessment. IEEE Trans. Image Process. 20(8), 2378–2386 (2011)
Cite this paper
Chiu, T.Y., Gurari, D. (2020). Iterative Feature Transformation for Fast and Versatile Universal Style Transfer. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds.) Computer Vision – ECCV 2020. Lecture Notes in Computer Science, vol. 12364. Springer, Cham. https://doi.org/10.1007/978-3-030-58529-7_11
DOI: https://doi.org/10.1007/978-3-030-58529-7_11
Print ISBN: 978-3-030-58528-0
Online ISBN: 978-3-030-58529-7