Abstract
A style-transfer generative adversarial network based on Markov random fields is proposed in this paper. A generator network produces a new image from the original image; a discriminator network then computes the error between the generated image and the original and style images, and this error is back-propagated to the generator. Through the continual confrontation of the two networks, high-quality style-transfer images are produced. In quantifying the style loss and content loss, we introduce a Markov random field, whose constraint on spatial layout reduces distortion in the generated image and improves its quality. Experiments show that the network can generate high-quality style-transfer images in a short time.
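The chapter itself gives no code; the following is a minimal sketch of the MRF-based style loss idea described above, under the assumption that generated and style images have already been passed through a fixed convolutional feature extractor (e.g. a VGG layer) and that 3x3 feature patches are matched by normalized cross-correlation. The adversarial generator/discriminator loop and the content loss are omitted.

```python
# Sketch of an MRF-style patch-matching loss (illustrative, not the authors' exact formulation).
import torch
import torch.nn.functional as F

def extract_patches(feat, size=3, stride=1):
    """Extract size x size patches from a feature map of shape (1, C, H, W)."""
    patches = F.unfold(feat, kernel_size=size, stride=stride)  # (1, C*size*size, N)
    return patches.squeeze(0).t()                              # (N, C*size*size)

def mrf_style_loss(gen_feat, style_feat, size=3, stride=1):
    """Match each generated patch to its nearest style patch (by normalized
    cross-correlation) and penalize the squared distance to that match."""
    gen_p = extract_patches(gen_feat, size, stride)            # (Ng, D)
    sty_p = extract_patches(style_feat, size, stride)          # (Ns, D)
    sim = F.normalize(gen_p, dim=1) @ F.normalize(sty_p, dim=1).t()  # (Ng, Ns)
    nearest = sim.argmax(dim=1)                                # best style patch per generated patch
    return F.mse_loss(gen_p, sty_p[nearest].detach())

# Usage with dummy tensors standing in for convolutional feature maps:
if __name__ == "__main__":
    gen_feat = torch.rand(1, 256, 32, 32, requires_grad=True)
    style_feat = torch.rand(1, 256, 32, 32)
    loss = mrf_style_loss(gen_feat, style_feat)
    loss.backward()
    print(loss.item())
```

Because each generated patch must stay close to some real style patch, the loss constrains the local spatial layout of the output, which is the mechanism the abstract credits for reducing distortion.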
Acknowledgments
We thank the Fujian Science and Technology Agency for funding this research through the Soft Science Project (project code 2017R01010181). We also thank the Fujian Young and Middle-aged Teacher Education Research Project (project code JAT160472) for its support.
Copyright information
© 2019 Springer-Verlag GmbH Germany, part of Springer Nature
About this chapter
Cite this chapter
Qiu, G., Song, J., Chen, L. (2019). A Style Image Confrontation Generation Network Based on Markov Random Field. In: Pan, Z., Cheok, A., Müller, W., Zhang, M., El Rhalibi, A., Kifayat, K. (eds) Transactions on Edutainment XV. Lecture Notes in Computer Science, vol. 11345. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-59351-6_2
DOI: https://doi.org/10.1007/978-3-662-59351-6_2
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-662-59350-9
Online ISBN: 978-3-662-59351-6
eBook Packages: Computer Science, Computer Science (R0)