
Font Style Transfer Using Neural Style Transfer and Unsupervised Cross-domain Transfer

  • Conference paper

Computer Vision – ACCV 2018 Workshops (ACCV 2018)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11367)
Abstract

In this paper, we study font generation and conversion. Previous methods treated characters as compositions of strokes. In contrast, we use deep learning to extract stroke-equivalent features from font images and from texture or pattern images, and transform the design of font images accordingly. We expect that original fonts, such as handwritten-style characters, can be generated automatically by the proposed approach. In the experiments, we created unique datasets, such as a ketchup character image dataset, and improved both the image generation quality and the readability of the characters by combining neural style transfer with unsupervised cross-domain learning.
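The neural style transfer component mentioned in the abstract is typically built on matching Gram-matrix statistics of convolutional feature maps, in the style of Gatys et al.; the paper's exact loss formulation is not reproduced on this page. A minimal NumPy sketch of such a style loss, with toy arrays standing in for CNN activations (all function names and the random feature maps are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map with shape (channels, height, width).

    Entry (i, j) is the inner product of the flattened channels i and j,
    capturing texture statistics independent of spatial layout.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(generated_feats, style_feats):
    """Mean squared difference between Gram matrices, summed over layers."""
    return sum(
        np.mean((gram_matrix(g) - gram_matrix(s)) ** 2)
        for g, s in zip(generated_feats, style_feats)
    )

# Toy feature maps standing in for activations of a font image (generated)
# and a texture image (style target) at two CNN layers.
rng = np.random.default_rng(0)
gen = [rng.standard_normal((4, 8, 8)) for _ in range(2)]
sty = [rng.standard_normal((4, 8, 8)) for _ in range(2)]

assert style_loss(gen, gen) == 0.0  # identical features give zero loss
assert style_loss(gen, sty) > 0.0   # differing textures give positive loss
```

In a full style-transfer pipeline, this loss would be computed on pretrained-CNN activations and minimized with respect to the generated image, alongside a content loss; the unsupervised cross-domain component adds a cycle-consistency-style objective between the font and texture domains.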


Notes

  1. https://github.com/kaonashi-tyc/Rewrite.

  2. https://kaonashi-tyc.github.io/2017/04/06/zi2zi.html.


Acknowledgments

We would like to express our great thanks to Prof. Seiichi Uchida, Kyushu University, for his insightful and helpful comments. This work was supported by JSPS KAKENHI Grant Numbers 15H05915, 17H01745, 17H05972, 17H06026 and 17H06100.

Author information

Correspondence to Keiji Yanai.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Narusawa, A., Shimoda, W., Yanai, K. (2019). Font Style Transfer Using Neural Style Transfer and Unsupervised Cross-domain Transfer. In: Carneiro, G., You, S. (eds) Computer Vision – ACCV 2018 Workshops. ACCV 2018. Lecture Notes in Computer Science(), vol 11367. Springer, Cham. https://doi.org/10.1007/978-3-030-21074-8_9

  • DOI: https://doi.org/10.1007/978-3-030-21074-8_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-21073-1

  • Online ISBN: 978-3-030-21074-8

  • eBook Packages: Computer Science, Computer Science (R0)
