Generating Synthetic Styled Chu Nom Characters

  • Conference paper
  • In: Frontiers in Handwriting Recognition (ICFHR 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13639)

Abstract

Images of historical Vietnamese steles allow historians to discover invaluable information about the country's past, especially the lives of people in rural villages. Due to the sheer number of available stone engravings and their diversity, manual examination is difficult and time-consuming. Automatic document analysis methods based on machine learning could therefore greatly facilitate this laborious work. However, creating ground truth for machine learning is itself complex and time-consuming for human experts, which is why synthetic training samples support learning while reducing human effort. In particular, they can be used to train deep neural networks for character detection and recognition. In this paper, we present a method for creating synthetic engravings and use it to build a new database of 26,901 synthetic Chu Nom characters in 21 different styles. Because it relies on a machine learning model for unpaired image-to-image translation, our approach is annotation-free, i.e. human experts do not need to label character images. A user study demonstrates that the synthetic engravings look realistic to the human eye.
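
The abstract describes the method only at a high level. As a rough illustration of how an annotation-free, unpaired image-to-image setup of this kind can be wired together, the sketch below renders a Chu Nom glyph from a font (source domain) and defines a toy adversarial objective against unlabeled stele crops (target domain). The font file name, the network architectures, and the hinge loss are illustrative assumptions and do not reproduce the authors' model.

```python
# Minimal sketch of unpaired font-to-engraving translation.
# Assumptions (not from the paper): a local Nom-capable font file
# "NomNaTong-Regular.ttf", toy network sizes, and a hinge GAN loss.
import numpy as np
import torch
import torch.nn as nn
from PIL import Image, ImageDraw, ImageFont


def render_glyph(char: str, font_path: str = "NomNaTong-Regular.ttf", size: int = 64) -> torch.Tensor:
    """Render one Chu Nom character as a 1x1xHxW tensor scaled to [-1, 1]."""
    font = ImageFont.truetype(font_path, int(size * 0.8))  # assumes the .ttf exists locally
    img = Image.new("L", (size, size), color=255)
    ImageDraw.Draw(img).text((size // 10, size // 10), char, font=font, fill=0)
    arr = np.asarray(img, dtype=np.float32) / 127.5 - 1.0
    return torch.from_numpy(arr)[None, None]


# Toy generator G: font-rendered glyph -> engraving-styled glyph.
G = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
)

# Toy discriminator D: does an image look like a real (unlabeled) stele crop?
D = nn.Sequential(
    nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1),
)


def adversarial_losses(font_batch: torch.Tensor, stele_batch: torch.Tensor):
    """One unpaired step: the two batches need not show the same characters."""
    fake = G(font_batch)
    loss_d = (torch.relu(1.0 - D(stele_batch)).mean()
              + torch.relu(1.0 + D(fake.detach())).mean())  # hinge loss for D
    loss_g = -D(fake).mean()  # a full CycleGAN/CUT-style setup adds a cycle or
                              # contrastive term so the character identity survives
    return loss_d, loss_g
```

In a complete unpaired-translation pipeline of this kind, the content-preservation term keeps the generated engraving recognizable as the same character, which is what allows the synthetic samples to serve as labeled training data without any manual annotation.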

Notes

  1. https://vietnamica.hypotheses.org/
  2. https://github.com/asciusb/21SyntheticStylesNom-Database/
  3. http://www.nomfoundation.org

Author information

Correspondence to Anna Scius-Bertrand.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Diesbach, J., Fischer, A., Bui, M., Scius-Bertrand, A. (2022). Generating Synthetic Styled Chu Nom Characters. In: Porwal, U., Fornés, A., Shafait, F. (eds) Frontiers in Handwriting Recognition. ICFHR 2022. Lecture Notes in Computer Science, vol 13639. Springer, Cham. https://doi.org/10.1007/978-3-031-21648-0_33

  • DOI: https://doi.org/10.1007/978-3-031-21648-0_33

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21647-3

  • Online ISBN: 978-3-031-21648-0

  • eBook Packages: Computer Science, Computer Science (R0)
