Abstract
In related contemporary work, image-to-image translation methods have been applied to complete incomplete Chinese fonts. These methods rely on ground-truth targets and are trained on paired calligraphy images from different fonts. We propose a new method, the mode averaging generative adversarial network (MA-GAN), to generate variations of a single Chinese character. Because the data set for calligraphy font creation from samples of a single given character is usually small, conventional generative models are not suitable. We therefore design a special pyramid generative adversarial network whose loss function combines a weighted mean loss, which averages the images, with an adversarial loss, which corrects the topology. The pyramid structure allows the generator to control the rendering of the topology at low-resolution layers while supplementing and varying details at high-resolution layers, so that the diversity requirement does not destroy the image topology. We compare MA-GAN with other generative models and demonstrate its good performance on the task of creating calligraphy fonts from small data sets.
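To make the loss design in the abstract more concrete, the sketch below gives one possible PyTorch reading of a generator objective that combines a weighted mean (averaging) term with an adversarial term. It is not the authors' implementation: the function names, the L1 form of the averaging term, the non-saturating adversarial term, and the weight lambda_avg are all assumptions; in the paper such losses are applied within a pyramid of resolutions.

```python
# Hypothetical sketch of the combined loss described in the abstract.
# The averaging term pulls each generated character toward a weighted
# average of the few reference samples, while the adversarial term keeps
# the stroke topology plausible. All names and weights are illustrative.
import torch
import torch.nn.functional as F


def weighted_mean_loss(fake, references, weights):
    """L1 distance between generated images and a weighted average of references.

    fake:       (B, 1, H, W) generated character images
    references: (K, 1, H, W) the small set of samples of one character
    weights:    (K,) mixing weights that sum to 1
    """
    target = (weights.view(-1, 1, 1, 1) * references).sum(dim=0, keepdim=True)
    return F.l1_loss(fake, target.expand_as(fake))


def generator_loss(discriminator, fake, references, weights, lambda_avg=10.0):
    # Adversarial term: push the discriminator to label generated images as real,
    # which corrects implausible stroke topology.
    logits = discriminator(fake)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    # Averaging term: supplies the "mode averaging" behaviour on the small data set.
    avg = weighted_mean_loss(fake, references, weights)
    return adv + lambda_avg * avg
```

In a pyramid setup, a loss of this shape would typically be evaluated at each resolution level, with the low-resolution levels dominating the topology and the high-resolution levels contributing detail variation.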
Supported by a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.
Cite this paper
Zhao, J., Zhang, Y., Ma, X., Yang, D., Shen, Y., Jiang, H. (2021). MA-GAN: A Method Based on Generative Adversarial Network for Calligraphy Morphing. In: Mantoro, T., Lee, M., Ayu, M.A., Wong, K.W., Hidayanto, A.N. (eds.) Neural Information Processing. ICONIP 2021. Lecture Notes in Computer Science, vol. 13108. Springer, Cham. https://doi.org/10.1007/978-3-030-92185-9_22