Abstract
Recently, the generation of Chinese characters has attracted many researchers. Many excellent works focus only on Chinese font transformation, i.e., transforming a Chinese character from one font style into another. However, no research has addressed generating Chinese characters of new categories, which is an interesting and important topic. This paper introduces a radical combination network, called RCN, to generate new Chinese character categories by integrating radicals according to a caption that describes the radicals and the spatial relationships between them. The proposed RCN first splits the caption into pieces. A self-recurrent network is employed as an encoder, integrating these caption pieces and pictures of radicals into a single vector. A vector representing the font/writing style is then concatenated with the encoder output. Finally, a decoder based on a deconvolution network uses this vector to synthesize the picture of a Chinese character. The key idea of the proposed approach is to treat a Chinese character as a composition of radicals rather than as a single character class, which lets the machine play the role of Cangjie, the inventor of Chinese characters in ancient legend. As an important resource, the generated characters can be reused in recognition tasks.
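The abstract describes a pipeline: a recursive encoder merges radical embeddings according to the spatial relationships in the caption, a style vector is concatenated to the result, and a deconvolution-based decoder produces the character image. The toy sketch below illustrates that data flow only; the embedding sizes, the relation names ("a" for left-right, "d" for top-bottom), the random weights, and the single linear stand-in for the deconvolution decoder are all illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB = 16    # assumed embedding size for radicals / merged subtrees
STYLE = 4   # assumed size of the font/writing-style vector
IMG = 8     # toy output "character picture" is IMG x IMG

# Toy radical embeddings; in the paper these come from pictures of radicals.
radical_emb = {name: rng.standard_normal(EMB) for name in ["口", "木", "日"]}

# One combination matrix per spatial relationship, applied to the
# concatenated child vectors (a stand-in for the self-recurrent encoder).
W = {rel: rng.standard_normal((EMB, 2 * EMB)) * 0.1 for rel in ["a", "d"]}

def encode(node):
    """Recursively merge a caption tree into one vector.
    `node` is either a radical name or (relation, left, right)."""
    if isinstance(node, str):
        return radical_emb[node]
    rel, left, right = node
    merged = np.concatenate([encode(left), encode(right)])
    return np.tanh(W[rel] @ merged)

# Decoder stand-in: a single linear map from the style-conditioned code
# to pixel logits (the paper uses a deconvolution network instead).
W_dec = rng.standard_normal((IMG * IMG, EMB + STYLE)) * 0.1

def generate(caption_tree, style):
    code = np.concatenate([encode(caption_tree), style])
    pixels = 1.0 / (1.0 + np.exp(-(W_dec @ code)))  # sigmoid intensities
    return pixels.reshape(IMG, IMG)

style = rng.standard_normal(STYLE)
# Caption tree: 口 left of 木, and that pair stacked above 日.
tree = ("d", ("a", "口", "木"), "日")
picture = generate(tree, style)
print(picture.shape)  # (8, 8)
```

Because the encoder is recursive over the caption tree, the same combination weights handle arbitrarily nested radical layouts, which is what allows unseen (zero-shot) character categories to be composed from known radicals.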
Acknowledgements
This work was supported in part by the MOE-Microsoft Key Laboratory of USTC, and Youtu Lab of Tencent.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Xue, M., Du, J., Zhang, J., Wang, Z.R., Wang, B., Ren, B. (2021). Radical Composition Network for Chinese Character Generation. In: Lladós, J., Lopresti, D., Uchida, S. (eds.) Document Analysis and Recognition – ICDAR 2021. Lecture Notes in Computer Science, vol. 12821. Springer, Cham. https://doi.org/10.1007/978-3-030-86549-8_17
DOI: https://doi.org/10.1007/978-3-030-86549-8_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-86548-1
Online ISBN: 978-3-030-86549-8