Research article · DOI: 10.1145/3445815.3445855

Font Generation Method based on U-net

Published: 17 March 2021

Abstract

The main task of font design is to create a font suited to its actual application scenario, a task with extremely wide commercial value. Traditional font design requires trained professionals, and therefore entails long design times, low efficiency, and high labor costs. Font design is essentially an image-synthesis problem. U-net is a deep learning architecture that has been widely used for image synthesis, but images synthesized by a plain U-net suffer from low quality and poor visual effects. To address these shortcomings, this paper proposes an improved U-net for font design, called the Swish-gated residual dilated U-net (SGRDU). In SGRDU, the proposed swish layer and swish-gated residual block effectively control the information transmitted by each horizontal and vertical layer of the U-net and accelerate network convergence, while dilated convolution enlarges the network's receptive field. Experimental results show that, compared with other residual U-nets, fonts synthesized by SGRDU have better visual quality.
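The abstract names three ingredients: the swish activation, a swish-gated residual block, and dilated convolution. The paper's exact formulation is not given here, so the following is only an illustrative NumPy sketch of how these pieces typically combine — a swish of the input gating dilated-convolution features, with a residual skip connection. The 1-D convolution, kernel, and gating arrangement are assumptions for illustration, not the authors' method.

```python
import numpy as np

def swish(x):
    # Swish activation: x * sigmoid(x), written as x / (1 + exp(-x))
    return x / (1.0 + np.exp(-x))

def dilated_conv1d(x, kernel, dilation=2):
    # Naive 1-D dilated convolution with "same" padding: kernel taps are
    # spaced `dilation` samples apart, enlarging the receptive field
    # without adding parameters.
    k = len(kernel)
    pad = (k - 1) * dilation // 2
    xp = np.pad(x, pad)
    out = np.zeros(len(x))
    for i in range(len(x)):
        for j in range(k):
            out[i] += kernel[j] * xp[i + j * dilation]
    return out

def swish_gated_residual(x, kernel, dilation=2):
    # Illustrative swish-gated residual block: the swish of the input
    # gates the convolved features, and a skip connection adds x back.
    gate = swish(x)
    feat = dilated_conv1d(x, kernel, dilation)
    return gate * feat + x

x = np.linspace(-2.0, 2.0, 8)
y = swish_gated_residual(x, np.array([0.25, 0.5, 0.25]))
```

Because the gate saturates to 0 for strongly negative inputs and to 1 for strongly positive ones, it acts as a learned, smooth pass-through control — the property the abstract credits with regulating information flow across U-net layers.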


Published In

CSAI '20: Proceedings of the 2020 4th International Conference on Computer Science and Artificial Intelligence
December 2020
294 pages
ISBN: 9781450388436
DOI: 10.1145/3445815

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Font design
  2. Image synthesis
  3. Swish-gated residual blocks
  4. U-net

Qualifiers

  • Research-article
  • Research
  • Refereed limited


