SCCGAN: Style and Characters Inpainting Based on CGAN


Abstract

With the development of deep learning, many deep learning methods have been applied to font recognition and generation, yet few studies address the problem of font inpainting. This paper is dedicated to repairing damaged fonts in a way that preserves their style. We propose a font inpainting method based on CGAN (Conditional Generative Adversarial Nets) and evaluate the restored fonts using the content accuracy and style similarity of the repaired images as metrics. Under these metrics, fonts repaired by the proposed CGAN-based method reproduce the correct character content while retaining the original style.
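
To make the approach concrete, the sketch below shows a minimal pix2pix-style conditional GAN for glyph inpainting in PyTorch. This is an illustrative assumption, not the authors' implementation: the network sizes, the L1 weight, and all hyperparameters are hypothetical, and only the general shape of the method is shown, namely a generator that maps a damaged glyph to a repaired one and a discriminator conditioned on the damaged input.

```python
# Minimal conditional-GAN sketch for glyph inpainting (illustrative, not the paper's code).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder-decoder that maps a damaged 1-channel glyph image to a repaired one."""
    def __init__(self, ch=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, ch, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.BatchNorm2d(ch * 2), nn.LeakyReLU(0.2),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.ConvTranspose2d(ch, 1, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, damaged):
        return self.dec(self.enc(damaged))

class Discriminator(nn.Module):
    """PatchGAN critic conditioned on the damaged input (channel-concatenated pair)."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, ch, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.BatchNorm2d(ch * 2), nn.LeakyReLU(0.2),
            nn.Conv2d(ch * 2, 1, 4, 1, 1),  # per-patch real/fake logits
        )

    def forward(self, damaged, glyph):
        return self.net(torch.cat([damaged, glyph], dim=1))

# One training step: adversarial loss plus an L1 content term (weight 100 is a common
# pix2pix default, assumed here).
G, D = Generator(), Discriminator()
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

damaged = torch.randn(8, 1, 64, 64)  # corrupted glyphs (stand-in data)
clean = torch.randn(8, 1, 64, 64)    # ground-truth glyphs (stand-in data)

fake = G(damaged)

# Discriminator: real pairs -> 1, fake pairs -> 0.
d_real, d_fake = D(damaged, clean), D(damaged, fake.detach())
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator: fool the critic while staying close to the ground-truth glyph.
d_fake = D(damaged, fake)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, clean)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

The L1 term pulls the generator toward the correct character content, while the conditional PatchGAN critic rewards locally plausible, style-consistent strokes.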
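
The abstract names content accuracy and style similarity as evaluation indices but does not specify how they are computed. As one hedged illustration, style similarity is often instantiated in the style-transfer literature as a distance between Gram matrices of deep features; the helper below is an assumption of that kind, not the paper's metric, and `feat_repaired`/`feat_reference` are hypothetical feature maps (e.g. from a pretrained CNN).

```python
import torch

def gram(feat: torch.Tensor) -> torch.Tensor:
    """Channel-correlation (Gram) matrix of a feature map of shape (B, C, H, W)."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_distance(feat_repaired: torch.Tensor, feat_reference: torch.Tensor) -> torch.Tensor:
    """Lower is better: Frobenius distance between Gram matrices (an assumed metric)."""
    return torch.linalg.matrix_norm(gram(feat_repaired) - gram(feat_reference)).mean()
```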



Acknowledgements

This research was funded by the Natural Science Foundation of Beijing Municipality (grant number 4202016) and the National Natural Science Foundation of China (grant numbers 62076012, 61877002, and 61972010).

Author information

Corresponding author

Correspondence to Ruijun Liu.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Liu, R., Wang, X., Lu, H. et al. SCCGAN: Style and Characters Inpainting Based on CGAN. Mobile Netw Appl 26, 3–12 (2021). https://doi.org/10.1007/s11036-020-01717-x
