Abstract
With the development of deep learning, many deep learning methods have been applied to font recognition and generation; however, few studies focus on the font inpainting problem. This paper is dedicated to repairing damaged fonts while preserving their original style. We propose a font inpainting method based on CGAN (Conditional Generative Adversarial Nets). The content accuracy and style similarity of the repaired images are used as evaluation metrics to assess how faithfully the restored fonts reproduce the target style. Experiments show that the content of fonts repaired by the proposed CGAN-based method closely matches the correct content.
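As a minimal sketch of the style-similarity idea mentioned in the abstract (the paper's exact formulation may differ), one common proxy is the Gram-matrix correlation used in neural style transfer: second-order statistics of a feature map capture style, and the cosine similarity between the Gram matrices of the original and repaired glyphs gives a score near 1.0 when styles agree. The toy feature maps below are hypothetical stand-ins for real network activations.

```python
import math

def gram_matrix(features):
    """Gram matrix of a feature map given as a list of C channels,
    each a flat list of H*W activations: G[i][j] = <f_i, f_j>."""
    c = len(features)
    return [[sum(a * b for a, b in zip(features[i], features[j]))
             for j in range(c)]
            for i in range(c)]

def style_similarity(feat_a, feat_b):
    """Cosine similarity between flattened Gram matrices;
    1.0 means identical second-order (style) statistics."""
    ga = [v for row in gram_matrix(feat_a) for v in row]
    gb = [v for row in gram_matrix(feat_b) for v in row]
    dot = sum(a * b for a, b in zip(ga, gb))
    na = math.sqrt(sum(a * a for a in ga))
    nb = math.sqrt(sum(b * b for b in gb))
    return dot / (na * nb)

# Two channels with four spatial positions each (a toy "feature map").
original = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
repaired = [[0.9, 0.1, 1.1, 0.0], [0.1, 0.9, 0.0, 1.0]]
print(round(style_similarity(original, repaired), 3))
```

In practice the feature maps would come from a pretrained network rather than raw pixels; this sketch only illustrates how a single style-similarity score can be reduced to a comparison of Gram matrices.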
Acknowledgements
This research was funded by the Natural Science Foundation of Beijing Municipality (grant number 4202016) and the National Natural Science Foundation of China (grant numbers 62076012, 61877002, and 61972010).
Cite this article
Liu, R., Wang, X., Lu, H. et al. SCCGAN: Style and Characters Inpainting Based on CGAN. Mobile Netw Appl 26, 3–12 (2021). https://doi.org/10.1007/s11036-020-01717-x