
Two-stage sketch colorization

Published: 04 December 2018

Abstract

Sketch or line art colorization is a research field with significant market demand. Unlike photo colorization, which relies heavily on texture information, sketch colorization is more challenging because sketches may contain no texture at all; worse still, color, texture, and gradient must all be generated from abstract sketch lines. In this paper, we propose a semi-automatic, learning-based framework that colorizes sketches with proper color, texture, and gradient. Our framework consists of two stages. In the first, drafting stage, our model estimates color regions and splashes a rich variety of colors over the sketch to obtain a color draft. In the second, refinement stage, it detects unnatural colors and artifacts and tries to fix and refine the result. Compared with existing approaches, this two-stage design divides the complex colorization task into two simpler subtasks with clearer goals, which eases learning and raises the quality of the colorization. Our model resolves artifacts such as watercolor blurring, color distortion, and dull textures.
We build interactive software based on our model for evaluation, with which users can iteratively edit and refine the colorization. We evaluate our learning model and the interactive system through an extensive user study. The statistics show that our method outperforms state-of-the-art techniques and industrial applications in several respects, including visual quality, degree of user control, and user experience.
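The two-stage decomposition described in the abstract (draft, then refine) can be illustrated with a toy, non-learned sketch in Python. The functions `drafting_stage`, `refinement_stage`, and `colorize` below are hypothetical stand-ins invented for illustration: the paper's actual stages are trained generators, whereas here the draft is simulated by a random color splash and the refinement by a simple box filter.

```python
import numpy as np

def drafting_stage(sketch, rng):
    """Toy stand-in for the first-stage model: splash a rich variety
    of colors over the sketch to obtain a rough color draft.
    sketch: (H, W) array, 1.0 = blank paper, 0.0 = line pixel."""
    h, w = sketch.shape
    draft = rng.uniform(0.0, 1.0, size=(h, w, 3))  # random color splash
    draft[sketch < 0.5] = 0.0                      # keep line pixels dark
    return draft

def refinement_stage(draft):
    """Toy stand-in for the second-stage model: suppress noisy,
    unnatural colors with a 3x3 box filter (the real model is learned)."""
    h, w, _ = draft.shape
    padded = np.pad(draft, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(draft)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0

def colorize(sketch, seed=0):
    """Two-stage pipeline: draft first, then refine the draft."""
    rng = np.random.default_rng(seed)
    return refinement_stage(drafting_stage(sketch, rng))

sketch = np.ones((8, 8))   # blank canvas ...
sketch[4, :] = 0.0         # ... with one horizontal stroke
result = colorize(sketch)
print(result.shape)        # (8, 8, 3)
```

The point of the decomposition is that each stage has a clearer goal: the first only needs to produce plausible color regions, and the second only needs to clean up artifacts in an image that is already roughly colored.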

Supplementary Material

ZIP File (a261-zhang.zip)
Supplemental files.
MP4 File (a261-zhang.mp4)



      Published In

      ACM Transactions on Graphics, Volume 37, Issue 6
      December 2018
      1401 pages
      ISSN: 0730-0301
      EISSN: 1557-7368
      DOI: 10.1145/3272127
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 04 December 2018
      Published in TOG Volume 37, Issue 6


      Author Tags

      1. colorization
      2. line arts
      3. sketch

      Qualifiers

      • Research-article

      Funding Sources

      • RGC General Research Fund

      Cited By

      • (2025) Intelligent Generation of Graphical Game Assets: A Conceptual Framework and Systematic Review of the State of the Art. ACM Computing Surveys 57(5), 1-38. DOI: 10.1145/3708499. Online publication date: 9-Jan-2025.
      • (2025) Diverse Motion In-Betweening From Sparse Keyframes With Dual Posture Stitching. IEEE Transactions on Visualization and Computer Graphics 31(2), 1402-1413. DOI: 10.1109/TVCG.2024.3363457. Online publication date: 1-Feb-2025.
      • (2025) Advancing Sequential Manga Colorization for AR Through Data Synthesis. IEEE Access 13, 7526-7537. DOI: 10.1109/ACCESS.2025.3526882. Online publication date: 2025.
      • (2025) Motion-guided semantic alignment for line art animation colorization. Pattern Recognition 158, 111055. DOI: 10.1016/j.patcog.2024.111055. Online publication date: Feb-2025.
      • (2024) StripeRust-Pocket: A Mobile-Based Deep Learning Application for Efficient Disease Severity Assessment of Wheat Stripe Rust. Plant Phenomics 6, 0201. DOI: 10.34133/plantphenomics.0201. Online publication date: 2024.
      • (2024) [Paper] LineGAN: A Line Drawing Rendering Model with a Focus on Line Density Distribution. ITE Transactions on Media Technology and Applications 12(1), 153-166. DOI: 10.3169/mta.12.153. Online publication date: 2024.
      • (2024) Research on Color Intelligent Matching and Optimization of Traditional Art Colors in Modern Visual Communication Designs. Applied Mathematics and Nonlinear Sciences 9(1). DOI: 10.2478/amns-2024-3296. Online publication date: 18-Nov-2024.
      • (2024) VRCreatIn: Taking In-Situ Pen and Tablet Interaction Beyond Ideation to 3D Modeling Lighting and Texturing. Proceedings of the International Conference on Mobile and Ubiquitous Multimedia, 24-35. DOI: 10.1145/3701571.3701580. Online publication date: 1-Dec-2024.
      • (2024) LVCD: Reference-based Lineart Video Colorization with Diffusion Models. ACM Transactions on Graphics 43(6), 1-11. DOI: 10.1145/3687910. Online publication date: 19-Dec-2024.
      • (2024) Real-time Diverse Motion In-betweening with Space-time Control. Proceedings of the 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games, 1-8. DOI: 10.1145/3677388.3696327. Online publication date: 21-Nov-2024.
