
Reference Based Sketch Extraction via Attention Mechanism

Published: 30 November 2022

Abstract

We propose a model that extracts a sketch from a colorized image such that the extracted sketch has a line style similar to a given reference sketch while preserving the visual content of the colorized image. Authentic sketches drawn by artists exhibit diverse line styles that add visual interest and convey feeling. Existing sketch-extraction methods, however, generate sketches in only one style. Moreover, existing style transfer models fail to transfer sketch styles because they are designed mostly to transfer the textures of a source style image rather than the sparse line style of a reference sketch. Because the data volumes needed for standard training of translation systems are unavailable, at the core of our GAN-based solution we place a self-reference sketch style generator that produces various reference sketches with a similar style but different spatial layouts. We use independent attention modules to detect the edges of the colorized image and the reference sketch, as well as the visual correspondences between them. We apply several loss terms to imitate the reference style and to enforce sparsity in the extracted sketch. Our sketch-extraction method closely imitates a reference sketch style drawn by an artist and outperforms all baseline methods. Using our method, we produce a synthetic dataset covering various sketch styles and use it to improve the performance of auto-colorization models, which are in high demand in comics. The validity of our approach is confirmed via qualitative and quantitative evaluations.
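The abstract mentions attention modules that compute visual correspondences between colorized-image features and reference-sketch features, plus loss terms that enforce sparsity in the extracted sketch. The minimal sketch below is our illustrative assumption, not the authors' released code: it shows scaled dot-product cross-attention over flattened feature vectors and a simple L1-style sparsity penalty against a white background. The function names (`cross_attention`, `sparsity_loss`) and the particular loss form are hypothetical.

```python
import numpy as np

def cross_attention(img_feats, ref_feats):
    """Scaled dot-product cross-attention: queries come from the colorized
    image, keys/values from the reference sketch.
    img_feats: (Nq, d) array; ref_feats: (Nk, d) array."""
    d = img_feats.shape[-1]
    scores = img_feats @ ref_feats.T / np.sqrt(d)     # (Nq, Nk) similarities
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)          # each row sums to 1
    return attn @ ref_feats, attn                     # attended features, weights

def sparsity_loss(sketch, paper_white=1.0):
    """One plausible sparsity term: mean absolute deviation from the white
    background, which penalizes dense, ink-heavy outputs."""
    return float(np.mean(np.abs(paper_white - sketch)))
```

The attention weights form a soft correspondence map: row i tells the model which reference-sketch locations are most relevant when styling image location i.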

Supplemental Material

MP4 File: presentation video.


Cited By

  • (2024) Stylized Face Sketch Extraction via Generative Prior with Limited Data. Computer Graphics Forum 43:2. DOI: 10.1111/cgf.15045. Online publication date: 30-Apr-2024.
  • (2024) Text-guided image-to-sketch diffusion models. Knowledge-Based Systems 304 (112441). DOI: 10.1016/j.knosys.2024.112441. Online publication date: Nov-2024.
  • (2024) AGD-GAN: Adaptive Gradient-Guided and Depth-supervised generative adversarial networks for ancient mural sketch extraction. Expert Systems with Applications 255 (124639). DOI: 10.1016/j.eswa.2024.124639. Online publication date: Dec-2024.
  • (2023) Semi-supervised reference-based sketch extraction using a contrastive learning framework. ACM Transactions on Graphics 42:4 (1-12). DOI: 10.1145/3592392. Online publication date: 26-Jul-2023.
  • (2023) Hand-drawn anime line drawing colorization of faces with texture details. Computer Animation and Virtual Worlds 35:1. DOI: 10.1002/cav.2198. Online publication date: 27-Jul-2023.


    Published In

    ACM Transactions on Graphics, Volume 41, Issue 6
    December 2022
    1428 pages
    ISSN: 0730-0301
    EISSN: 1557-7368
    DOI: 10.1145/3550454

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. auto-colorization
    2. image-to-image translation
    3. sketch-extraction

    Qualifiers

    • Research-article

    Funding Sources

    • Korea government (MSIT)


    Article Metrics

    • Downloads (last 12 months): 174
    • Downloads (last 6 weeks): 21
    Reflects downloads up to 20 Feb 2025.

