
Image-based pencil drawing synthesized using convolutional neural network feature maps

  • Original Paper
  • Published:
Machine Vision and Applications

Abstract

Most conventional pencil-drawing synthesis methods are based on geometry and stroke modeling, or rely only on classic edge detection to extract edge features from the image. In this paper, we propose a new method for producing a pencil drawing from a natural image. The synthesized result not only yields a pencil sketch but also preserves the color tone of the natural image, and the drawing style is flexible. The sketch and the style are learned from the edges of the original natural image and from a single exemplar pencil drawing by an artist; this is accomplished using the convolutional neural network feature maps of the natural image and of the exemplar pencil-style image. Limited-memory BFGS for large-scale bound-constrained optimization (L-BFGS) is applied to synthesize a new pencil sketch whose style is similar to that of the exemplar. We evaluate the proposed method by applying it to different kinds of images and textures. Experimental results demonstrate that our method surpasses a conventional method in clarity and color tone, and that it is also flexible in drawing style.
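The procedure summarized above follows the general pattern of CNN-based style transfer: feature maps of the natural image constrain the content (sketch), Gram statistics of the exemplar pencil drawing constrain the style, and L-BFGS iteratively updates the synthesized image. The following PyTorch sketch illustrates that kind of optimization loop under assumptions of our own; the VGG-19 backbone from torchvision, the layer indices, the loss weight, and the preprocessing are illustrative choices, not the authors' exact configuration.

```python
# Minimal sketch of a feature-map-based pencil synthesis loop, assuming a
# VGG-19 backbone and preprocessed (1, 3, H, W) float tensors as inputs.
# Layer choices and weights are illustrative, not the paper's settings.
import torch
import torch.nn.functional as F
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYERS = {21}               # assumed: conv4_2 keeps the sketch/edge structure
STYLE_LAYERS = {0, 5, 10, 19, 28}   # assumed: conv1_1 ... conv5_1 carry the pencil texture

def extract(img):
    """Run the image through VGG-19 and keep the selected feature maps."""
    feats, x = {}, img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS or i in STYLE_LAYERS:
            feats[i] = x
    return feats

def gram(f):
    """Gram matrix of a feature map: channel correlations that summarize style."""
    _, c, h, w = f.shape
    f = f.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

def synthesize(natural_img, pencil_exemplar, steps=300, style_weight=1e6):
    """Match content features of the natural image and style statistics of the exemplar."""
    target_content = {k: v.detach() for k, v in extract(natural_img).items()}
    target_style = {k: gram(v).detach() for k, v in extract(pencil_exemplar).items()}

    result = natural_img.clone().requires_grad_(True)   # initialize from the natural image
    optimizer = torch.optim.LBFGS([result], max_iter=steps)

    def closure():
        optimizer.zero_grad()
        feats = extract(result)
        content_loss = sum(F.mse_loss(feats[k], target_content[k]) for k in CONTENT_LAYERS)
        style_loss = sum(F.mse_loss(gram(feats[k]), target_style[k]) for k in STYLE_LAYERS)
        loss = content_loss + style_weight * style_loss
        loss.backward()
        return loss

    optimizer.step(closure)
    return result.detach()
```

One plausible way to preserve the color tone of the natural image, as the abstract describes, would be to recombine the synthesized result's luminance with the chrominance of the original image; that post-processing step is omitted from the sketch above.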



Acknowledgements

We thank the anonymous reviewers and the editor for their valuable comments. This work was supported by the National Natural Science Foundation of China (Nos. 61772387 and 61372068), the Research Fund for the Doctoral Program of Higher Education of China (No. 20130203110005), the Fundamental Research Funds for the Central Universities (No. K5051301033), the 111 Project (No. B08038), and the ISN State Key Laboratory.

Author information


Corresponding author

Correspondence to Bin Song.


About this article


Cite this article

Cai, X., Song, B. Image-based pencil drawing synthesized using convolutional neural network feature maps. Machine Vision and Applications 29, 503–512 (2018). https://doi.org/10.1007/s00138-018-0906-2


  • Received:

  • Revised:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s00138-018-0906-2

Keywords
