Abstract
Stippling is a fascinating art form that is widely used in the printing industry. In computer graphics, digital color stippling renders an input color image as colored points that follow a prescribed distribution (e.g., a blue-noise distribution). The task is challenging because each color channel must be evenly distributed with respect to every other channel. Deep learning approaches have shown great advantages in many image stylization applications, but they have not yet been applied to stippling. The main reason is that stippling imposes strict constraints: the points must be distributed both evenly and randomly. In this paper, we propose the first deep learning approach for stippling, which produces point distributions visually similar to stippling. We regard a stippling result as a 3D point cloud in which the third dimension encodes color. We then propose a deep network that transforms images into point distributions, consisting of a feature-extracting encoder that extracts features from the input image and a point-generating decoder that translates these features into stipple points. A spectrum loss is employed to encourage an even distribution. Experiments show that our method produces color stippling with a reasonable balance between result quality and computational cost.
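To make the pipeline described above concrete, the following is a minimal sketch, not the authors' implementation: it assumes a small CNN encoder, an MLP decoder that emits a fixed number N of points with channels (x, y, tone), and a periodogram-based low-frequency penalty standing in for the spectrum loss. All layer sizes, the point count, and the loss formulation are hypothetical illustrations of the ideas in the abstract.

```python
# Sketch of an image-to-point-set encoder-decoder with a spectrum-style penalty.
# Architecture details and the exact spectrum loss are assumptions, not the paper's.
import torch
import torch.nn as nn


class ImageToStipple(nn.Module):
    """Encoder-decoder that maps an image to an N x 3 point set (x, y, tone)."""

    def __init__(self, num_points: int = 1024):
        super().__init__()
        self.num_points = num_points
        # Feature-extracting encoder: strided convolutions down to a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Point-generating decoder: an MLP that outputs N points with 3 channels.
        self.decoder = nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, num_points * 3),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(image)                      # (B, 128)
        pts = self.decoder(feat)                        # (B, N*3)
        pts = torch.sigmoid(pts).view(-1, self.num_points, 3)
        return pts                                      # coordinates and tone in [0, 1]


def spectrum_loss(points: torch.Tensor, num_freqs: int = 16) -> torch.Tensor:
    """Penalize low-frequency energy of the point positions' periodogram,
    a rough stand-in for encouraging an even, blue-noise-like distribution."""
    xy = points[..., :2]                                # (B, N, 2), positions only
    n = xy.shape[1]
    # Grid of low integer frequencies, excluding the DC term.
    fx, fy = torch.meshgrid(
        torch.arange(1, num_freqs + 1, dtype=xy.dtype, device=xy.device),
        torch.arange(1, num_freqs + 1, dtype=xy.dtype, device=xy.device),
        indexing="ij",
    )
    freqs = torch.stack([fx.flatten(), fy.flatten()], dim=-1)   # (F, 2)
    phase = 2.0 * torch.pi * xy @ freqs.t()             # (B, N, F)
    # Periodogram: squared magnitude of the Fourier sum over all points.
    power = (torch.cos(phase).sum(1) ** 2 + torch.sin(phase).sum(1) ** 2) / n
    return power.mean()


if __name__ == "__main__":
    model = ImageToStipple(num_points=1024)
    image = torch.rand(2, 3, 64, 64)                    # dummy batch of RGB images
    points = model(image)
    loss = spectrum_loss(points)
    print(points.shape, loss.item())
```

In such a setup the spectrum term would typically be combined with a reconstruction or appearance term (e.g., matching point density to local image tone) during training; the snippet above only illustrates how the encoder, decoder, and a spectral penalty could fit together.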