
OmniEyes: Analysis and Synthesis of Artistically Painted Eyes

  • Conference paper
  • First Online:
MultiMedia Modeling (MMM 2020)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11961)


Abstract

Faces in artistic paintings most often contain the same elements (eyes, nose, mouth, ...) as faces in the real world, but they are not photo-realistic transfers of physical visual content. The creative nuances artists introduce into their work act as interference when facial detection models are applied in the artistic domain. In this work we introduce models that can accurately detect, classify, and conditionally generate artistically painted eyes in portrait paintings. In addition, we introduce the OmniEyes Dataset, which captures the essence of painted eyes with annotated patches from 250K artistic paintings and their metadata. We evaluate our approach on inpainting, out-of-context eye generation, and classification on portrait paintings from the OmniArt dataset. We conduct a user study to further examine the quality of our generated samples, assess their aesthetic aspects, and provide quantitative and qualitative results for our model's performance.
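The conditional generation described in the abstract maps a noise vector plus a condition (e.g. a style or eye-type label) to an image patch. The following is a minimal illustrative sketch of that idea only, not the authors' architecture: the tiny MLP generator, the patch size, the number of style classes, and the random (untrained) weights are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STYLES = 4   # hypothetical number of eye-style classes
NOISE_DIM = 16
PATCH = 8      # tiny grayscale patch for illustration only

# Randomly initialised weights stand in for a trained generator.
W1 = rng.normal(0.0, 0.1, (NOISE_DIM + N_STYLES, 64))
W2 = rng.normal(0.0, 0.1, (64, PATCH * PATCH))

def generate_eye_patch(style_id: int) -> np.ndarray:
    """Map (noise, one-hot style condition) to a PATCH x PATCH patch."""
    z = rng.normal(size=NOISE_DIM)          # latent noise
    cond = np.zeros(N_STYLES)
    cond[style_id] = 1.0                    # one-hot condition vector
    h = np.tanh(np.concatenate([z, cond]) @ W1)
    out = np.tanh(h @ W2)                   # pixel values in [-1, 1]
    return out.reshape(PATCH, PATCH)

patch = generate_eye_patch(style_id=2)
print(patch.shape)  # (8, 8)
```

In a trained conditional GAN the weights would be learned adversarially against a discriminator that also sees the condition vector, so the generator is steered toward patches matching the requested label.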



Author information

Correspondence to Gjorgji Strezoski.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Strezoski, G., Knoester, R., van Noord, N., Worring, M. (2020). OmniEyes: Analysis and Synthesis of Artistically Painted Eyes. In: Ro, Y., et al. (eds.) MultiMedia Modeling. MMM 2020. Lecture Notes in Computer Science, vol. 11961. Springer, Cham. https://doi.org/10.1007/978-3-030-37731-1_51


  • DOI: https://doi.org/10.1007/978-3-030-37731-1_51

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-37730-4

  • Online ISBN: 978-3-030-37731-1

  • eBook Packages: Computer Science (R0)
