Transfer of content-aware vignetting effect from paintings to photographs

Multimedia Tools and Applications

Abstract

This paper discusses how the vignetting effect of paintings may be transferred to photographs, with attention to center-corner contrast. First, the lightness distributions of both paintings and photographs are analyzed. The results show that the painter’s vignette is more complex than that achieved by common digital post-processing methods: it involves both the 2D and 3D geometry of the scene. An algorithm is then developed to transfer the vignetting effect from an example painting to a photograph. The example painting is selected as one whose contextual geometry is similar to that of the photograph. The lightness weighting pattern extracted from the selected painting is adaptively blended with the input photograph to create the vignetting effect. To avoid over-brightened or over-darkened regions in the enhancement result, the extracted lightness weighting pattern is corrected using a nonlinear curve. A content-aware interpolation method is also proposed to warp the lightness weighting so that it fits the contextual structure of the photograph. Finally, the local contrast of the photograph is restored. Experiments show that the proposed algorithm performs this transfer successfully: the resulting vignetting effect is presented more naturally with regard to esthetic composition than vignetting achieved with popular software tools and camera models.
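To make the pipeline described above concrete, the following is a minimal sketch, not the authors' implementation: it approximates the painting's vignette by a heavily blurred lightness channel, applies a simple power-curve correction in place of the paper's nonlinear curve, replaces the content-aware warping with a plain resize, and omits the final local-contrast restoration. All file names and parameter values are illustrative assumptions.

```python
# Hedged sketch of example-based vignetting transfer (assumptions noted above).
import cv2
import numpy as np

def extract_weight_pattern(painting_bgr, blur_frac=0.25):
    """Approximate the painting's vignetting as a smooth lightness map in [0, 1]."""
    lab = cv2.cvtColor(painting_bgr, cv2.COLOR_BGR2LAB)
    L = lab[:, :, 0].astype(np.float32) / 255.0
    k = int(max(painting_bgr.shape[:2]) * blur_frac) | 1   # odd Gaussian kernel size
    smooth = cv2.GaussianBlur(L, (k, k), 0)
    return (smooth - smooth.min()) / (smooth.max() - smooth.min() + 1e-6)

def correct_weights(w, gamma=0.6, lo=0.4, hi=1.1):
    """Nonlinear correction so blending cannot over-darken or over-brighten."""
    return lo + (hi - lo) * np.power(w, gamma)

def apply_vignetting(photo_bgr, painting_bgr, strength=0.7):
    """Blend the corrected, resized weight pattern with the photo's lightness."""
    w = extract_weight_pattern(painting_bgr)
    w = correct_weights(w)
    # Plain resize stands in for the paper's content-aware warping step.
    w = cv2.resize(w, (photo_bgr.shape[1], photo_bgr.shape[0]))
    lab = cv2.cvtColor(photo_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    L = lab[:, :, 0] / 255.0
    L_out = (1.0 - strength) * L + strength * (L * w)   # adaptive multiplicative blend
    lab[:, :, 0] = np.clip(L_out * 255.0, 0, 255)
    return cv2.cvtColor(lab.astype(np.uint8), cv2.COLOR_LAB2BGR)

if __name__ == "__main__":
    photo = cv2.imread("photo.jpg")        # hypothetical input photograph
    painting = cv2.imread("painting.jpg")  # hypothetical example painting
    cv2.imwrite("vignetted.jpg", apply_vignetting(photo, painting))
```

The blend is multiplicative in lightness so that dark regions are darkened proportionally rather than clipped, and the corrected weight range keeps the adjustment within a bounded interval; both are simplifications of the adaptive blending described in the paper.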



Acknowledgments

This work was supported in part by: (i) the National Natural Science Foundation of China (Grant Nos. 61602313, 61620106008, and 61602312); (ii) the Shenzhen Commission of Scientific Research & Innovations under Grant No. JCYJ20170302153632883; (iii) the Tencent “Rhinoceros Birds” Scientific Research Foundation for Young Teachers of Shenzhen University; (iv) the Research Foundation of Shenzhen University (2016051); and (v) the Startup Foundation for Advanced Talents, Shenzhen.

Author information

Corresponding author

Correspondence to Xiaoyan Zhang.


Cite this article

Zhang, X., Constable, M. & Chan, K.L. Transfer of content-aware vignetting effect from paintings to photographs. Multimed Tools Appl 77, 23851–23875 (2018). https://doi.org/10.1007/s11042-018-5629-x
