
Saliency-aware inter-image color transfer for image manipulation

Published in: Multimedia Tools and Applications

Abstract

This paper proposes a novel saliency-aware inter-image color transfer method for image manipulation. Specifically, given a source image, candidate images are first retrieved from a group of images in the same semantic category, and their saliency maps are obtained using an existing saliency model. Then, the proposed inter-image color transfer method transfers the colors of the high-saliency region in each candidate image to the target object region in the source image, generating a manipulated image. Finally, from the resulting set of manipulated images, the one whose saliency map achieves the highest weighted F-measure is selected as the final result. Experimental results show that the proposed method not only highlights objects effectively but also preserves the naturalness of images well, and consistently outperforms other image manipulation methods whether the manipulated images are viewed with or without the source image as reference.
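The core transfer step described above can be sketched as a Reinhard-style per-channel statistics transfer restricted to the two regions of interest: the candidate's high-saliency region supplies the target color statistics, and only the source's object region is modified. The function below is a minimal illustrative sketch, not the paper's exact method; the function name, the simple mean/std matching, and the choice to work directly on float RGB channels are assumptions made for clarity (the paper's pipeline also involves candidate retrieval and weighted F-measure-based selection, which are omitted here).

```python
import numpy as np

def masked_color_transfer(source, candidate, src_mask, cand_mask, eps=1e-6):
    """Transfer per-channel color statistics (mean/std matching in the
    spirit of Reinhard et al.) from the high-saliency region of a
    candidate image to the target object region of a source image.

    source, candidate : float arrays of shape (H, W, 3), values in [0, 1]
    src_mask, cand_mask : boolean arrays of shape (H, W)
    """
    out = source.copy()
    for c in range(3):
        src_vals = source[..., c][src_mask]
        cand_vals = candidate[..., c][cand_mask]
        mu_s, sigma_s = src_vals.mean(), src_vals.std() + eps
        mu_c, sigma_c = cand_vals.mean(), cand_vals.std() + eps
        # Shift and scale the masked source pixels so their channel
        # statistics match those of the candidate's masked region;
        # pixels outside src_mask are left untouched.
        out[..., c][src_mask] = (src_vals - mu_s) * (sigma_c / sigma_s) + mu_c
    return np.clip(out, 0.0, 1.0)
```

In practice such transfers are often performed in a decorrelated color space (e.g. lαβ or CIELab, as in Reinhard et al.'s color transfer) rather than raw RGB; the masked-statistics structure is the same.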



Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant No. 61771301.

Author information

Correspondence to Zhi Liu.



About this article


Cite this article

Liu, X., Liu, Z., Jiao, Q. et al. Saliency-aware inter-image color transfer for image manipulation. Multimed Tools Appl 78, 21629–21644 (2019). https://doi.org/10.1007/s11042-019-7450-6

