
Machine learning-based framework for saliency detection in distorted images

Published in Multimedia Tools and Applications.

Abstract

Visual saliency detection is useful for image compression, image segmentation, image retrieval, and other image processing applications. Most existing saliency detection algorithms, however, are designed for distortion-free images, which is not always the case in practice. In this paper, we first evaluate the performance of state-of-the-art saliency detection algorithms under different distortion types and levels. We then propose a machine learning-based framework for saliency detection in images affected by two common types of distortion, noise and JPEG compression. In this framework, a machine learning method first predicts the distortion level, the distortion is then removed using a parameter setting tuned for that level, and finally the saliency map is computed with existing saliency detection algorithms. We evaluate the saliency detection algorithms on the Tampere Image Database (TID2013), which was originally proposed for image quality assessment. To adapt TID2013 to visual saliency detection, we manually label the salient objects in each image to obtain its ground-truth saliency map. Experimental results demonstrate that distortions usually degrade the performance of saliency detection algorithms, particularly at high distortion levels, and that the performance rankings of the algorithms differ between distortion-free and distorted images. Moreover, the proposed machine learning-based framework improves the performance of saliency detection algorithms on distorted images at most distortion levels, especially at high distortion levels.
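For illustration, the sketch below shows one way the three-stage pipeline described above (predict the distortion level, restore the image with a level-tuned parameter, then compute saliency) could be wired together. It is a minimal sketch under stated assumptions, not the authors' implementation: the feature extractor, the per-level denoising strengths, the non-local-means denoiser, and the saliency function are hypothetical stand-ins; only the use of a Random Forest classifier follows the implementation pointed to in the Notes.

```python
# Minimal sketch of the three-stage framework (assumed helper names, not the
# authors' code): (1) predict the distortion level with a Random Forest,
# (2) denoise with a parameter tuned for that level, (3) run a saliency model.
import numpy as np
import cv2
from sklearn.ensemble import RandomForestClassifier

def extract_distortion_features(img):
    """Hypothetical stand-in for the paper's features: a few global
    statistics that vary with noise strength / compression artifacts."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    lap = cv2.Laplacian(gray, cv2.CV_32F)          # high-frequency energy
    return np.array([gray.std(), lap.std(),
                     np.abs(np.diff(gray, axis=1)).mean()])

# Stage 1: distortion-level classifier, trained offline on images whose
# distortion levels are known (training data omitted in this sketch).
level_clf = RandomForestClassifier(n_estimators=100, random_state=0)
# level_clf.fit(train_features, train_levels)

# Per-level denoising strength, tuned offline (illustrative values only).
H_BY_LEVEL = {1: 3, 2: 5, 3: 7, 4: 10, 5: 15}

def saliency_on_distorted(img, level_clf, saliency_fn):
    """Predict the distortion level, restore the image with the
    level-tuned parameter, then compute saliency on the restored image."""
    feats = extract_distortion_features(img)[None, :]
    level = int(level_clf.predict(feats)[0])
    h = H_BY_LEVEL[level]
    # Non-local means is only a stand-in for the denoiser used in the paper.
    restored = cv2.fastNlMeansDenoisingColored(img, None, h, h, 7, 21)
    return saliency_fn(restored)  # any existing saliency detection algorithm
```

The key design point, as described in the abstract, is that restoration is decoupled from saliency computation: each predicted distortion level selects a restoration parameter tuned offline for that level, so any off-the-shelf saliency detector can be applied unchanged to the restored image.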


Notes

  1. https://code.google.com/p/randomforest-matlab/.


Acknowledgments

This work was partly supported by the National Natural Science Foundation of China under Grant No. 61300102 and No. 61672158, the Fujian Natural Science Funds for Distinguished Young Scholar under Grant No. 2015J06014, the Natural Science Foundation of Fujian Province under Grant No. 2014J01233, and the Key Project of Industry-Academic Cooperation of Fujian Province under Grant No. 2014H6014.

Author information


Corresponding author

Correspondence to Yuzhong Chen.


About this article


Cite this article

Niu, Y., Lin, L., Chen, Y. et al. Machine learning-based framework for saliency detection in distorted images. Multimed Tools Appl 76, 26329–26353 (2017). https://doi.org/10.1007/s11042-016-4128-1

