AFSnet: Fixation Prediction in Movie Scenes with Auxiliary Facial Saliency

  • Conference paper

Advances in Brain Inspired Cognitive Systems (BICS 2018)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10989)

Abstract

While data-driven methods for image saliency detection have matured considerably, video saliency detection, which must additionally account for inter-frame motion and temporal information, still needs further exploration. Unlike images, video data contains not only rich semantic information but also a large amount of contextual information and motion features. Video saliency also shows different tendencies across scene types: in movie scenes, faces provide the strongest visual stimulus to the viewer. Targeting this specific scenario, we propose an efficient and novel video attention prediction model with auxiliary facial saliency (AFSnet) to predict human eye fixations in movie scenes. The proposed model takes an FCN as its basic structure and improves prediction by adaptively combining facial saliency hints. We present qualitative and quantitative experiments to demonstrate the validity of the model.
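To make the adaptive fusion described above concrete, the following is a minimal sketch of how an FCN-based predictor might combine its own saliency output with an auxiliary facial saliency map. This is an illustrative assumption, not the paper's published architecture: the VGG-16 backbone, the per-pixel sigmoid gate, and all names (AFSnetSketch, fusion_gate, face_saliency) are hypothetical choices made for this example.

```python
# Illustrative sketch only: the backbone, gate design, and layer sizes are
# assumptions; the paper's exact AFSnet architecture is not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


class AFSnetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Fully convolutional backbone (hypothetical choice: VGG-16 features).
        self.backbone = vgg16(weights=None).features
        # Head mapping backbone features to a single-channel saliency map.
        self.saliency_head = nn.Conv2d(512, 1, kernel_size=1)
        # Gate predicting a per-pixel weight for the facial saliency hint.
        self.fusion_gate = nn.Conv2d(512, 1, kernel_size=1)

    def forward(self, frame, face_saliency):
        # frame: (B, 3, H, W) RGB frame.
        # face_saliency: (B, 1, H, W) heat map from an external face detector.
        feats = self.backbone(frame)
        # The network's own saliency estimate, squashed to (0, 1).
        sal = torch.sigmoid(self.saliency_head(feats))
        gate = torch.sigmoid(self.fusion_gate(feats))
        # Upsample the coarse predictions back to the input resolution.
        sal = F.interpolate(sal, size=frame.shape[-2:], mode="bilinear",
                            align_corners=False)
        gate = F.interpolate(gate, size=frame.shape[-2:], mode="bilinear",
                             align_corners=False)
        # Adaptive fusion: the gate decides, per pixel, how much the facial
        # hint contributes relative to the network's own saliency estimate.
        return (1 - gate) * sal + gate * face_saliency
```

In such a design, the face_saliency input would come from an off-the-shelf face detector (e.g. the "tiny faces" detector the paper cites) rendered as a heat map, and the learned gate lets the model lean on facial cues only where they actually help fixation prediction.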

Acknowledgment

The authors wish to acknowledge the support for this research work from the National Natural Science Foundation of China under grant Nos. 61572351 and 61772360.

Author information

Corresponding author

Correspondence to Zheng Wang.

Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Zhou, Z., Sun, M., Ren, J., Wang, Z. (2018). AFSnet: Fixation Prediction in Movie Scenes with Auxiliary Facial Saliency. In: Ren, J., et al. (eds.) Advances in Brain Inspired Cognitive Systems. BICS 2018. Lecture Notes in Computer Science, vol. 10989. Springer, Cham. https://doi.org/10.1007/978-3-030-00563-4_25

  • DOI: https://doi.org/10.1007/978-3-030-00563-4_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-00562-7

  • Online ISBN: 978-3-030-00563-4

  • eBook Packages: Computer Science, Computer Science (R0)
