Abstract
Where people look when watching videos is believed to be heavily influenced by temporal patterns. In this work, we test this assumption by quantifying to what extent gaze on recent video saliency benchmarks can be predicted by a static baseline model. On the recent LEDOV dataset, we find that at least 75% of the explainable information, as defined by a gold standard model, can be explained using static features alone. Our static baseline model "DeepGaze MR" even outperforms state-of-the-art video saliency models, despite deliberately ignoring all temporal patterns. Visual inspection of our static baseline's failure cases shows that clear temporal effects on human gaze placement do exist, but they are both rare in the dataset and not captured by any of the recent video saliency models. To focus the development of video saliency models on better capturing temporal effects, we construct a meta-dataset consisting of those examples that require temporal information.
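To make the headline number concrete: in the information-gain framework of Kümmerer et al. (PNAS 2015), "fraction of explainable information" can be read as a ratio of average log-likelihood gains, comparing a model and a gold-standard model against a shared image-independent baseline such as a center bias. The sketch below is a minimal illustration under that reading, not the authors' evaluation code; all function names, the (row, col) fixation convention, the toy numbers, and the 0.5 threshold for selecting a temporally-demanding subset are illustrative assumptions.

```python
import numpy as np

def avg_log_likelihood(density_maps, fixations):
    """Mean log-likelihood in bits per fixation: each density map is a
    per-frame probability distribution over pixels (sums to one), and
    each fixation is a (row, col) pixel index into the matching frame."""
    lls = [np.log2(density[y, x]) for density, (y, x) in zip(density_maps, fixations)]
    return float(np.mean(lls))

def fraction_explained(ll_model, ll_baseline, ll_gold):
    """Share of the explainable information (gold standard minus baseline)
    captured by the model; 1.0 means the model matches the gold standard."""
    return (ll_model - ll_baseline) / (ll_gold - ll_baseline)

# Hypothetical per-clip log-likelihoods (model, baseline, gold standard):
# keep clips where the static model falls well short of the gold standard,
# i.e. candidates that plausibly need temporal information.
clips = {"clip_a": (1.5, 0.0, 2.0), "clip_b": (0.4, 0.0, 2.0)}
temporal_subset = [
    name for name, (m, b, g) in clips.items()
    if fraction_explained(m, b, g) < 0.5
]
print(temporal_subset)  # ['clip_b']
```

Because predictions enter as probability densities rather than saliency maps, such a ratio compares models directly as probabilistic predictors instead of through any single downstream saliency metric.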
Acknowledgements
This work was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation): Germany’s Excellence Strategy – EXC 2064/1 – 390727645 and SFB 1233, Robust Vision: Inference Principles and Neural Mechanisms, TP 3, project number: 276693517. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Matthias Tangemann.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Tangemann, M., Kümmerer, M., Wallis, T.S.A., Bethge, M. (2020). Measuring the Importance of Temporal Features in Video Saliency. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds.) Computer Vision – ECCV 2020. Lecture Notes in Computer Science, vol. 12373. Springer, Cham. https://doi.org/10.1007/978-3-030-58604-1_40