Abstract
We propose a saliency model that estimates task-driven eye movements. Human eye-movement patterns are affected by the observer's task and mental state [1]. However, existing saliency models detect salient regions from low-level image features such as bright regions, edges, and colors. In this paper, a task (e.g., evaluating a piano performance) is given to observers watching music videos. Unlike existing vision-based methods, we use musical score features in addition to image features to detect saliency. We show that our saliency model outperforms existing models in predicting observers' eye-movement patterns.
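The abstract describes two ingredients: fusing score-based features with image-based saliency, and evaluating the resulting map against recorded eye movements. The paper's actual fusion scheme and evaluation protocol are not given here, so the sketch below is only illustrative: it assumes a simple linear blend of two hypothetical saliency maps (`image_sal`, `score_sal`, weight `alpha`) and scores the result with Normalized Scanpath Saliency (NSS), the fixation-based metric introduced by Peters et al. [17].

```python
import numpy as np

def combine_saliency(image_sal, score_sal, alpha=0.5):
    """Linear blend of an image-based and a score-based saliency map.
    `alpha` is a hypothetical mixing weight; the paper's actual fusion
    method may differ. Output is rescaled to [0, 1]."""
    s = alpha * image_sal + (1.0 - alpha) * score_sal
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def nss(sal_map, fixations):
    """Normalized Scanpath Saliency (Peters et al., 2005): the mean of
    the z-scored saliency values sampled at fixated pixels.
    `fixations` is a list of (row, col) fixation coordinates."""
    z = (sal_map - sal_map.mean()) / (sal_map.std() + 1e-12)
    return float(np.mean([z[y, x] for (y, x) in fixations]))
```

A higher NSS means fixations land on pixels the model marked as salient; chance level is roughly zero, so a combined map that scores above the image-only map on the same fixations would support the paper's claim.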
References
Yarbus, A.L.: Eye Movements and Vision. Plenum Press, New York (1967)
Henderson, J.M., Shinkareva, S.V., Wang, J., Luke, S.G., Olejarczyk, J.: Predicting cognitive state from eye movements. PLoS One 8(5), e64937 (2013)
DeAngelus, M., Pelz, J.B.: Top-down control of eye movements: Yarbus revisited. Visual Cogn. 17, 790–811 (2009)
Borji, A., Itti, L.: Defending Yarbus: eye movements reveal observers’ task. J. Vis. 14(3), 29 (2014)
Kunze, K., Utsumi, Y., Shiga, Y., Kise, K., Bulling, A.: I know what you are reading: recognition of document types using mobile eye tracking. In: International Symposium on Wearable Computers (2013)
Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. PAMI 20(11), 1254–1259 (1998)
Yang, C., Zhang, L., Lu, H., Ruan, X., Yang, M.H.: Saliency detection via graph-based manifold ranking. In: CVPR, pp. 3166–3173 (2013)
Riche, N., Mancas, M., Culibrk, D., Crnojevic, V., Gosselin, B., Dutoit, T.: Dynamic saliency models and human attention: a comparative study on videos. In: Lee, K.M., Matsushita, Y., Rehg, J.M., Hu, Z. (eds.) ACCV 2012, Part III. LNCS, vol. 7726, pp. 586–598. Springer, Heidelberg (2013)
Bruce, N., Tsotsos, J.: Saliency based on information maximization. In: NIPS (2005)
Harel, J., Koch, C., Perona, P.: Graph-based visual saliency. In: NIPS (2006)
Seo, H.J., Milanfar, P.: Static and space-time visual saliency detection by self-resemblance. J. Vis. 9(15), 1–27 (2009)
Wang, L., Xue, J., Zheng, N., Hua, G.: Automatic salient object extraction with contextual cue. In: ICCV (2011)
Shi, K., Wang, K., Lu, J., Lin, L.: PISA: pixelwise image saliency by aggregating complementary appearance contrast measures with spatial priors. In: CVPR (2013)
Iwatsuki, A., Hirayama, T., Mase, K.: Analysis of soccer coach’s eye gaze behavior. In: Proceedings of ASVAI (2013)
Peters, R.J., Iyer, A., Itti, L., Koch, C.: Components of bottom-up gaze allocation in natural images. Vis. Res. 45(18), 2397–2416 (2005)
Ouerhani, N., von Wartburg, R., Hügli, H., Müri, R.M.: Empirical validation of the saliency-based model of visual attention. Electron. Lett. Comput. Vis. Image Anal. 3(1), 13–24 (2003)
Bruce, N., Tsotsos, J.: Saliency based on information maximization. In: Advances in Neural Information Processing Systems (2005)
Achanta, R., Hemami, S., Estrada, F., Susstrunk, S.: Frequency-tuned salient region detection. In: CVPR (2009)
© 2015 Springer International Publishing Switzerland
Cite this paper
Numano, S., Enami, N., Ariki, Y. (2015). Task-Driven Saliency Detection on Music Video. In: Jawahar, C., Shan, S. (eds) Computer Vision - ACCV 2014 Workshops. ACCV 2014. Lecture Notes in Computer Science(), vol 9009. Springer, Cham. https://doi.org/10.1007/978-3-319-16631-5_48
DOI: https://doi.org/10.1007/978-3-319-16631-5_48
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-16630-8
Online ISBN: 978-3-319-16631-5