Task-Driven Saliency Detection on Music Video

  • Conference paper
Computer Vision - ACCV 2014 Workshops (ACCV 2014)

Part of the book series: Lecture Notes in Computer Science, volume 9009

Abstract

We propose a saliency model that estimates task-driven eye movement. Human eye-movement patterns are affected by the observer’s task and mental state [1]. However, existing saliency models are computed from low-level image features such as bright regions, edges, and colors. In this paper, tasks (e.g., evaluating a piano performance) are given to observers watching music videos. Unlike existing visual-only methods, we use musical score features together with image features to detect saliency. We show that our saliency model outperforms existing models when evaluated against recorded eye-movement patterns.
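The paper's actual model is not reproduced on this page; as an illustration of the general idea the abstract describes, combining a low-level image saliency map with a task cue derived from the musical score, here is a minimal sketch. The center-surround contrast stand-in, the linear fusion, and the mixing weight `w_score` are all hypothetical simplifications, not the authors' method:

```python
import numpy as np

def intensity_saliency(frame):
    """Stand-in low-level saliency: deviation from the mean intensity.
    (A simplification of bottom-up models such as Itti et al. [6].)"""
    sal = np.abs(frame - frame.mean())
    return sal / (sal.max() + 1e-8)

def fuse_saliency(image_sal, score_sal, w_score=0.5):
    """Linear fusion of image-based and score-based saliency maps.
    `w_score` is a hypothetical mixing weight, not from the paper."""
    fused = (1 - w_score) * image_sal + w_score * score_sal
    return fused / (fused.max() + 1e-8)

# Toy 4x4 grayscale frame with one bright pixel.
frame = np.zeros((4, 4))
frame[0, 0] = 1.0

# Hypothetical score-driven cue, e.g. the pianist's hands at a loud onset.
score_cue = np.zeros((4, 4))
score_cue[1, 2] = 1.0

fused = fuse_saliency(intensity_saliency(frame), score_cue)
```

With the score cue mixed in, the fused map's peak shifts toward the task-relevant region rather than the merely bright one, which is the qualitative behavior the abstract claims for task-driven saliency.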


References

  1. Yarbus, A.L.: Eye Movements and Vision. Plenum Press, New York (1967)

  2. Henderson, J.M., Shinkareva, S.V., Wang, J., Luke, S.G., Olejarczyk, J.: Predicting cognitive state from eye movements. PLoS One 8(5), e64937 (2013)

  3. DeAngelus, M., Pelz, J.B.: Top-down control of eye movements: Yarbus revisited. Visual Cogn. 17, 790–811 (2009)

  4. Borji, A., Itti, L.: Defending Yarbus: eye movements reveal observers’ task. J. Vis. 14(3), 29 (2014)

  5. Kunze, K., Utsumi, Y., Shiga, Y., Kise, K., Bulling, A.: I know what you are reading: recognition of document types using mobile eye tracking. In: International Symposium on Wearable Computers (2013)

  6. Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. PAMI 20(11), 1254–1259 (1998)

  7. Yang, C., Zhang, L., Lu, H., Ruan, X., Yang, M.H.: Saliency detection via graph-based manifold ranking. In: CVPR, pp. 3166–3173 (2013)

  8. Riche, N., Mancas, M., Culibrk, D., Crnojevic, V., Gosselin, B., Dutoit, T.: Dynamic saliency models and human attention: a comparative study on videos. In: Lee, K.M., Matsushita, Y., Rehg, J.M., Hu, Z. (eds.) ACCV 2012, Part III. LNCS, vol. 7726, pp. 586–598. Springer, Heidelberg (2013)

  9. Bruce, N., Tsotsos, J.: Saliency based on information maximization. In: NIPS (2005)

  10. Harel, J., Koch, C., Perona, P.: Graph-based visual saliency. In: NIPS (2006)

  11. Seo, H.J., Milanfar, P.: Static and space-time visual saliency detection by self-resemblance. J. Vis. 9(15), 1–27 (2009)

  12. Wang, L., Xue, J., Zheng, N., Hua, G.: Automatic salient object extraction with contextual cue. In: ICCV (2011)

  13. Shi, K., Wang, K., Lu, J., Lin, L.: PISA: pixelwise image saliency by aggregating complementary appearance contrast measures with spatial priors. In: CVPR (2013)

  14. Iwatsuki, A., Hirayama, T., Mase, K.: Analysis of soccer coach’s eye gaze behavior. In: Proceedings of ASVAI (2013)

  15. Peters, R.J., et al.: Components of bottom-up gaze allocation in natural images. Vis. Res. 45(18), 2397–2416 (2005)

  16. Ouerhani, N., von Wartburg, R., Hugli, H., Muri, R.M.: Empirical validation of a saliency-based model of visual attention. Electron. Lett. Comput. Vis. Image Anal. 3(1), 13–24 (2003)

  17. Bruce, N., Tsotsos, J.: Saliency based on information maximization. In: NIPS (2005)

  18. Achanta, R., Hemami, S., Estrada, F., Susstrunk, S.: Frequency-tuned salient region detection. In: CVPR (2009)


Author information

Correspondence to Shunsuke Numano.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Numano, S., Enami, N., Ariki, Y. (2015). Task-Driven Saliency Detection on Music Video. In: Jawahar, C., Shan, S. (eds) Computer Vision - ACCV 2014 Workshops. ACCV 2014. Lecture Notes in Computer Science, vol 9009. Springer, Cham. https://doi.org/10.1007/978-3-319-16631-5_48

  • DOI: https://doi.org/10.1007/978-3-319-16631-5_48

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-16630-8

  • Online ISBN: 978-3-319-16631-5

  • eBook Packages: Computer Science (R0)
