DOI: 10.1145/2578153.2578199

Influence of stimulus and viewing task types on a learning-based visual saliency model

Published: 26 March 2014

Abstract

Learning-based approaches that use actual human gaze data have proven to be an efficient way to acquire accurate visual saliency models and have attracted much interest in recent years. However, it remains unclear how different types of stimuli (e.g., fractal images, or natural images with or without human faces) and viewing tasks (e.g., free viewing or a preference rating task) affect the learned models. In this study, we quantitatively investigate how learned saliency models differ when trained on datasets collected under different settings (image context level and viewing task) and discuss the importance of choosing appropriate experimental settings.


    Published In

    ETRA '14: Proceedings of the Symposium on Eye Tracking Research and Applications
    March 2014
    394 pages
    ISBN:9781450327510
    DOI:10.1145/2578153
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. context level
    2. dataset
    3. difference
    4. free viewing
    5. preference rating
    6. saliency model
    7. statistical hypothesis test

    Qualifiers

    • Research-article

    Conference

ETRA '14: Eye Tracking Research and Applications
March 26 - 28, 2014
Safety Harbor, Florida

    Acceptance Rates

    Overall Acceptance Rate 69 of 137 submissions, 50%
