DOI: 10.1145/3359997.3365738

An Eye-Tracking Dataset for Visual Attention Modelling in a Virtual Museum Context

Published: 14 November 2019

Abstract

Predicting the user’s visual attention enables a virtual reality (VR) environment to provide a context-aware and interactive user experience. Researchers have attempted to understand visual attention using eye-tracking data in a 2D plane. In this poster, we propose the first 3D eye-tracking dataset for visual attention modelling in the context of a virtual museum. It comprises about 7 million records and may facilitate visual attention modelling in a 3D VR space.
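The poster does not publish the dataset schema, but the kind of processing such 3D eye-tracking records enable can be sketched. The example below is a minimal, hypothetical illustration (the record fields, exhibit names, and bounding-sphere geometry are assumptions, not the dataset's actual format): given a head position and a normalized gaze direction in world space, it ray-casts against exhibit bounding spheres to estimate which museum object the user is attending to.

```python
import math
from dataclasses import dataclass

@dataclass
class GazeRecord:
    """One hypothetical 3D eye-tracking sample."""
    timestamp: float        # seconds since session start
    head_pos: tuple         # (x, y, z) head position in world space
    gaze_dir: tuple         # normalized gaze direction vector

def attended_exhibit(record, exhibits):
    """Return the name of the nearest exhibit whose bounding sphere the
    gaze ray hits, or None. `exhibits` maps name -> (center, radius)."""
    ox, oy, oz = record.head_pos
    dx, dy, dz = record.gaze_dir
    best, best_t = None, math.inf
    for name, (center, radius) in exhibits.items():
        cx, cy, cz = center
        # Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t,
        # with a == 1 because the direction is normalized.
        lx, ly, lz = ox - cx, oy - cy, oz - cz
        b = 2.0 * (dx * lx + dy * ly + dz * lz)
        c = lx * lx + ly * ly + lz * lz - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            continue                      # ray misses this sphere
        t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
        if 0.0 < t < best_t:              # hit in front of the viewer
            best_t, best = t, name
    return best
```

For example, a user standing at the origin and looking straight down the z-axis toward a (hypothetical) vase at `(0, 1.5, 5)` would yield `attended_exhibit(rec, exhibits) == "vase"`. Aggregating such per-sample hits over time is one simple way to turn raw gaze records into object-level attention labels.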


Cited By

  • (2022) EDVAM: a 3D eye-tracking dataset for visual attention modeling in a virtual museum. Frontiers of Information Technology & Electronic Engineering 23, 1 (Feb. 2022), 101–112. https://doi.org/10.1631/FITEE.2000318
  • (2021) Predicting user visual attention in virtual reality with a deep learning model. Virtual Reality 25, 4 (Dec. 2021), 1123–1136. https://doi.org/10.1007/s10055-021-00512-7


Published In

VRCAI '19: Proceedings of the 17th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry
November 2019
354 pages
ISBN:9781450370028
DOI:10.1145/3359997
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. eye-tracking datasets
  2. gaze detection
  3. neural networks
  4. visual attention

Qualifiers

  • Abstract
  • Research
  • Refereed limited

Conference

VRCAI '19

Acceptance Rates

Overall acceptance rate: 51 of 107 submissions (48%)

