DOI: 10.1145/3379156.3391338

Decoding Task From Oculomotor Behavior In Virtual Reality

Published: 02 June 2020

Abstract

In the present study, we explore whether, and how well, tasks can be predicted from eye movements in a virtual environment. We designed four different tasks in which participants had to align two cubes of different sizes. To determine where participants looked, we used a ray-based method to calculate the point-of-regard (POR) on each cube at each time point. Using leave-one-subject-out cross-validation, our model predicted the four alignment types clearly above chance, with an F1 score of 0.51 ± 0.17 (chance level 0.25). These results suggest that the type of task can be decoded from the aggregation of PORs. We further discuss the implications of object size on task inference and thereby outline a roadmap for designing intention-recognition experiments in virtual reality.
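
The pipeline outlined above (casting the gaze ray against each cube to obtain PORs, aggregating them into per-trial features, and classifying the four alignment tasks with a support vector machine under leave-one-subject-out cross-validation) can be sketched roughly as follows. This is a minimal illustration assuming scikit-learn; the feature layout, cube positions, data shapes, and SVM settings are placeholders, not the authors' implementation.

```python
# Hypothetical sketch: POR extraction via ray casting plus leave-one-subject-out
# task classification. Toy data throughout; only the overall structure mirrors
# the approach described in the abstract.
import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def ray_hits_cube(origin, direction, cube_min, cube_max):
    """Slab test: does the gaze ray intersect an axis-aligned cube?"""
    t_near, t_far = -np.inf, np.inf
    for axis in range(3):
        if abs(direction[axis]) < 1e-9:  # ray parallel to this pair of faces
            if not (cube_min[axis] <= origin[axis] <= cube_max[axis]):
                return False
            continue
        t1 = (cube_min[axis] - origin[axis]) / direction[axis]
        t2 = (cube_max[axis] - origin[axis]) / direction[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_far >= max(t_near, 0.0)


def trial_features(origins, directions, cubes):
    """Aggregate PORs into one vector: fraction of gaze samples on each cube."""
    hits = np.zeros(len(cubes))
    for o, d in zip(origins, directions):
        for i, (lo, hi) in enumerate(cubes):
            if ray_hits_cube(o, d, lo, hi):
                hits[i] += 1
    return hits / len(origins)


# Toy stand-in data: 12 subjects x 40 trials, 200 gaze samples per trial,
# two axis-aligned cubes of different sizes, random task labels 0-3.
rng = np.random.default_rng(0)
cubes = [(np.array([-0.6, -0.1, 0.9]), np.array([-0.4, 0.1, 1.1])),   # small cube
         (np.array([0.2, -0.2, 0.8]), np.array([0.6, 0.2, 1.2]))]     # large cube
n_subjects, n_trials, n_samples = 12, 40, 200
X, y, groups = [], [], []
for subject in range(n_subjects):
    for _ in range(n_trials):
        origins = rng.normal(scale=0.05, size=(n_samples, 3))
        directions = rng.normal(size=(n_samples, 3))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        X.append(trial_features(origins, directions, cubes))
        y.append(rng.integers(0, 4))          # one of the four alignment tasks
        groups.append(subject)
X, y, groups = np.array(X), np.array(y), np.array(groups)

# Leave-one-subject-out cross-validation; macro F1 over the four task classes.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = []
for train, test in LeaveOneGroupOut().split(X, y, groups):
    clf.fit(X[train], y[train])
    scores.append(f1_score(y[test], clf.predict(X[test]), average="macro"))
print(f"F1: {np.mean(scores):.2f} ± {np.std(scores):.2f} (chance ≈ 0.25)")
```

On this random toy data the classifier should land near the 0.25 chance level; with real gaze features that actually separate the four alignment types, the same leave-one-subject-out loop yields per-subject F1 scores of the kind summarized in the abstract.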




Published In

ETRA '20 Short Papers: ACM Symposium on Eye Tracking Research and Applications
June 2020
305 pages
ISBN: 9781450371346
DOI: 10.1145/3379156

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. eye movements
  2. support vector machines
  3. task inference
  4. virtual reality

Qualifiers

  • Short-paper
  • Research
  • Refereed limited

Funding Sources

  • BMBF

Conference

ETRA '20

Acceptance Rates

Overall acceptance rate: 69 of 137 submissions (50%)


Article Metrics

  • Downloads (last 12 months): 41
  • Downloads (last 6 weeks): 2

Reflects downloads up to 05 Mar 2025

Cited By

  • (2024) Just-in-time: Gaze guidance in natural behavior. PLOS Computational Biology 20(10), e1012529. DOI: 10.1371/journal.pcbi.1012529. Online publication date: 24-Oct-2024
  • (2024) Gaze-Based Intention Estimation: Principles, Methodologies, and Applications in HRI. ACM Transactions on Human-Robot Interaction 13(3), 1–30. DOI: 10.1145/3656376. Online publication date: 26-Sep-2024
  • (2024) Real-World Scanpaths Exhibit Long-Term Temporal Dependencies: Considerations for Contextual AI for AR Applications. Proceedings of the 2024 Symposium on Eye Tracking Research and Applications, 1–7. DOI: 10.1145/3649902.3656352. Online publication date: 4-Jun-2024
  • (2024) Uncovering and Addressing Blink-Related Challenges in Using Eye Tracking for Interactive Systems. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–23. DOI: 10.1145/3613904.3642086. Online publication date: 11-May-2024
  • (2024) Tasks Reflected in the Eyes: Egocentric Gaze-Aware Visual Task Type Recognition in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 30(11), 7277–7287. DOI: 10.1109/TVCG.2024.3456164. Online publication date: 1-Nov-2024
  • (2023) Highlighting the Challenges of Blinks in Eye Tracking for Interactive Systems. Proceedings of the 2023 Symposium on Eye Tracking Research and Applications, 1–7. DOI: 10.1145/3588015.3589202. Online publication date: 30-May-2023
  • (2023) Action affordance affects proximal and distal goal-oriented planning. European Journal of Neuroscience 57(9), 1546–1560. DOI: 10.1111/ejn.15963. Online publication date: 27-Mar-2023
  • (2023) EHTask: Recognizing User Tasks From Eye and Head Movements in Immersive Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 29(4), 1992–2004. DOI: 10.1109/TVCG.2021.3138902. Online publication date: 1-Apr-2023
  • (2021) Gaze-Based Intention Estimation for Shared Autonomy in Pick-and-Place Tasks. Frontiers in Neurorobotics 15. DOI: 10.3389/fnbot.2021.647930. Online publication date: 16-Apr-2021
