DOI: 10.1145/3379156.3391361

Challenges in Interpretability of Neural Networks for Eye Movement Data

Published: 02 June 2020

Abstract

Many applications in eye tracking increasingly employ neural networks to solve machine learning tasks. In general, neural networks have achieved impressive results on many problems over the past few years, but they still suffer from a lack of interpretability due to their black-box behavior. While previous research on explainable AI has provided high levels of interpretability for models in image classification and natural language processing tasks, little effort has gone into interpreting and understanding networks trained on eye movement datasets. This paper discusses the importance of developing interpretability methods specifically for these models. We characterize the main problems in interpreting neural networks with this type of data, how they differ from the problems faced in other domains, and why existing techniques are not sufficient to address all of these issues. We present preliminary experiments showing the limitations of current techniques and how we can improve upon them. Finally, based on the evaluation of our experiments, we suggest future research directions that might lead to more interpretable and explainable neural networks for eye tracking.
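To ground the discussion, the kind of existing explainability technique the abstract alludes to can be illustrated with gradient-based input saliency, as commonly used in image classification, ported to gaze data. The following is a minimal numpy sketch under stated assumptions: the toy linear scanpath classifier, its random weights, and the (x, y, duration) feature layout are all illustrative and are not the paper's model.

```python
import numpy as np

# Hypothetical setup: a linear "scanpath classifier" over a flattened
# sequence of (x, y, duration) fixation features. The weights stand in
# for a trained model; they are random and purely illustrative.
rng = np.random.default_rng(0)
n_fix, n_feat = 5, 3
W = rng.normal(size=(n_fix * n_feat,))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(scanpath):
    # Probability of the positive class for one scanpath.
    return sigmoid(scanpath.ravel() @ W)

def input_gradient(scanpath):
    # Saliency as the gradient of the output probability w.r.t. the input.
    # For a sigmoid over a linear model, dp/dx = p * (1 - p) * W.
    p = predict(scanpath)
    return (p * (1.0 - p) * W).reshape(n_fix, n_feat)

scanpath = rng.normal(size=(n_fix, n_feat))  # fake (x, y, duration) rows
sal = input_gradient(scanpath)               # one score per raw channel
relevance = np.abs(sal).sum(axis=1)          # crude per-fixation relevance
```

Note where the attributions land: on raw gaze channels (coordinates and durations) rather than on semantically meaningful units such as fixations, saccades, or smooth pursuits. This arguably illustrates the gap the paper points to, since methods designed to highlight pixels or tokens do not directly yield event-level explanations for eye movement data.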



    Published In

    ETRA '20 Short Papers: ACM Symposium on Eye Tracking Research and Applications
    June 2020
    305 pages
    ISBN:9781450371346
    DOI:10.1145/3379156

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. Eye tracking
    2. deep learning
    3. explainable AI
    4. visualization

    Qualifiers

    • Short-paper
    • Research
    • Refereed limited

    Conference

    ETRA '20

    Acceptance Rates

    Overall Acceptance Rate 69 of 137 submissions, 50%


    Cited By

    • DMT-EV: An Explainable Deep Network for Dimension Reduction. IEEE Transactions on Visualization and Computer Graphics 30, 3 (Mar 2024), 1710–1727. DOI: 10.1109/TVCG.2022.3223399
    • A Trainable Feature Extractor Module for Deep Neural Networks and Scanpath Classification. Pattern Recognition (Dec 2024), 292–304. DOI: 10.1007/978-3-031-78201-5_19
    • Exploring the Effects of Scanpath Feature Engineering for Supervised Image Classification Models. Proceedings of the ACM on Human-Computer Interaction 7, ETRA (May 2023), 1–18. DOI: 10.1145/3591130
    • Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models. Proceedings of the 2023 Symposium on Eye Tracking Research and Applications (May 2023), 1–8. DOI: 10.1145/3588015.3588412
    • A Functional Contextual Account of Background Knowledge in Categorization: Implications for Artificial General Intelligence and Cognitive Accounts of General Knowledge. Frontiers in Psychology 13 (Mar 2022). DOI: 10.3389/fpsyg.2022.745306
    • Visual analytics tool for the interpretation of hidden states in recurrent neural networks. Visual Computing for Industry, Biomedicine, and Art 4, 1 (Sep 2021). DOI: 10.1186/s42492-021-00090-0
    • Inner-process visualization of hidden states in recurrent neural networks. Proceedings of the 13th International Symposium on Visual Information Communication and Interaction (Dec 2020), 1–5. DOI: 10.1145/3430036.3430047
