DOI: 10.1145/3448017.3457377
research-article
Open Access

The Power of Linked Eye Movement Data Visualizations

Published: 25 May 2021

ABSTRACT

In this paper, we showcase several eye movement data visualizations and show how they can be interactively linked to form a flexible visualization tool for eye movement data. The aim of this project is to create a user-friendly and easily accessible tool for interpreting visual attention patterns and facilitating the analysis of eye movement data. To increase accessibility and usability, we provide a web-based solution. Users can upload their own eye movement data sets and inspect them from several perspectives simultaneously. Insights can be shared and collaboratively discussed with others. The currently available visualization techniques are a 2D density plot, a scanpath representation, a bee swarm, and a scarf plot, all supporting several standard interaction techniques. Moreover, thanks to the linking feature, users can select data in one visualization and the same data points are highlighted in all active visualizations, which supports comparison tasks. The tool also allows uploading both private and public data sets, and it can generate URLs to share the data and settings of customized visualizations. A user study showed that the tool is understandable and that providing linked, customizable views is beneficial for analyzing eye movement data.
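The linking feature described above follows the classic brushing-and-linking pattern: a selection made in one view is broadcast to every other active view. As a minimal sketch (not the authors' actual implementation; all names here are illustrative), a shared selection model in TypeScript could look like this:

```typescript
// Hypothetical sketch of brushing-and-linking: a shared selection model
// broadcasts the ids of selected gaze points to every registered view.
type Listener = (selected: Set<number>) => void;

class SelectionModel {
  private listeners: Listener[] = [];
  private selected = new Set<number>();

  // Each view (density plot, scanpath, bee swarm, scarf plot) registers
  // a callback that re-renders its highlights when the selection changes.
  subscribe(listener: Listener): void {
    this.listeners.push(listener);
  }

  // Called by whichever view the user brushed in; all linked views
  // receive the same set of selected data points.
  select(ids: number[]): void {
    this.selected = new Set(ids);
    this.listeners.forEach((l) => l(this.selected));
  }
}

// Two toy "views" that simply record what they would highlight.
const model = new SelectionModel();
const highlightedInScanpath: number[] = [];
const highlightedInScarfPlot: number[] = [];

model.subscribe((sel) => highlightedInScanpath.push(...sel));
model.subscribe((sel) => highlightedInScarfPlot.push(...sel));

model.select([3, 7, 12]); // brush three fixations in any one view
console.log(highlightedInScanpath, highlightedInScarfPlot);
```

Centralizing the selection state this way keeps the individual visualizations decoupled: each view only knows how to highlight a set of point ids, not where the selection came from, which is what makes it straightforward to add or remove views in a linked-view tool.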

    • Published in

      ETRA '21 Full Papers: ACM Symposium on Eye Tracking Research and Applications
      May 2021, 122 pages
      ISBN: 9781450383448
      DOI: 10.1145/3448017
      Publisher: Association for Computing Machinery, New York, NY, United States

      Copyright © 2021 ACM. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


      Qualifiers

      • Research article, refereed

      Acceptance Rates

      Overall acceptance rate: 69 of 137 submissions, 50%

