REGARD: Remote Gaze-Aware Reference Detector

Chapter in Eye Gaze in Intelligent User Interfaces

Abstract

Previous studies have shown that speakers tend to look at a visual referent just before uttering the corresponding word, while listeners look at the referent shortly after hearing the object's name. We first replicated these results in an ecologically valid setting in which collaborators are engaged in an unconstrained dialogue. Second, building on these findings, we developed a model, called REGARD, which monitors speech and gaze during collaboration to automatically detect associations between words and objects in the shared workspace. The results are promising: the model correctly detects most of the references made by the collaborators. Possible applications are briefly discussed.
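
Although this preview does not spell out REGARD's detection algorithm, the gaze-speech pattern described above can be illustrated with a short, hypothetical Python sketch: a spoken word is paired with the workspace objects fixated shortly before its onset (the speaker pattern) or shortly after it (the listener pattern). All names, types, and window sizes below are illustrative assumptions, not the chapter's implementation.

    from dataclasses import dataclass

    @dataclass
    class Fixation:
        object_id: str  # workspace object being looked at
        start: float    # fixation onset (seconds)
        end: float      # fixation offset (seconds)

    def candidate_referents(word_onset, fixations, before=1.0, after=1.0):
        """Return the objects fixated within a window around a word's onset.

        Speakers tend to fixate a referent just before naming it, and
        listeners just after hearing it, so the window extends in both
        directions. The one-second bounds are illustrative defaults.
        """
        lo, hi = word_onset - before, word_onset + after
        # keep every object whose fixation overlaps [lo, hi]
        return {f.object_id for f in fixations if f.start <= hi and f.end >= lo}

    # Usage: which objects might the word spoken at t = 10.5 s refer to?
    fixations = [Fixation("cell-box", 9.2, 9.8), Fixation("mito-box", 11.4, 12.0)]
    print(candidate_referents(10.5, fixations))  # {'cell-box', 'mito-box'}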

Notes

  1. Cross-recurrence is a general measure that quantifies the similarity, or coupling, between two dynamical systems (see the sketch after these notes).

  2. Concept-maps are diagrams consisting of boxes representing concepts and labeled links representing relations between concepts.

  3. IHMC: http://cmap.ihmc.us/.

  4. http://transag.sourceforge.net/.

  5. CMU-Sphinx (http://cmusphinx.sourceforge.net/html/cmusphinx.php) is an open-source general-purpose speech recognition engine developed at Carnegie Mellon University.

  6. “Regard” is also the French word for “gaze”.
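
To make note 1 concrete, the following minimal sketch computes a cross-recurrence rate between two gaze streams, assuming both have already been resampled onto a common time base as sequences of fixated-object labels. The function name and lag convention are assumptions for illustration, not the chapter's implementation.

    def cross_recurrence(gaze_a, gaze_b, lag=0):
        """Fraction of aligned samples at which both gaze streams fixate
        the same object, comparing gaze_a[t] with gaze_b[t + lag]."""
        if lag >= 0:
            pairs = list(zip(gaze_a, gaze_b[lag:]))
        else:
            pairs = list(zip(gaze_a[-lag:], gaze_b))
        return sum(a == b for a, b in pairs) / len(pairs) if pairs else 0.0

    # One label per time step; here the listener trails the speaker by one
    # sample, so recurrence peaks at lag = 1 rather than at lag = 0.
    speaker  = ["cell", "cell", "mito", "mito", "ribo"]
    listener = ["ribo", "cell", "cell", "mito", "mito"]
    print(cross_recurrence(speaker, listener, lag=0))  # 0.4
    print(cross_recurrence(speaker, listener, lag=1))  # 1.0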

Acknowledgements

This work was funded by the Swiss National Science Foundation (grant #K-12K1-117909).

Author information

Correspondence to Marc-Antoine Nüssli.

Appendix

To give the reader a general idea of the type of dialogue that occurred during the task, Tables 5.3 and 5.4 show translated excerpts from two different dyads. References to objects on the map are set in quotation marks. The first excerpt (Table 5.3) shows a dialogue centered on the objects drawn on the map, while the second (Table 5.4) is more conceptual, with few explicit references to the objects of the concept-map.

Table 5.3 Translated excerpt from the verbal interaction of a dyad (originally in French)
Table 5.4 Translated excerpt from the verbal interaction of a dyad (originally in French)

Copyright information

© 2013 Springer-Verlag London

About this chapter

Cite this chapter

Nüssli, M.-A., Jermann, P., Sangin, M., Dillenbourg, P. (2013). REGARD: Remote Gaze-Aware Reference Detector. In: Nakano, Y., Conati, C., Bader, T. (eds) Eye Gaze in Intelligent User Interfaces. Springer, London. https://doi.org/10.1007/978-1-4471-4784-8_5

  • DOI: https://doi.org/10.1007/978-1-4471-4784-8_5

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-4783-1

  • Online ISBN: 978-1-4471-4784-8

  • eBook Packages: Computer Science, Computer Science (R0)
