Co-present or Not?

Embodiment, Situatedness and the Mona Lisa Gaze Effect

  • Chapter

Abstract

The interest in embodying and situating computer programmes took off in the autonomous agents community in the 1990s. Today, researchers and designers of programmes that interact with people on human terms endow their systems with humanoid physiognomies for a variety of reasons. In most cases, attempts at achieving this embodiment and situatedness have taken one of two directions: virtual characters or actual physical robots. In addition, a technique that is far from new is rapidly gaining ground: the projection of animated faces onto head-shaped 3D surfaces. In this chapter, we provide a history of this technique; an overview of its pros and cons; and an in-depth description of the cause and mechanics of the main drawback of 2D displays of 3D faces (and objects): the Mona Lisa gaze effect. We conclude with a description of an experimental paradigm that measures perceived directionality in general and the Mona Lisa gaze effect in particular.
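The geometry behind the Mona Lisa gaze effect can be caricatured in a few lines of code. The sketch below is a deliberately simplified toy model (the function names and the single-angle parameterisation are our own assumptions, not the chapter's experimental paradigm): on a flat display, the depicted gaze is defined relative to the picture surface and so travels with the viewer, whereas gaze from a physical 3D head is anchored in room coordinates.

```python
# Toy model of the Mona Lisa gaze effect. Angles are in degrees,
# measured in the horizontal plane around the face; 0 means the
# viewer stands (or the gaze points) straight ahead of the face.

def perceived_gaze_flat(viewer_angle, depicted_gaze=0.0):
    """Flat (2D) portrait or screen: the gaze-to-viewer offset is
    constant wherever the viewer stands, so a face depicted looking
    'straight out' (depicted_gaze = 0) appears to look at every
    viewer at once."""
    return depicted_gaze

def perceived_gaze_3d(viewer_angle, head_gaze=0.0):
    """Physical 3D head (or a face projected onto a head-shaped
    surface): gaze is fixed in room coordinates, so the offset
    grows as the viewer walks around the head."""
    return head_gaze - viewer_angle

for angle in (0, 30, 60):
    print(f"viewer at {angle:2d} deg: "
          f"flat offset {perceived_gaze_flat(angle):6.1f} deg, "
          f"3D offset {perceived_gaze_3d(angle):6.1f} deg")
```

With the flat display the offset stays at 0 for every viewer position (everyone feels looked at), while with the 3D head the offset tracks the viewer's displacement (only one position receives eye contact), which is the co-spatiality distinction the chapter examines.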


Notes

  1. Walt Disney’s Wonderful World of Color, Season 16, Episode 20, Walt Disney Productions.

  2. Michael Naimark has made a film showing the talking head projection in action, available at http://www.naimark.net/projects/head.html.


Author information

Corresponding author

Correspondence to Jens Edlund.


Copyright information

© 2013 Springer-Verlag London

About this chapter

Cite this chapter

Edlund, J., Al Moubayed, S., Beskow, J. (2013). Co-present or Not? In: Nakano, Y., Conati, C., Bader, T. (eds) Eye Gaze in Intelligent User Interfaces. Springer, London. https://doi.org/10.1007/978-1-4471-4784-8_10

  • DOI: https://doi.org/10.1007/978-1-4471-4784-8_10

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-4783-1

  • Online ISBN: 978-1-4471-4784-8

  • eBook Packages: Computer Science (R0)
