
Animatronic shader lamps avatars

Special Issue: Augmented Reality

Abstract

Applications such as telepresence and training involve the display of real or synthetic humans to multiple viewers. When such humans are rendered on conventional displays, non-verbal cues such as head pose, gaze direction, body posture, and facial expression are difficult to convey correctly to all viewers. In addition, a framed image of a human conveys only a limited physical sense of presence, primarily through the display’s location. While progress continues on articulated robots that mimic humans, the focus has been on the motion and behavior of the robots rather than on their appearance. We introduce a new approach for robotic avatars of real people: the use of cameras and projectors to capture and map both the dynamic motion and the appearance of a real person onto a humanoid animatronic model. We call these devices animatronic Shader Lamps Avatars (SLA). We present a proof-of-concept prototype comprising a camera, a tracking system, a digital projector, and a life-sized Styrofoam head mounted on a pan-tilt unit. The system captures imagery of a moving, talking user and maps the appearance and motion onto the animatronic SLA, delivering a dynamic, real-time representation of the user to multiple viewers.
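
The capture-track-retarget loop summarized above (capture a frame of the user, estimate head pose, drive the pan-tilt unit, and re-render the projected face texture) can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' implementation: it uses an OpenCV Haar face detector as a stand-in for the tracking system, an invented proportional gain, and a print statement where a real system would command the pan-tilt unit and update the projector rendering.

    # Hypothetical sketch of the capture -> track -> retarget loop described above.
    # The Haar face detector stands in for the tracking system, the gain constant is
    # illustrative, and the print statement marks where a real system would drive
    # the pan-tilt unit and re-render the projector image.
    import cv2

    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def estimate_head_offset(frame):
        """Return the detected face center's offset from the image center (pixels)."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]
        return (x + w / 2.0 - frame.shape[1] / 2.0,
                y + h / 2.0 - frame.shape[0] / 2.0)

    def offset_to_pan_tilt(offset, gain=0.05):
        """Map a pixel offset to incremental pan/tilt angles in degrees (toy gain)."""
        dx, dy = offset
        return gain * dx, -gain * dy

    cap = cv2.VideoCapture(0)  # user-facing capture camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        offset = estimate_head_offset(frame)
        if offset is not None:
            pan, tilt = offset_to_pan_tilt(offset)
            # A real SLA system would command the animatronic head here and update
            # the projected face texture so it stays registered with the head model.
            print(f"pan {pan:+.2f} deg, tilt {tilt:+.2f} deg")
        cv2.imshow("capture", frame)
        if cv2.waitKey(1) == 27:  # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()

In the actual system, the tracking and projection stages would additionally require calibration between camera, projector, and the physical head model; the sketch only shows the control-loop structure.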





Acknowledgments

We thank Herman Towles for his insightful suggestions and technical help and advice. John Thomas provided mechanical and electronic engineering assistance. David Harrison set up our full-duplex audio subsystem. Dorothy Turner became our first non-author SLA user (Fig. 5, bottom half of image set). Tao Li helped set up the ISMAR demonstration. Donna Boggs modeled as the Avatar’s interlocutor (Fig. 2). We thank Chris Macedonia, M.D. for inspiring us by expressing his desire to visit his patients in remote hospitals and other medical facilities with a greater effectiveness than is possible with current remote presence systems, and for offering the term “prosthetic presence.” We are grateful to Brian Bradley for his appearance as a prosthetic physician at our ISMAR 2009 booth, and we thank all ISMAR participants who visited our booth and engaged both the Avatar and the researchers with questions and suggestions. Partial funding for this work was provided by the Office of Naval Research (award N00014-09-1-0813, “3D Display and Capture of Humans for Live-Virtual Training,” Dr. Roy Stripling, Program Manager).

Author information

Correspondence to Peter Lincoln.



Cite this article

Lincoln, P., Welch, G., Nashel, A. et al. Animatronic shader lamps avatars. Virtual Reality 15, 225–238 (2011). https://doi.org/10.1007/s10055-010-0175-5
