
Human Visual Attention Model Based on Analysis of Magic for Smooth Human–Robot Interaction

International Journal of Social Robotics

Abstract

To interact smoothly with humans, it is desirable that a robot be able to guide human attention and behavior. In this study, we developed a model of human visual attention for guiding attention, based on an analysis of a magic trick performance. We measured the gaze points of people watching a video of a magic trick and compared them with the areas to which the magician intended to draw the spectator's attention. The analysis showed that the relationship between the magician's face, hands, and gaze plays an important role in guiding the spectator's attention. On the basis of preliminary user studies of people watching the magic video, we developed a novel human attention model that integrates a saliency map with a manipulation map describing the relationship between gaze and hands. An evaluation using the observed gaze points demonstrated that the proposed model explains human visual attention better than a saliency map alone while people watch a video of a magic trick performance.
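To make the integration concrete, the sketch below is a minimal illustration, not the authors' exact formulation: it assumes a precomputed bottom-up saliency map (e.g., from an Itti-Koch-style model) and stands in for the manipulation map with a single 2D Gaussian placed between the magician's hand position and gaze target. The function names, the Gaussian form, and the blending weight w_sal are all hypothetical.

```python
import numpy as np

def gaussian_map(shape, center, sigma):
    """2D Gaussian bump used as a hypothetical stand-in for the paper's
    manipulation map; the actual map is derived from the relationship
    between gaze and hands and may differ in form."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    return g / g.max()

def combined_attention_map(saliency, hand_pos, gaze_pos, w_sal=0.5, sigma=40.0):
    """Blend a bottom-up saliency map with a manipulation map.

    saliency : (H, W) array scaled to [0, 1]
    hand_pos, gaze_pos : (row, col) image coordinates
    w_sal : relative weight of saliency (illustrative; the paper's
            integration scheme is not reproduced here)
    """
    # Center the manipulation map midway between hand and gaze target,
    # reflecting the observed coupling of the magician's hands and gaze.
    center = ((hand_pos[0] + gaze_pos[0]) / 2.0,
              (hand_pos[1] + gaze_pos[1]) / 2.0)
    manip = gaussian_map(saliency.shape, center, sigma)
    att = w_sal * saliency + (1.0 - w_sal) * manip
    return att / att.max()

# Usage example with a random placeholder saliency map.
sal = np.random.rand(240, 320)
att = combined_attention_map(sal, hand_pos=(120, 80), gaze_pos=(100, 200))
print(att.shape, float(att.min()), float(att.max()))
```

A linear blend is only one plausible way to combine the two cues; the paper evaluates its combined map against gaze points recorded from spectators of the magic video.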



Acknowledgments

This work was supported by the Japan Society for the Promotion of Science, Grant-in-Aid for Young Scientists (B), 24700190.

Author information

Corresponding author

Correspondence to Yusuke Tamura.


About this article


Cite this article

Tamura, Y., Akashi, T., Yano, S. et al. Human Visual Attention Model Based on Analysis of Magic for Smooth Human–Robot Interaction. Int J of Soc Robotics 8, 685–694 (2016). https://doi.org/10.1007/s12369-016-0354-y
