Abstract
To interact smoothly with humans, it is desirable for a robot to be able to guide human attention and behavior. In this study, we developed a model of human visual attention for guiding attention, based on an analysis of a magic trick performance. We measured the gaze points of people watching a video of a magic trick and compared them with the areas to which the magician intended to draw the spectator's attention. The analysis showed that the relationship among the magician's face, hands, and gaze plays an important role in guiding the spectator's attention. Based on preliminary user studies of watching the magic video, we integrated a saliency map with a manipulation map that describes the relationship between gaze and hands to develop a novel model of human attention. An evaluation using the observed gaze points demonstrated that the proposed model explains human visual attention better than a saliency map alone while people watch a video of a magic trick performance.
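The abstract describes fusing a bottom-up saliency map with a manipulation map derived from the performer's gaze and hands. The paper's exact fusion rule and the form of the manipulation map are not given here, so the sketch below is a minimal illustration under two assumptions: the manipulation map is modeled as a Gaussian centered where gaze and hands direct attention, and the two maps are fused by a convex (weighted pointwise) combination after normalization.

```python
import numpy as np

def gaussian_manipulation_map(shape, center, sigma=20.0):
    """Toy manipulation map: a Gaussian bump centered where the
    performer's gaze/hands direct attention. The Gaussian form is
    an assumption for illustration, not the paper's definition."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    return np.exp(-((x - center[1]) ** 2 + (y - center[0]) ** 2)
                  / (2.0 * sigma ** 2))

def combine_attention_maps(saliency, manipulation, w=0.5):
    """Fuse a saliency map with a manipulation map.
    Both maps are min-max normalized to [0, 1], then combined
    pointwise as w * saliency + (1 - w) * manipulation.
    The convex combination is an assumed fusion rule."""
    s = (saliency - saliency.min()) / (np.ptp(saliency) + 1e-9)
    m = (manipulation - manipulation.min()) / (np.ptp(manipulation) + 1e-9)
    return w * s + (1.0 - w) * m

# Usage: a lone salient point competes with a manipulation cue;
# with w = 0.3 the manipulation cue dominates the predicted fixation.
saliency = np.zeros((64, 64))
saliency[10, 10] = 1.0                       # isolated salient spot
manip = gaussian_manipulation_map((64, 64), center=(40, 40))
combined = combine_attention_maps(saliency, manip, w=0.3)
peak = np.unravel_index(np.argmax(combined), combined.shape)
```

With the assumed weighting, the peak of the combined map moves to the manipulation cue at (40, 40) rather than the raw saliency spot, which is the qualitative behavior the model is meant to capture: social cues from the performer can override bottom-up saliency.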
Acknowledgments
This work was supported by the Japan Society for the Promotion of Science, Grant-in-Aid for Young Scientists (B), 24700190.
Tamura, Y., Akashi, T., Yano, S. et al. Human Visual Attention Model Based on Analysis of Magic for Smooth Human–Robot Interaction. Int J of Soc Robotics 8, 685–694 (2016). https://doi.org/10.1007/s12369-016-0354-y