
Implications of Robot Actions for Human Perception. How Do We Represent Actions of the Observed Robots?

Published in: International Journal of Social Robotics

Abstract

Social robotics aims to develop robots that assist humans in their daily lives. To achieve this aim, robots must act in a manner that is comprehensible and intuitive for humans. That is, humans should be able to easily represent robot actions cognitively, in terms of action goals and the means to achieve them. This raises the question of how actions are represented in general. Based on ideomotor theories (Greenwald, Psychol Rev 77:73–99, 1970), on accounts postulating a common code between action and perception (Hommel et al., Behav Brain Sci 24:849–878, 2001), and on empirical evidence (Wykowska et al., J Exp Psychol 35:1755–1769, 2009), we argue that the action and perception domains are tightly linked in the human brain. The aim of the present study was to examine whether robot actions are represented similarly to human actions and, in consequence, elicit similar perceptual effects. Our results showed that robot actions indeed elicited perceptual effects of the same kind as human actions, suggesting that humans are capable of representing robot actions in a manner similar to human actions. Future research will examine how much these representations depend on the physical properties of the robot actor and its behavior.



Notes

  1. Note that when the two left-handed participants were excluded from the analysis, the pattern of results remained the same, with a significant interaction of movement type and task type, F(1,13) = 7.34, p = 0.018, partial η² = 0.361; faster RTs in the size task for grasping (M = 376 ms) relative to pointing (M = 381 ms), t(13) = 1.91, p = 0.039, one-tailed; and faster RTs in the luminance task for pointing (M = 380 ms) relative to grasping (M = 386 ms), t(13) = 1.92, p = 0.035, one-tailed. The interaction with type of cue (human vs. robot) was not significant, F(1,13) = 0.17, p = 0.68, indicating that the congruency effects were similar for both human and robot cartoon hands. Thus, even though two participants were naturally left-handed, their data did not affect the pattern of results.
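For readers who want to run this style of analysis on their own data, the following is a minimal sketch in Python (numpy and scipy). The RT arrays and sample size below are hypothetical placeholders, not the study's data; the sketch only illustrates the 2 × 2 within-subject interaction test (movement type × task type) and the one-tailed paired comparisons reported in the note.

```python
# Minimal sketch of a 2x2 repeated-measures congruency analysis:
# movement type (grasp vs. point) x task type (size vs. luminance).
# All RT data below are simulated placeholders, NOT the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 14  # participants (df = n - 1 = 13, as in the note)

# Per-participant mean RTs (ms) in each of the four cells.
rt_grasp_size = rng.normal(376, 20, n)
rt_point_size = rng.normal(381, 20, n)
rt_grasp_lum  = rng.normal(386, 20, n)
rt_point_lum  = rng.normal(380, 20, n)

# Interaction test: with one numerator df, the repeated-measures
# interaction F(1, n-1) equals t^2 of a paired t-test on the
# difference-of-differences (grasp - point, compared across tasks).
diff_size = rt_grasp_size - rt_point_size
diff_lum  = rt_grasp_lum  - rt_point_lum
t_int, p_int = stats.ttest_rel(diff_size, diff_lum)
F = t_int ** 2
eta_p2 = F / (F + (n - 1))  # partial eta squared for 1 numerator df
print(f"Interaction: F(1,{n-1}) = {F:.2f}, p = {p_int:.3f}, "
      f"partial eta^2 = {eta_p2:.3f}")

# One-tailed paired t-tests matching the directional predictions:
# grasping faster in the size task, pointing faster in the luminance task.
t1, p1 = stats.ttest_rel(rt_point_size, rt_grasp_size, alternative='greater')
t2, p2 = stats.ttest_rel(rt_grasp_lum, rt_point_lum, alternative='greater')
print(f"Size task:      t({n-1}) = {t1:.2f}, p = {p1:.3f} (one-tailed)")
print(f"Luminance task: t({n-1}) = {t2:.2f}, p = {p2:.3f} (one-tailed)")
```

Because the arrays are simulated, the printed statistics will not reproduce the values reported above; substituting a real participant-by-condition RT table yields the corresponding tests.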

References

  1. Greenwald A (1970) Sensory feedback mechanisms in performance control: with special reference to the ideomotor mechanism. Psychol Rev 77:73–99

  2. Hommel B, Müsseler J, Aschersleben G, Prinz W (2001) The theory of event coding (TEC): a framework for perception and action planning. Behav Brain Sci 24:849–878

  3. Wykowska A, Schubö A, Hommel B (2009) How you move is what you see: action planning biases selection in visual search. J Exp Psychol 35:1755–1769

  4. Baron-Cohen S (1995) Mindblindness: an essay on autism and theory of mind. MIT Press, Boston

  5. Frith CD, Frith U (2006) How we predict what other people are going to do. Brain Res 1079:36–46

  6. Decety J, Grèzes J (1999) Neural mechanisms subserving the perception of human actions. Trends Cogn Sci 3:172–178

  7. Rizzolatti G, Fogassi L, Gallese V (2001) Neurophysiological mechanisms underlying the understanding and imitation of action. Nat Rev Neurosci 2:661–670

  8. Schilbach L, Timmermans B et al (2013) Toward a second-person neuroscience. Behav Brain Sci 36:393–462

  9. Kilner JM, Friston KJ, Frith CD (2007) Predictive coding: an account of the mirror neuron system. Cogn Process 8:159–166

  10. Wolpert DM, Ghahramani Z (2000) Computational principles of movement neuroscience. Nat Neurosci 3:1212–1217

  11. Hommel B (2010) Grounding attention in action control: the intentional control of selection. In: Bruya BJ (ed) Effortless attention: a new perspective in the cognitive science of attention and action. MIT Press, Cambridge, pp 121–140

  12. Wykowska A, Hommel B, Schubö A (2011) Action-induced effects on perception depend neither on element-level nor on set-level similarity between stimulus and response sets. Atten Percept Psychophys 73:1034–1041

  13. Wykowska A, Hommel B, Schubö A (2012) Imaging when acting: picture but not word cues induce action-related biases of visual attention. Front Psychol 3:388

  14. Wykowska A, Schubö A (2012) Action intentions modulate allocation of visual attention: electrophysiological evidence. Front Psychol 3:379

  15. Mori M (1970) Bukimi no tani [the uncanny valley] (KF MacDorman, T Minato, trans). Energy 7:33–35 (originally in Japanese)

  16. Saygin AP, Chaminade T, Ishiguro H, Driver J, Frith C (2012) The thing that should not be: predictive coding and the uncanny valley in perceiving human and humanoid robot actions. Soc Cogn Affect Neurosci 7:413–422

  17. Moore RK (2012) A Bayesian explanation of the 'uncanny valley' effect and related psychological phenomena. Sci Rep 2:864. doi:10.1038/srep00864

  18. Freier NG, Kahn PH Jr (2009) The fast-paced change of children's technological environments. Child Youth Environ 19:1–11

  19. Reeves B, Nass C (1996) The media equation: how people treat computers, television, and new media like real people and places. Cambridge University Press, Cambridge

  20. Press C, Bird G, Flach R, Heyes C (2005) Robotic movement elicits automatic imitation. Cogn Brain Res 25:632–640

  21. Anderson SJ, Yamagishi N (2000) Spatial localization of colour and luminance stimuli in human peripheral vision. Vis Res 40:759–771

  22. Cousineau D (2005) Confidence intervals in within-subject designs: a simpler solution to Loftus & Masson's method. Tutor Quant Methods Psychol 1:42–45

  23. Oberman LM, McCleery JP, Ramachandran VS, Pineda JA (2007) EEG evidence for mirror neuron activity during the observation of human and robot actions: toward an analysis of the human qualities of interactive robots. Neurocomputing 70:2194–2203

  24. Oztop E, Franklin DW, Chaminade T, Cheng G (2005) Human–humanoid interaction: is a humanoid robot perceived as a human? Int J Hum Robot 2:537–559

  25. Press C, Gillmeister H, Heyes C (2006) Bottom-up, not top-down, modulation of imitation by human and robotic models. Eur J Neurosci 24:1–5

  26. Chun MM, Wolfe JM (1996) Just say no: how are visual searches terminated when there is no target present? Cogn Psychol 30:39–78

  27. Schubö A, Schröger E, Meinecke C (2004) Texture segmentation and visual search for pop-out targets: an ERP study. Cogn Brain Res 21:317–334

  28. Schubö A, Wykowska A, Müller HJ (2007) Detecting pop-out targets in contexts of varying homogeneity: investigating homogeneity coding with event-related brain potentials (ERPs). Brain Res 1138:136–147

  29. Brayda L, Chellali R (2012) Measuring human-robot interactions. Int J Soc Robot. doi:10.1007/s12369-012-0150

  30. Schütz-Bosbach S, Prinz W (2007) Perceptual resonance: action-induced modulation of perception. Trends Cogn Sci 11:349–355

  31. Bosbach S, Cole J, Prinz W, Knoblich G (2005) Inferring another's expectation from action: the role of peripheral sensation. Nat Neurosci 8:1295–1297

Acknowledgments

This work was supported by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) grant awarded to AW (WY-122/1-1).

Author information

Corresponding author

Correspondence to Agnieszka Wykowska.

Cite this article

Wykowska, A., Chellali, R., Al-Amin, M.M. et al. Implications of Robot Actions for Human Perception. How Do We Represent Actions of the Observed Robots?. Int J of Soc Robotics 6, 357–366 (2014). https://doi.org/10.1007/s12369-014-0239-x
