Abstract
Humans convey information about their emotional state through facial expressions. Robots typically cannot show facial expressions the way humans do, which makes it hard for them to imitate emotions. Here we investigate how the LED patterns around the eyes of Aldebaran's Nao robot can be used to imitate human emotions. We performed two experiments. In the first, we examined which LED color, intensity, frequency, sharpness, and orientation humans associate with different emotions. Based on the results, 12 LED patterns were created. The second experiment measured how well humans recognized those LED patterns as the emotions intended by their design. We used a ROC (Receiver Operating Characteristic) graph to determine which of the 12 LED patterns the Nao robot should use to imitate each emotion. The ROC-graph technique is equally applicable to comparing other methods of imitating human emotions, such as gestures and speech.
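To illustrate the kind of analysis the abstract describes: each LED pattern can be plotted as a point in ROC space from its recognition data, with the true-positive rate (how often the pattern is recognized as the intended emotion) against the false-positive rate (how often other stimuli are mislabeled as that emotion). The abstract does not spell out the selection criterion; a common choice in ROC space is distance to the perfect-recognition corner (FPR = 0, TPR = 1). The following Python sketch assumes that criterion, and all pattern names and counts are made up for illustration, not taken from the paper's data.

```python
import math

# Hypothetical confusion counts per LED pattern:
# pattern -> (hits, misses, false_alarms, correct_rejections)
confusion = {
    "happy": (18, 2, 3, 37),
    "sad":   (12, 8, 6, 34),
    "angry": (15, 5, 4, 36),
}

def roc_point(hits, misses, false_alarms, correct_rejections):
    """Return the (FPR, TPR) coordinates of one pattern in ROC space."""
    tpr = hits / (hits + misses)
    fpr = false_alarms / (false_alarms + correct_rejections)
    return fpr, tpr

# Rank patterns by Euclidean distance to the perfect corner (0, 1):
# the closer a pattern's ROC point, the better it conveys the emotion.
ranked = sorted(
    confusion,
    key=lambda p: math.dist(roc_point(*confusion[p]), (0.0, 1.0)),
)
for pattern in ranked:
    fpr, tpr = roc_point(*confusion[pattern])
    print(f"{pattern}: TPR={tpr:.2f}, FPR={fpr:.2f}")
```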
Acknowledgements
The research leading to these results is part of the KSERA project (http://www.ksera-project.eu) and has received funding from the European Commission under the 7th Framework Programme (FP7) for Research and Technological Development, grant agreement no. 2010-248085.
We would also like to thank Dennis Hulsen, Jeremy Karouta, Mike Vogel, and Daniel Lakens for their contributions to this work.
Appendix: Experiment 1 Questionnaire
For each emotion (A–F), mark the color and intensity that best fit the emotion; then, for each pair of lines, pick the one that best fits the emotion. There are no right or wrong answers; just respond according to how you feel.

Cite this article
Johnson, D.O., Cuijpers, R.H. & van der Pol, D. Imitating Human Emotions with Artificial Facial Expressions. Int J of Soc Robotics 5, 503–513 (2013). https://doi.org/10.1007/s12369-013-0211-1