What Do I Look Like? A Conditional GAN Based Robot Facial Self-Awareness Approach

  • Conference paper
Social Robotics (ICSR 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13817)


Abstract

In uncertain social scenarios, self-awareness of facial expressions helps a person better understand, predict, and control their own states. Self-awareness gives animals the ability to distinguish self from others and to recognize themselves. For cognitive robots, being aware of their actions and of the effects of those actions on themselves and the environment is crucial for building reliable and trustworthy intelligent systems. In particular, we are interested in robot facial expression awareness: using joint action data to achieve self-face perception and recognition via a deep learning model. Our methodology is a first attempt toward robot facial expression self-awareness. We discuss the crucial role of self-awareness in social robots and propose a CGAN (Conditional Generative Adversarial Network) model that generates robot facial expression images from the angle parameters of the face motors. Using the CGAN, the robot learns facial self-awareness from a series of its own facial images. In addition, we introduce our robot facial self-awareness dataset. Our approach lets the robot distinguish self from others by comparing what it observes with its currently generated self-image. The results show good performance and demonstrate the ability to achieve real-time robot facial self-awareness.
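The abstract does not specify the network architecture, so the following is only a minimal sketch of the conditional-GAN idea it describes: a generator maps a noise vector plus the robot's motor-angle condition to a face image, and a discriminator scores an (image, condition) pair as real or fake. The motor count, layer sizes, image resolution, and random (untrained) weights are all illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

N_MOTORS = 12   # assumed number of facial motors (illustrative)
Z_DIM = 64      # latent noise dimension
IMG_SIDE = 32   # toy image resolution

def mlp(in_dim, hidden, out_dim):
    """Random-weight two-layer perceptron standing in for a trained network."""
    w1 = rng.standard_normal((in_dim, hidden)) * 0.1
    w2 = rng.standard_normal((hidden, out_dim)) * 0.1
    def forward(x):
        return np.tanh(x @ w1) @ w2
    return forward

# Generator G(z, c): noise + motor-angle condition -> face image in (-1, 1).
gen = mlp(Z_DIM + N_MOTORS, 128, IMG_SIDE * IMG_SIDE)
# Discriminator D(x, c): image + the same condition -> "real" probability.
disc = mlp(IMG_SIDE * IMG_SIDE + N_MOTORS, 128, 1)

def generate_face(motor_angles):
    z = rng.standard_normal(Z_DIM)
    img = np.tanh(gen(np.concatenate([z, motor_angles])))
    return img.reshape(IMG_SIDE, IMG_SIDE)

def discriminate(image, motor_angles):
    logit = disc(np.concatenate([image.ravel(), motor_angles]))
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> P(real | condition)

angles = rng.uniform(-1.0, 1.0, N_MOTORS)  # normalised motor readings
fake = generate_face(angles)
score = discriminate(fake, angles).item()
print(fake.shape, 0.0 < score < 1.0)
```

With a trained generator, the self/other test described in the abstract would amount to comparing this conditionally generated self-image against the camera observation for the same motor-angle vector.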

This work was supported by ENSTA Paris, Institut Polytechnique de Paris, France, and the CSC PhD Scholarship.


Author information

Corresponding author

Correspondence to Chuang Yu.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zhegong, S., Yu, C., Huang, W., Sun, Z., Tapus, A. (2022). What Do I Look Like? A Conditional GAN Based Robot Facial Self-Awareness Approach. In: Cavallo, F., et al. (eds.) Social Robotics. ICSR 2022. Lecture Notes in Computer Science, vol. 13817. Springer, Cham. https://doi.org/10.1007/978-3-031-24667-8_28

  • DOI: https://doi.org/10.1007/978-3-031-24667-8_28

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-24666-1

  • Online ISBN: 978-3-031-24667-8

  • eBook Packages: Computer Science, Computer Science (R0)
