
Human Robot Social Interaction Framework Based on Emotional Episodic Memory

  • Conference paper
Robot Intelligence Technology and Applications (RiTA 2018)

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 1015))

Abstract

Nowadays, applications of robots are emerging in many areas of modern life. It is expected that we will be living in a new era in which robots, such as socially interactive robots, will have an important effect on our daily lives. Since emotions play a critical role in human social communication, emotional episodes are necessary for human-robot social interactions. In this regard, we propose a framework that can form a social relationship between a human and a robot using an emotional episodic memory. The proposed framework enables personalized social interactions with each user by identifying the user and retrieving the matching episode from the memory. The interaction is not fixed: the emotional episodic memory develops through additional experiences with the user. The proposed framework is applied to an interactive humanoid robot platform, named Mybot, to verify its effectiveness. Photo shooting and user identification with emotional reactions by the robot are used as demonstration scenarios.


References

  1. Lin, P., Abney, K., Bekey, G.A.: Robot Ethics: The Ethical and Social Implications of Robotics. The MIT Press, Cambridge (2014)

  2. Scheutz, M.: What is robot ethics? [TC spotlight]. IEEE Robot. Autom. Mag. 20(4), 20–165 (2013)

  3. Fong, T., Nourbakhsh, I., Dautenhahn, K.: A survey of socially interactive robots. Robot. Auton. Syst. 42(3–4), 143–166 (2003)

  4. Knight, H.: How Humans Respond to Robots: Building Public Policy Through Good Design. Brookings, Washington, DC (2014)

  5. Bartneck, C., Forlizzi, J.: A design-centred framework for social human-robot interaction. In: 13th IEEE International Workshop on Robot and Human Interactive Communication, ROMAN 2004. IEEE (2004)

  6. Belpaeme, T., et al.: Multimodal child-robot interaction: building social bonds. J. Hum. Robot Interact. 1(2), 33–53 (2013)

  7. Gorostiza, J.F., et al.: Multimodal human-robot interaction framework for a personal robot. In: The 15th IEEE International Symposium on Robot and Human Interactive Communication, ROMAN 2006. IEEE (2006)

  8. Glas, D., et al.: An interaction design framework for social robots. Robot. Sci. Syst. 7, 89 (2012)

  9. Duffy, B.R., Dragone, M., O'Hare, G.M.P.: Social robot architecture: a framework for explicit social interaction. In: Android Science: Towards Social Mechanisms, CogSci 2005 Workshop, Stresa, Italy (2005)

  10. Breazeal, C.L.: Designing Sociable Robots. MIT Press, Cambridge (2004)

  11. Złotowski, J., Strasser, E., Bartneck, C.: Dimensions of anthropomorphism: from humanness to humanlikeness. In: Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction. ACM (2014)

  12. Arkin, R.C., et al.: An ethological and emotional basis for human–robot interaction. Robot. Auton. Syst. 42(3–4), 191–201 (2003)

  13. Kim, H.-R., Lee, K.W., Kwon, D.-S.: Emotional interaction model for a service robot. In: IEEE International Workshop on Robot and Human Interactive Communication, ROMAN 2005. IEEE (2005)

  14. Lee, W.H., et al.: Motivational emotion generation and behavior selection based on emotional experiences for social robots. In: Workshops in ICSR 2014 (2014)

  15. Miwa, H., et al.: A new mental model for humanoid robots for human friendly communication: introduction of learning system, mood vector and second order equations of emotion. In: IEEE International Conference on Robotics and Automation, ICRA 2003, vol. 3. IEEE (2003)

  16. Kwon, D.-S., et al.: Emotion interaction system for a service robot. In: The 16th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2007. IEEE (2007)

  17. Lee, W.-H., Kim, J.-H.: Hierarchical emotional episodic memory for social human robot collaboration. Auton. Robots 42(5), 1087–1102 (2018)

  18. Carpenter, G.A., et al.: Fuzzy ARTMAP: a neural network architecture for incremental supervised learning of analog multidimensional maps. IEEE Trans. Neural Netw. 3(5), 698–713 (1992)

  19. Park, G.-M., Kim, J.-H.: Deep adaptive resonance theory for learning biologically inspired episodic memory. In: 2016 International Joint Conference on Neural Networks (IJCNN). IEEE (2016)

  20. Park, G.-M., et al.: Deep ART neural model for biologically inspired episodic memory and its application to task performance of robots. IEEE Trans. Cybern. 48(6), 1786–1799 (2018)

  21. Yun, J., et al.: Cost-efficient 3D face reconstruction from a single 2D image. In: 2017 19th International Conference on Advanced Communication Technology (ICACT). IEEE (2017)

  22. Cho, S., Lee, W.-H., Kim, J.-H.: Implementation of human-robot VQA interaction system with dynamic memory networks. In: 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE (2017)


Acknowledgement

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2017R1A2A1A17069837).

Author information

Correspondence to Jong-Hwan Kim.

Appendix: Hardware and Software Implementation Details

For the hardware design, we developed a humanoid-type robotic platform aimed at rich interaction with humans. The robot, named Mybot, was developed in the RIT Laboratory at KAIST (Fig. 6).

Fig. 6. Mybot, the interactive robotic hardware platform.

Mybot has an upper body comprising two arms (10 DoFs each) and a trunk (2 DoFs), and a lower body comprising an omnidirectional wheel and a power supply, as shown in Fig. 6. It runs on Ubuntu 16.04 and is controlled by an Odroid single-board computer. The body is connected to the robotic head via TCP/IP communication.

A tablet PC was used as the robotic head, functioning as an image receiver and handling voice and touch detection in our experiment, since it has various input sensors and output interfaces. In particular, for the touch detection module, the touch sensor on the tablet PC recognizes mouse clicks and mouse movements as touch input, so mouse movements were tightly restricted during the experiment.

The tablet computer runs Windows 10 (64-bit) on a 6th-generation Intel Core i5 CPU with 8 GB of RAM, and has a 12.3″ touch screen with 2736 × 1824 resolution. The device is also equipped with a 5-megapixel front camera, a microphone, and a speaker.

For the neck frame, three actuators provide 3-DoF motion of the robotic head: pan, tilt, and yaw. The actuators are ROBOTIS MX-64R motors, which operate at 15 V with a speed of around 80 RPM and a stall torque of around 80 kg·cm. They are connected to the robotic head (the tablet) via a USB-to-Dynamixel interface.
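
As an illustration of this setup, the sketch below drives the three neck actuators through the DynamixelSDK. The port name, motor IDs, and centered pose are placeholder assumptions, not values from the paper; the register addresses follow the publicly documented MX-series protocol 1.0 control table.

```python
# Minimal pan/tilt/yaw sketch for three MX-64R neck actuators using the
# ROBOTIS DynamixelSDK (pip install dynamixel-sdk). Port, baud rate, and
# motor IDs are placeholder assumptions.
from dynamixel_sdk import PortHandler, PacketHandler

ADDR_TORQUE_ENABLE = 24   # MX-series protocol 1.0 control table: torque on/off
ADDR_GOAL_POSITION = 30   # MX-series protocol 1.0 control table: 2-byte goal position

port = PortHandler('/dev/ttyUSB0')   # USB-to-Dynamixel interface
packet = PacketHandler(1.0)          # MX-64R speaks protocol 1.0
port.openPort()
port.setBaudRate(57600)

for motor_id in (1, 2, 3):           # assumed IDs for pan, tilt, yaw
    packet.write1ByteTxRx(port, motor_id, ADDR_TORQUE_ENABLE, 1)

def set_head_pose(pan, tilt, yaw):
    """Send goal positions in ticks (0-4095 spans 360 deg on MX series)."""
    for motor_id, ticks in zip((1, 2, 3), (pan, tilt, yaw)):
        packet.write2ByteTxRx(port, motor_id, ADDR_GOAL_POSITION, ticks)

set_head_pose(2048, 2048, 2048)      # roughly centered head pose
```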

For the software design, our team used Visual C++ MFC programming to implement the proposed framework. Since cloud applications were needed for the recognition and language parts, socket servers and socket clients were used to access their APIs. In total, the framework accesses four cloud applications over the Internet to provide its functionality.
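
The paper does not include the MFC source, but the socket-bridge pattern it describes can be sketched as follows. The host, port, and newline-delimited message format are assumptions made for illustration, not the paper's actual protocol.

```python
# Illustrative socket-client bridge: one request to a local process that
# wraps a cloud API, one full reply back. The original framework used
# Visual C++ MFC; this Python sketch only shows the communication pattern.
import socket

def query_cloud_bridge(payload: str, host: str = 'localhost',
                       port: int = 9000) -> str:
    """Send one request and return the bridge's reply as a UTF-8 string."""
    with socket.create_connection((host, port), timeout=5.0) as sock:
        sock.sendall((payload + '\n').encode('utf-8'))
        chunks = []
        while True:                  # read until the server closes the socket
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b''.join(chunks).decode('utf-8').strip()
```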

The face detection module uses OpenCV libraries with the Haar cascade method. The user face identification module classifies the user's face so the robot can identify which user it is interacting with. For the identification algorithm, ARTMAP, a supervised learning version of the Adaptive Resonance Theory (ART) network [18], is applied. ARTMAP is chosen because facial learning and recognition should be conducted in real time, and feasible performance can be achieved even with a small number of samples. More technically, the robot takes a 640 × 360 image and crops and resizes it to the region where the user's face is located. The image is then vectorized into a one-dimensional vector and used as the input vector of ARTMAP. A video of real-time user identification is available at https://youtu.be/Ik_FwL2WYK8.
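
A minimal sketch of this detect-crop-vectorize pipeline, assuming OpenCV's bundled frontal-face Haar cascade and a 32 × 32 crop size (the paper does not state the exact input dimensionality):

```python
# Detect the face with OpenCV's Haar cascade, crop the largest detection,
# resize, and flatten to a normalized 1-D vector for use as ARTMAP input.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def face_vector(frame_bgr):
    """Return a flattened, normalized face crop from a 640x360 frame,
    or None if no face is detected."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
    crop = cv2.resize(gray[y:y + h, x:x + w], (32, 32))  # assumed size
    return crop.astype(np.float32).ravel() / 255.0       # ARTMAP input vector
```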

The facial expression recognition module recognizes the user's facial expression. The module uses Google Cloud Vision, which provides a recognition API for four human emotions (joy, sorrow, anger, and surprise) at four likelihood levels: very unlikely, unlikely, likely, and very likely. The advantage of Google Cloud Vision is that it works for any user's face.
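
A hedged sketch of this step using the google-cloud-vision Python client (the paper's implementation was in C++); valid credentials and a recent client-library version are assumed:

```python
# Query Google Cloud Vision face detection and report the likelihood
# labels for the four emotions on the first detected face.
from google.cloud import vision

def recognize_expression(jpeg_bytes: bytes) -> dict:
    """Return likelihood labels (VERY_UNLIKELY ... VERY_LIKELY) per emotion."""
    client = vision.ImageAnnotatorClient()
    response = client.face_detection(image=vision.Image(content=jpeg_bytes))
    if not response.face_annotations:
        return {}
    face = response.face_annotations[0]
    return {
        'joy': vision.Likelihood(face.joy_likelihood).name,
        'sorrow': vision.Likelihood(face.sorrow_likelihood).name,
        'anger': vision.Likelihood(face.anger_likelihood).name,
        'surprise': vision.Likelihood(face.surprise_likelihood).name,
    }
```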

The speech recognition module uses the Google Speech-to-Text cloud application, which converts a user's speech into text. It also supports multilingual recognition services, including Korean, with state-of-the-art performance.
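
A comparable sketch with the google-cloud-speech Python client; the 16 kHz LINEAR16 encoding and Korean language code are illustrative choices not specified in the paper:

```python
# Transcribe a short audio clip with Google Cloud Speech-to-Text.
from google.cloud import speech

def transcribe(wav_bytes: bytes, language: str = 'ko-KR') -> str:
    """Return the recognized transcript for a short LINEAR16 audio clip."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code=language,
    )
    audio = speech.RecognitionAudio(content=wav_bytes)
    response = client.recognize(config=config, audio=audio)
    return ' '.join(r.alternatives[0].transcript for r in response.results)
```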

Social relationships can neither develop nor be sustained without daily conversation. Thus, the dialogue generator module takes text data from the speech recognition module, delivers the text to Yally's Natural Conversation cloud application (http://www.yally.com/en/), and receives an answer text from it. The generated answers are everyday conversation rather than domain-specific dialogue.
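
Yally's request schema is not documented in the paper, so the round trip below uses a hypothetical endpoint and hypothetical JSON fields purely to illustrate where this module sits in the pipeline:

```python
# Hypothetical HTTP round trip for the dialogue generator. The URL and the
# 'utterance'/'answer' JSON fields are placeholders, not Yally's real API.
import requests

def generate_reply(user_text: str) -> str:
    """Send recognized speech text to the dialogue service, return its answer."""
    response = requests.post(
        'https://api.yally.example/conversation',   # hypothetical endpoint
        json={'utterance': user_text, 'lang': 'ko'},
        timeout=5.0,
    )
    response.raise_for_status()
    return response.json().get('answer', '')
```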

The text-to-speech synthesizer module uses the Naver Text-to-Speech cloud application (Clova Speech Synthesis). This module is directly linked to the lip sync module in the communication part, so it signals when lip synchronization should start.
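
A sketch of the synthesis call, assuming NAVER Cloud Platform's public Clova Speech Synthesis REST endpoint, header names, and speaker parameter; these are assumptions that should be verified against current documentation before use:

```python
# Assumed Clova Speech Synthesis call: POST text, receive synthesized audio.
import requests

def synthesize(text: str, client_id: str, client_secret: str) -> bytes:
    """Return synthesized speech audio; playback can trigger the lip sync module."""
    response = requests.post(
        'https://naveropenapi.apigw.ntruss.com/tts-premium/v1/tts',  # assumed
        headers={
            'X-NCP-APIGW-API-KEY-ID': client_id,
            'X-NCP-APIGW-API-KEY': client_secret,
        },
        data={'speaker': 'nara', 'text': text},   # assumed speaker name
        timeout=10.0,
    )
    response.raise_for_status()
    return response.content
```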


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Lee, WH., Yoo, SM., Choi, JW., Kim, UH., Kim, JH. (2019). Human Robot Social Interaction Framework Based on Emotional Episodic Memory. In: Kim, JH., Myung, H., Lee, SM. (eds) Robot Intelligence Technology and Applications. RiTA 2018. Communications in Computer and Information Science, vol 1015. Springer, Singapore. https://doi.org/10.1007/978-981-13-7780-8_9

  • DOI: https://doi.org/10.1007/978-981-13-7780-8_9

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-13-7779-2

  • Online ISBN: 978-981-13-7780-8

  • eBook Packages: Computer Science, Computer Science (R0)
