
JACIII Vol.14 No.5 pp. 453-463
doi: 10.20965/jaciii.2010.p0453
(2010)

Paper:

A Model for Generating Facial Expressions Using Virtual Emotion Based on Simple Recurrent Network

Yuki Matsui*, Masayoshi Kanoh**, Shohei Kato*,
Tsuyoshi Nakamura*, and Hidenori Itoh*

*Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan

**School of Information Science and Technology, Chukyo University, 101 Tokodachi, Kaizu-cho, Toyota 470-0393, Japan

Received: November 5, 2009
Accepted: March 23, 2010
Published: July 20, 2010
Keywords: human-robot interaction, simple recurrent network, facial expression, emotion, Ifbot
Abstract
We propose an interactive facial expression model using the Simple Recurrent Network (SRN) to achieve interaction through facial expressions between robots and human beings. The proposed model includes humans in the robot system as receivers of facial expressions, forming a dynamic system in which humans and robots affect each other bi-directionally. Robots typically generate only static changes in facial expression using motion files, so they seem boring, unnatural, and strange to their users. We use interactions between robots and people to diversify the robots' inputs, and we use emotional state transitions of the robots to reduce uniformity in the output facial expressions. This paper discusses a dynamic system that causes the proposed model to learn emotional facial expressions based on those of humans. Next, we regard the internal states generated by the proposed model as virtual emotions and show that mixed emotions can be expressed from the virtual emotional space in response to users' inputs. Moreover, based on the results of a questionnaire, facial expressions drawn from the virtual emotional space of the proposed model received high rates of approval from users.
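
For readers unfamiliar with the SRN architecture the model builds on, the following is a minimal sketch of an Elman-style simple recurrent network in Python/NumPy. The hidden layer is copied back as a context input at the next time step; this recurrent context is what the paper treats as the robot's "virtual emotion," so the generated expression depends on both the current human input and the accumulated internal state. All class names, layer sizes, and the single-step training rule here are illustrative assumptions, not the authors' actual implementation.

    import numpy as np

    class ElmanSRN:
        """Minimal Elman-style Simple Recurrent Network (illustrative only)."""

        def __init__(self, n_in, n_hidden, n_out, lr=0.1, seed=0):
            rng = np.random.default_rng(seed)
            # Weights: [input + context] -> hidden, hidden -> output.
            self.W_xh = rng.normal(0.0, 0.1, (n_in + n_hidden, n_hidden))
            self.W_hy = rng.normal(0.0, 0.1, (n_hidden, n_out))
            self.context = np.zeros(n_hidden)  # internal state ("virtual emotion")
            self.lr = lr

        @staticmethod
        def _sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def step(self, x):
            """One time step: interaction input -> facial-expression parameters."""
            xc = np.concatenate([x, self.context])  # current input plus previous context
            h = self._sigmoid(xc @ self.W_xh)       # new internal (emotional) state
            y = self._sigmoid(h @ self.W_hy)        # expression parameters in [0, 1]
            self.context = h                        # hidden layer is copied back as context
            return y, h

        def train_step(self, x, target):
            """One backpropagation update on a single (input, target expression) pair."""
            xc = np.concatenate([x, self.context])
            h = self._sigmoid(xc @ self.W_xh)
            y = self._sigmoid(h @ self.W_hy)
            d_y = (y - target) * y * (1.0 - y)          # output-layer error term
            d_h = (d_y @ self.W_hy.T) * h * (1.0 - h)   # hidden-layer error term
            self.W_hy -= self.lr * np.outer(h, d_y)
            self.W_xh -= self.lr * np.outer(xc, d_h)
            self.context = h
            return float(np.mean((y - target) ** 2))    # mean squared error

    # Illustrative use: 3 interaction features in, 6 expression parameters out.
    srn = ElmanSRN(n_in=3, n_hidden=4, n_out=6)
    expression, emotion = srn.step(np.array([0.9, 0.1, 0.0]))

Because the context layer persists across calls to step(), repeated inputs from the same user can still produce varying expressions, which is the mechanism the abstract describes for reducing uniformity in the robot's output.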
Cite this article as:
Y. Matsui, M. Kanoh, S. Kato, T. Nakamura, and H. Itoh, “A Model for Generating Facial Expressions Using Virtual Emotion Based on Simple Recurrent Network,” J. Adv. Comput. Intell. Intell. Inform., Vol.14 No.5, pp. 453-463, 2010.
