
A gaze control of socially interactive robots in multiple-person interaction

Published online by Cambridge University Press:  03 October 2016

Sang-Seok Yun*
Affiliation:
Korea Institute of Science and Technology (KIST), Seoul, Korea
*Corresponding author. E-mail: yssmecha@gmail.com

Summary

This paper proposes a computational model that enables socially interactive robots to select a suitable interlocutor when interacting with multiple persons. To support this, a hybrid approach combining gaze control criteria with perceptual measurements of social cues is applied to the robot. For the perception part, representative non-verbal behaviors indicating human interaction intent are designed based on psychological analyses of human–human interaction, and these behavioral features are quantitatively measured by core perceptual components spanning visual, auditory, and spatial modalities. Recognition performance in each modality is further improved through temporal confidence reasoning as a post-processing step. In addition, two factors, physical space and conversational intimacy, are incorporated into the model calculation to strengthen the robot's social gaze control effect. Interaction experiments with performance evaluation verify that the proposed model is suitable for assessing the intended behaviors of individuals and for generating gaze behavior toward multiple persons. A success rate of 93.3% against human decision-making criteria confirms the model's potential for establishing socially acceptable gaze control in multiple-person interaction.
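The abstract does not give the exact formulation of the selection model, so the sketch below is only an illustration of the kind of computation it describes: per-person scores from visual, auditory, and spatial cues, fused with proximity and conversational-intimacy factors, with simple exponential smoothing standing in for the temporal confidence reasoning. All cue names, weights, and the linear fusion are assumptions for illustration, not the paper's actual model.

```python
from dataclasses import dataclass


@dataclass
class PersonObservation:
    """Quantified social cues for one person (values assumed to lie in [0, 1])."""
    face_toward_robot: float   # visual cue: frontal-face / gaze evidence
    speaking: float            # auditory cue: speech activity near this person
    distance_m: float          # spatial cue: distance from the robot in meters
    intimacy: float            # accumulated conversational intimacy


# Hypothetical cue weights; the paper's actual weighting is not stated in the abstract.
W_VISUAL, W_AUDITORY, W_SPATIAL, W_INTIMACY = 0.35, 0.35, 0.20, 0.10


def proximity_factor(distance_m: float, social_limit_m: float = 3.6) -> float:
    """Map distance to [0, 1]; people within personal/social space score higher."""
    return max(0.0, 1.0 - distance_m / social_limit_m)


def temporally_smooth(previous: float, current: float, alpha: float = 0.3) -> float:
    """Exponential smoothing as a simple stand-in for temporal confidence reasoning."""
    return alpha * current + (1.0 - alpha) * previous


def interaction_score(p: PersonObservation) -> float:
    """Weighted sum of perceptual cues, proximity, and intimacy."""
    return (W_VISUAL * p.face_toward_robot
            + W_AUDITORY * p.speaking
            + W_SPATIAL * proximity_factor(p.distance_m)
            + W_INTIMACY * p.intimacy)


def select_interlocutor(people: dict) -> str:
    """Return the ID of the person the robot should direct its gaze toward."""
    return max(people, key=lambda pid: interaction_score(people[pid]))


if __name__ == "__main__":
    people = {
        "A": PersonObservation(face_toward_robot=0.9, speaking=0.1, distance_m=1.2, intimacy=0.4),
        "B": PersonObservation(face_toward_robot=0.3, speaking=0.8, distance_m=2.5, intimacy=0.2),
    }
    print(select_interlocutor(people))  # prints the ID with the highest fused score
```

In this hypothetical form, the robot would re-evaluate the fused scores at each perception cycle and direct its gaze to the highest-scoring person; the actual criteria, measurements, and weighting used in the paper are given in the full text.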

Type
Articles
Copyright
Copyright © Cambridge University Press 2016 

Footnotes

Present address: Hwarangno 14-gil 5, Seongbuk-gu, Seoul 02792, Republic of Korea.

Supplementary material

Yun supplementary material 1 (File, 14.5 KB)