
How a Minimally Designed Robot can Help Implicitly Maintain the Communication Protocol

Published in: International Journal of Social Robotics

Abstract

With increasing demand for minimally designed robots, the research field of human–robot interaction (HRI) faces new and challenging requirements. One such challenge is the divergence between the user's retained mental model (the instructions the user believes trigger the robot's different behaviors) and the instructions the user actually taught the robot in a previous HRI session. This divergence wastes time whenever the robot is reused, because the robot must be retrained before it can be used effectively to achieve a task. Some users may lack the patience to reteach the robot a new version of the instructions, which we call a communication protocol (CP), once they realize that they have forgotten the previous version of the CP. In our previous work, we studied how a non-expert user could establish a CP in a context requiring mutual adaptation with a minimally designed robot named the Sociable Dining Table (SDT). The SDT is a dish robot placed on a table that behaves according to knocks issued by the human: the human knocks on the table to convey an instruction, and the SDT learns from the received knocking how to choose the corresponding behavior. Previous experiments showed that a CP could be built incrementally during the HRI. The resulting CP was personalized not only to the particular pair of non-expert user and robot, but also to the HRI session: the CP changed each time the human started a new interaction session with the SDT. The main reason for this change was that non-expert users forgot the previously established communication protocol (PECP) and issued a different, new set of instructions to the SDT rather than maintaining the old instructions and continuing to teach the robot new skills.
In the current study, we investigate how to modify the way the minimally designed robot communicates back to the human so that the CP is maintained and the time wasted constructing a new CP is avoided. This paper describes feedback strategies that combine inarticulate utterances (IUs) with the minimally designed robot's visible behaviors in order to trigger increased remembrance of the PECP. The results provide confirmatory evidence that combining IUs with the minimally designed robot's visible behaviors helps drive non-expert users to maintain the PECP and to avoid wasted time and task-achievement failure.


Notes

  1. Reinforcement learning refers to a class of machine learning problems in which the aim is to learn from experience what to do in different situations, so as to optimize a quantitative reward over time.

  2. Here we refer to a reward signal that should be assigned so that the robot can clearly distinguish between right and wrong behaviors: a wrong behavior is associated with a negative reward signal, while a good behavior is associated with a positive reward signal.

  3. By expressive feedback strategy, we mean feedback that makes it easy for the human to identify the wrong instructions they gave to the robot. In the current work, the new expressive feedback is a combination of the robot's visible behaviors with the IUs.

  4. It is a cued recall because the robot tries to help the human remember the rules by presenting cues in a sound format.

  5. E1 refers to the first ensemble (E) of IUs that can be generated to indicate that the robot is about to execute the forward behavior. The IUs used in ensemble E1 do not appear in E2, to avoid any kind of confusion. For example, supposing that A, B and C are IUs, the user may hear one of the three IUs A, B or C when the robot is about to execute the forward behavior.

  6. Continuous-knocking was related to the presence of contiguous disagreements about the shared rules.

  7. https://goo.gl/forms/Vr6LVRDS99wBFMHX2.

  8. By “implicit” in this context we mean that the robot does not have to tell the human directly that they introduced a different version of an instruction than the one actually taught before. Instead, the human can remember the previously taught instruction indirectly, when the IU triggers their memory during the robot's reuse.
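The reward scheme described in notes 1 and 2 can be sketched as a minimal reward-weighted learner. The paper does not publish the SDT's actual learning algorithm, so the behavior names, knock labels, and learning rate below are illustrative assumptions, not the authors' implementation:

```python
import random
from collections import defaultdict

# Hypothetical behaviors for a dish robot like the SDT (names assumed).
BEHAVIORS = ["forward", "backward", "stop"]

class KnockLearner:
    """Maps knock patterns to behaviors using simple reward-weighted values."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha           # learning rate (assumed value)
        self.q = defaultdict(float)  # (knock, behavior) -> learned value

    def choose(self, knock):
        # Pick the behavior with the highest learned value; break ties randomly.
        best = max(self.q[(knock, b)] for b in BEHAVIORS)
        candidates = [b for b in BEHAVIORS if self.q[(knock, b)] == best]
        return random.choice(candidates)

    def update(self, knock, behavior, reward):
        # Positive reward reinforces the pairing; negative reward weakens it.
        key = (knock, behavior)
        self.q[key] += self.alpha * (reward - self.q[key])

learner = KnockLearner()
learner.update("double-knock", "forward", +1.0)   # human approves
learner.update("double-knock", "backward", -1.0)  # human disapproves
print(learner.choose("double-knock"))             # prints: forward
```

After one positive and one negative update, the pairing "double-knock" → "forward" dominates, mirroring how a clearly signed reward lets the robot distinguish right from wrong behaviors.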
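The disjoint IU ensembles of note 5 can likewise be sketched in a few lines. The IU names (A, B, C, …) follow the note's own example; the mapping of ensembles to behaviors beyond E1 is an assumption for illustration:

```python
import random

# Each behavior announces itself with an IU drawn from its own ensemble.
# Ensembles share no IUs, so a sound cannot cue two different behaviors.
ENSEMBLES = {
    "forward":  ["A", "B", "C"],   # E1 (from note 5)
    "backward": ["D", "E", "F"],   # E2 (assumed)
    "stop":     ["G", "H", "I"],   # E3 (assumed)
}

# Sanity check: no IU may appear in more than one ensemble.
all_ius = [iu for ens in ENSEMBLES.values() for iu in ens]
assert len(all_ius) == len(set(all_ius)), "ensembles must be disjoint"

def announce(behavior):
    """Return a randomly chosen IU cueing the upcoming behavior."""
    return random.choice(ENSEMBLES[behavior])

print(announce("forward"))  # one of "A", "B", "C"
```

Drawing randomly within an ensemble keeps the feedback varied, while the disjointness constraint preserves its cueing function for recall of the PECP.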


Acknowledgements

This research is supported by a Grant-in-Aid for Scientific Research, KIBAN-B (26280102), from the Japan Society for the Promotion of Science (JSPS).

Author information


Correspondence to Khaoula Youssef.


About this article


Cite this article

Youssef, K., Okada, M. How a Minimally Designed Robot can Help Implicitly Maintain the Communication Protocol. Int J of Soc Robotics 9, 431–448 (2017). https://doi.org/10.1007/s12369-017-0398-7

Download citation

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s12369-017-0398-7
