Abstract
In task-oriented robot domains, a human is often designated as a supervisor to monitor the robot and correct its inferences about its state during execution. However, supervision is expensive in terms of human effort. Instead, we are interested in robots asking non-supervisors in the environment for help with state inference. The challenge with asking non-supervisors for help is that they may not always understand the robot's state or question, and may respond inaccurately as a result. We identify four types of state information that a robot can include to ground non-supervisors when it requests help: context around the robot, the inferred state prediction, prediction uncertainty, and feedback about the sensors used for predicting the robot's state. We contribute two Wizard-of-Oz user studies that test which combination of this state information most increases the accuracy of non-supervisors' responses. In the first study, we consider a block-construction task and use a toy robot to study questions regarding shape recognition. In the second study, we use our real mobile robot to study questions regarding localization. In both studies, we identify the same combination of information as the one that increases response accuracy the most. We validate that our combination yields more accurate responses than a combination that a set of HRI experts predicted would be best. Finally, we discuss the applicability of this best combination of information to other task-driven robots.
Cite this article
Rosenthal, S., Veloso, M. & Dey, A.K. Acquiring Accurate Human Responses to Robots’ Questions. Int J of Soc Robotics 4, 117–129 (2012). https://doi.org/10.1007/s12369-012-0138-y