ABSTRACT
In this study, we designed an interaction strategy, based on speech and head gaze, together with a set of twelve voice commands for cooperative conveyance by a human and a robot. In the strategy, the human turns his or her head to face the robot and speaks one of the twelve commands. To start and stop the robot's motion, the human gives nonverbal cues by shifting his or her point of gaze. We developed a mobile robot that interacts with a human according to this strategy and command set, and we evaluated it with ten young novices. The results suggest that most young people can quickly learn to cooperate with the robot in moving objects using speech and head gaze.
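As a rough illustration only, and not the authors' implementation, the sketch below models the interaction described above as a small state machine: a spoken command is accepted only while the human's head is turned toward the robot, and a subsequent gaze shift starts or stops the commanded motion. The command names, class names, and the particular start/stop gaze mapping are all hypothetical assumptions for this sketch.

```python
from enum import Enum, auto

# Hypothetical command set; the paper specifies twelve spoken commands
# but does not list them here, so these names are placeholders.
COMMANDS = {"forward", "backward", "turn left", "turn right", "faster",
            "slower", "lift", "lower", "follow me", "wait", "resume", "finish"}

class State(Enum):
    IDLE = auto()    # no command selected
    ARMED = auto()   # command accepted, waiting for a gaze cue to start
    MOVING = auto()  # executing the commanded motion

class ConveyanceInteraction:
    """Minimal sketch of the speech + head-gaze strategy (illustrative only)."""

    def __init__(self):
        self.state = State.IDLE
        self.command = None

    def on_speech(self, utterance: str, head_facing_robot: bool) -> None:
        # A spoken command counts only while the head is turned toward the robot.
        if head_facing_robot and utterance in COMMANDS:
            self.command = utterance
            self.state = State.ARMED

    def on_gaze_shift(self, gaze_on_robot: bool) -> None:
        # Nonverbal start/stop cues: a change in the point of gaze starts or
        # stops the motion. The exact mapping below is an assumption.
        if self.state == State.ARMED and not gaze_on_robot:
            self.state = State.MOVING   # gaze shifts away (e.g., to the goal): start
        elif self.state == State.MOVING and gaze_on_robot:
            self.state = State.IDLE     # gaze returns to the robot: stop
            self.command = None

if __name__ == "__main__":
    robot = ConveyanceInteraction()
    robot.on_speech("forward", head_facing_robot=True)
    robot.on_gaze_shift(gaze_on_robot=False)   # start moving
    print(robot.state, robot.command)          # State.MOVING forward
    robot.on_gaze_shift(gaze_on_robot=True)    # stop
    print(robot.state)                         # State.IDLE
```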