DOI: 10.1145/2974804.2980486 · HAI Conference Proceedings · Poster

Human-Robot Cooperative Conveyance Using Speech and Head Gaze

Published: 04 October 2016

ABSTRACT

In this study, we designed a strategy using speech and head gaze, together with a set of voice commands, for cooperative conveyance by a human and a robot. In the designed strategy, the human turns his or her head to face the robot and gives one of the twelve spoken commands in the set. To start and stop the robot's motion, the human sends nonverbal cues by changing his or her point of gaze. We developed a mobile robot that interacts with a human based on this strategy and command set, and we evaluated it with ten young novices. The results of this study imply that most young people can quickly learn how to cooperate with our robot to move objects using speech and head gaze.
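
The abstract describes the interaction as a small protocol: the human addresses the robot by head gaze, selects an action with one of twelve spoken commands, and starts or stops motion with gaze cues. The minimal Python sketch below illustrates how such gaze-gated command handling could be structured; the command names, the three-state machine, and the exact cue semantics are assumptions made for illustration only and are not taken from the paper.

from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Tuple

class RobotState(Enum):
    IDLE = auto()    # not addressed; speech is ignored
    READY = auto()   # human's head gaze is on the robot; commands accepted
    MOVING = auto()  # executing a conveyance motion

@dataclass
class Observation:
    gaze_on_robot: bool            # head-gaze estimate: is the human facing the robot?
    command: Optional[str] = None  # recognized utterance, if any

# Hypothetical stand-ins for the twelve spoken commands (the real set is not listed in the abstract).
COMMANDS = {"forward", "back", "left", "right", "lift", "lower", "stop"}

def step(state: RobotState, obs: Observation) -> Tuple[RobotState, str]:
    """One tick of the cooperation loop: spoken commands are accepted only
    while the head gaze is on the robot, and a gaze shift serves as the
    nonverbal cue that starts or stops the robot's motion."""
    if state is RobotState.IDLE:
        # Turning the head to face the robot addresses it.
        return (RobotState.READY, "acknowledge") if obs.gaze_on_robot else (state, "wait")

    if state is RobotState.READY:
        if not obs.gaze_on_robot:
            return RobotState.IDLE, "wait"
        if obs.command in COMMANDS and obs.command != "stop":
            return RobotState.MOVING, f"execute:{obs.command}"
        return state, "hold"

    # MOVING: a gaze shift away from the robot (or "stop") halts the motion.
    if not obs.gaze_on_robot or obs.command == "stop":
        return RobotState.READY, "halt"
    return state, "continue"

if __name__ == "__main__":
    # Toy trace standing in for live head-gaze tracking and speech recognition.
    trace = [
        Observation(gaze_on_robot=False),
        Observation(gaze_on_robot=True),
        Observation(gaze_on_robot=True, command="forward"),
        Observation(gaze_on_robot=False),  # gaze shift stops the robot
    ]
    state = RobotState.IDLE
    for obs in trace:
        state, action = step(state, obs)
        print(state.name, action)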


Supplemental Material

haipp1014-file3.mp4 (MP4, 62.1 MB)


    Published in

      HAI '16: Proceedings of the Fourth International Conference on Human Agent Interaction
      October 2016
      414 pages
      ISBN: 9781450345088
      DOI: 10.1145/2974804

      Copyright © 2016 Owner/Author

      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Qualifiers

      • poster

      Acceptance Rates

      HAI '16 paper acceptance rate: 29 of 182 submissions (16%). Overall acceptance rate: 121 of 404 submissions (30%).
