
Situation understanding bot through language and environment

Published: 05 March 2012 Publication History

Abstract

This video shows a demonstration of a fully autonomous robot, an iRobot ATRV-JR, that can be commanded in natural language. Users type commands to the robot on a tablet computer; the commands are then parsed and processed using semantic analysis. This information is used to build a plan representing the high-level autonomous behaviors the robot should perform [2][1]. The robot can be given commands to execute immediately (e.g., "Search the floor for hostages.") as well as standing orders that apply over the entire run (e.g., "Let me know if you see any bombs.").
In the scenario shown in the video, the robot is asked to identify and defuse bombs, as well as to report if it finds any hostages or bad guys. Users can also query the robot through this interface. The robot conveys information to the user through text and a graphical interface on a tablet computer. The system can add icons to the map displayed and highlight areas of the map to convey concepts such as "I am here".
The video contains segments taken from a continuous 20-minute run, shown at 4x speed. This work is a demonstration of a larger project, Situation Understanding Bot Through Language and Environment (SUBTLE). For more information, see www.subtlebot.org.
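The abstract distinguishes commands executed immediately from standing orders that persist for the whole run. The following is a minimal sketch of that distinction, not the SUBTLE implementation; the class, the keyword-based routing rule, and all names are hypothetical, standing in for the system's actual semantic analysis and plan-based behaviors.

```python
# Hypothetical sketch: route typed commands either to immediate execution
# or to a list of standing orders re-checked on every perception event.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Robot:
    log: List[str] = field(default_factory=list)
    standing_orders: List[Callable[[str], None]] = field(default_factory=list)

    def handle_command(self, text: str) -> None:
        # Toy routing rule (stand-in for real semantic analysis):
        # "let me know if you see any X" registers a standing order for X.
        if text.lower().startswith("let me know if you see any"):
            target = text.lower().rsplit("any", 1)[1].strip(" .")
            self.standing_orders.append(
                lambda seen, target=target: self.log.append(
                    f"Reporting: {target} spotted"
                ) if seen == target else None
            )
        else:
            # Anything else is treated as an immediate command.
            self.log.append(f"Executing now: {text}")

    def perceive(self, obj: str) -> None:
        # Every new percept is checked against all standing orders,
        # so the orders remain in force for the entire run.
        for order in self.standing_orders:
            order(obj)

robot = Robot()
robot.handle_command("Search the floor for hostages.")
robot.handle_command("Let me know if you see any bombs.")
robot.perceive("bombs")
print(robot.log)
```

The key design point the demo relies on is that a standing order is not consumed when triggered: it stays registered and is evaluated against every subsequent observation until the run ends.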

Supplementary Material

MOV File (vid134.mov)

References

[1]
J. Allbeck and H. Kress-Gazit. Constraints-Based Complex Behavior in Rich Environments. In Proceedings of the 10th International Conference on Intelligent Virtual Agents, September 2010.
[2]
H. Kress-Gazit, G. Fainekos, and G. Pappas. Temporal-Logic-Based Reactive Mission and Motion Planning. IEEE Transactions on Robotics, 25(6):1370–1381, 2009.

Cited By

  • (2021) Before, Between, and After: Enriching Robot Communication Surrounding Collaborative Creative Activities. Frontiers in Robotics and AI, vol. 8. DOI: 10.3389/frobt.2021.662355. Online publication date: 29-Apr-2021.


Published In

HRI '12: Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction
March 2012
518 pages
ISBN:9781450310635
DOI:10.1145/2157689

In-Cooperation

  • IEEE-RAS: Robotics and Automation

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  • natural language interface

Qualifiers

  • Abstract

Conference

HRI '12
Sponsor: HRI '12: International Conference on Human-Robot Interaction
March 5–8, 2012
Boston, Massachusetts, USA

Acceptance Rates

Overall Acceptance Rate 268 of 1,124 submissions, 24%

Contributors
