DOI: 10.1145/2909824.3020212

Is a Robot a Better Walking Partner If It Associates Utterances with Visual Scenes?

Published: 06 March 2017

Abstract

We aim to develop a walking partner robot that can select small-talk topics associated with visual scenes. We first collected video sequences from five locations and prepared a dataset of small-talk topics associated with visual scenes. We then developed a technique for associating visual scenes with small-talk topics: each visual scene is converted into a list of words using an off-the-shelf vision library, a topic space is formed with Latent Dirichlet Allocation (LDA), and each list of words is transformed into a topic vector. The system then selects the utterance whose topic vector is most similar to that of the current scene. Tested on our dataset, the technique successfully selected appropriate utterances 72% of the time. We also conducted an outdoor user study in which participants took a walk with a small robot on their shoulder and engaged in small talk. Participants perceived the robot with our developed technique, which selected appropriate utterances, more positively than a robot that selected utterances at random, and they also felt that the former was a better walking partner.
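For illustration, the following is a minimal sketch of the selection pipeline the abstract describes, written with the gensim topic-modelling library. The word lists, utterances, topic count, and pairing of each utterance with a scene word list are assumptions made for the example, not the authors' data or implementation.

```python
# A minimal sketch (not the authors' code) of the pipeline in the abstract:
# scene word lists -> LDA topic vectors -> most similar candidate utterance.
from gensim import corpora, models, similarities

# Assumed dataset: each candidate utterance paired with the word list of the
# visual scene it was collected for.
dataset = [
    ("Those flowers along the path are pretty.", ["flower", "garden", "path", "tree"]),
    ("It's getting crowded around the station.", ["person", "crowd", "building", "road"]),
    ("The river looks calm today.",              ["river", "water", "bridge", "sky"]),
]
utterances = [u for u, _ in dataset]
scene_docs = [words for _, words in dataset]

# Form the topic space with LDA over the scene word lists.
dictionary = corpora.Dictionary(scene_docs)
corpus = [dictionary.doc2bow(doc) for doc in scene_docs]
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2, random_state=0)

# Index the topic vectors of the dataset scenes for cosine-similarity search.
index = similarities.MatrixSimilarity(lda[corpus], num_features=lda.num_topics)

def select_utterance(scene_words):
    """Pick the utterance whose associated scene is closest in topic space."""
    topic_vec = lda[dictionary.doc2bow(scene_words)]
    sims = index[topic_vec]
    return utterances[int(sims.argmax())]

# Word list for the current scene, e.g. labels from an off-the-shelf vision API.
print(select_utterance(["water", "bridge", "boat", "sky"]))
```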

Supplementary Material

suppl.mov (hrifp1120.mp4)
Supplemental video



Published In

HRI '17: Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction
March 2017
510 pages
ISBN: 9781450343367
DOI: 10.1145/2909824
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. association of utterance and visual scene
  2. walking partner robot

Qualifiers

  • Research-article

Funding Sources

  • Tateishi Science and Technology Foundation

Conference

HRI '17

Acceptance Rates

HRI '17 Paper Acceptance Rate 51 of 211 submissions, 24%;
Overall Acceptance Rate 268 of 1,124 submissions, 24%



Cited By

  • (2021) A Study on the Amount of Speech in Partner Robots for Purposeless Walking, Journal of Japan Society for Fuzzy Theory and Intelligent Informatics, 33:1, pp. 582-592. DOI: 10.3156/jsoft.33.1_582. Online publication date: 15-Feb-2021.
  • (2021) An Expansion and Application of Human Coexistence Robot System Using Smart Devices, Journal of Advanced Computational Intelligence and Intelligent Informatics, 25:2, pp. 234-241. DOI: 10.20965/jaciii.2021.p0234. Online publication date: 20-Mar-2021.
  • (2021) Does a Wearing Change Perception toward a Robot?, 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 963-968. DOI: 10.1109/RO-MAN50785.2021.9515366. Online publication date: 8-Aug-2021.
  • (2019) Personal Partner Agents for Cooperative Intelligence, Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction, pp. 570-571. DOI: 10.5555/3378680.3378792. Online publication date: 11-Mar-2019.
  • (2019) A Survey of Users' Expectations Towards On-body Companion Robots, Proceedings of the 2019 on Designing Interactive Systems Conference, pp. 621-632. DOI: 10.1145/3322276.3322316. Online publication date: 18-Jun-2019.
  • (2019) Personal Partner Agents for Cooperative Intelligence, 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 570-571. DOI: 10.1109/HRI.2019.8673179. Online publication date: Mar-2019.
  • (2019) Walking Partner Robot Chatting about Scenery, Advanced Robotics, pp. 1-14. DOI: 10.1080/01691864.2019.1610062. Online publication date: 29-Apr-2019.
  • (2018) Enhancing Multiparty Cooperative Movements, Proceedings of the 20th ACM International Conference on Multimodal Interaction, pp. 409-417. DOI: 10.1145/3242969.3242983. Online publication date: 2-Oct-2018.
  • (2018) Will Older Adults Accept a Humanoid Robot as a Walking Partner?, International Journal of Social Robotics. DOI: 10.1007/s12369-018-0503-6. Online publication date: 13-Nov-2018.
