Exploring the Effect of Explanations During Robot-Guided Emergency Evacuation

  • Conference paper
Social Robotics (ICSR 2020)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12483)

Abstract

Humans tend to overtrust emergency robots during emergencies [12]. Here we consider how a robot's explanations influence a person's decision to follow the robot's evacuation directions when those directions differ from the movement of the crowd. The experiments were conducted in a simulated emergency environment with an emergency guide robot and animated, human-looking non-player characters (NPCs). Our results show that explanations increase the tendency to follow the robot, even when those messages are uninformative. We also perform a preliminary study investigating different explanation designs for effective interventions, demonstrating that certain types of explanations can increase or decrease evacuation time. This paper contributes to our understanding of human compliance with robot instructions and of methods for examining that compliance through the use of explanations during high-risk, emergency situations.

This material is based upon work supported by the National Science Foundation under Grant No. CNS-1830390. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.


References

  1. Breazeal, C.L.: Designing Sociable Robots. MIT Press, Cambridge (2004)

  2. Desai, M., et al.: Effects of changing reliability on trust of robot systems. In: Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, pp. 73–80. ACM (2012)

  3. Eiband, M., Buschek, D., Kremer, A., Hussmann, H.: The impact of placebic explanations on trust in intelligent systems. In: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–6 (2019)

  4. Gunning, D.: Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA) (2017)

  5. Kuligowski, E.D.: Modeling human behavior during building fires (2008)

  6. Langer, E.J., Blank, A., Chanowitz, B.: The mindlessness of ostensibly thoughtful action: the role of "placebic" information in interpersonal interaction. J. Pers. Soc. Psychol. 36(6), 635 (1978)

  7. Nayyar, M., Wagner, A.R.: Effective robot evacuation strategies in emergencies. In: 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 1–6 (2019)

  8. Ososky, S., Schuster, D., Phillips, E., Jentsch, F.G.: Building appropriate trust in human-robot teams. In: AAAI Spring Symposium: Trust and Autonomous Systems (2013)

  9. Parasuraman, R., Riley, V.: Humans and automation: use, misuse, disuse, abuse. Hum. Factors 39(2), 230–253 (1997)

  10. Robinette, P., Howard, A.M., Wagner, A.R.: Timing is key for robot trust repair. In: ICSR 2015. LNCS (LNAI), vol. 9388, pp. 574–583. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-25554-5_57

  11. Robinette, P., Howard, A.M., Wagner, A.R.: Effect of robot performance on human-robot trust in time-critical situations. IEEE Trans. Hum. Mach. Syst. 47(4), 425–436 (2017)

  12. Robinette, P., Li, W., Allen, R., Howard, A.M., Wagner, A.R.: Overtrust of robots in emergency evacuation scenarios. In: The Eleventh ACM/IEEE International Conference on Human-Robot Interaction, pp. 101–108. IEEE Press (2016)

  13. Wang, N., Pynadath, D.V., Hill, S.G.: Trust calibration within a human-robot team: comparing automatically generated explanations. In: 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 109–116. IEEE (2016)

Author information

Corresponding author

Correspondence to Mollik Nayyar.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Nayyar, M., Zoloty, Z., McFarland, C., Wagner, A.R. (2020). Exploring the Effect of Explanations During Robot-Guided Emergency Evacuation. In: Wagner, A.R., et al. (eds.) Social Robotics. ICSR 2020. Lecture Notes in Computer Science (LNAI), vol. 12483. Springer, Cham. https://doi.org/10.1007/978-3-030-62056-1_2

  • DOI: https://doi.org/10.1007/978-3-030-62056-1_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-62055-4

  • Online ISBN: 978-3-030-62056-1

  • eBook Packages: Computer Science, Computer Science (R0)
