Appropriately Representing Military Tasks for Human-Machine Teaming Research

  • Conference paper
  • In: HCI International 2020 – Late Breaking Papers: Virtual and Augmented Reality (HCII 2020)

Abstract

Simulation has become a popular way to develop knowledge and skills in aviation, medicine, and several other domains. Given the promise of human-robot teaming in many of these same contexts, research in human-autonomy teaming has grown considerably over the last decade. The United States Air Force Academy (USAFA), for example, has developed several testbeds to explore human-autonomy teaming in and out of the laboratory. For each testbed, fidelity requirements have been carefully established so that the factors of interest can be assessed in line with the goals of the research. This paper explains how appropriate fidelity is established across a range of human-autonomy research objectives. We describe testbeds ranging from robots in the laboratory to higher-fidelity flight simulations and real-world driving, and conclude with guidelines for selecting the appropriate level of fidelity for a given research objective in human-machine teaming research.


Acknowledgements

The authors would like to thank Cadets Jessica Broll and Makenzie Hockensmith for their contributions to this work. The views expressed in this document are the authors' and may not reflect the official position of the USAF Academy, the USAF, or the U.S. Government. The material is based upon work supported by the Air Force Office of Scientific Research under award number 16RT0881.

Author information

Corresponding author

Correspondence to Chad C. Tossell.

Copyright information

© 2020 This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply

About this paper

Cite this paper

Tossell, C.C., Kim, B., Donadio, B., de Visser, E.J., Holec, R., Phillips, E. (2020). Appropriately Representing Military Tasks for Human-Machine Teaming Research. In: Stephanidis, C., Chen, J.Y.C., Fragomeni, G. (eds.) HCI International 2020 – Late Breaking Papers: Virtual and Augmented Reality. HCII 2020. Lecture Notes in Computer Science, vol. 12428. Springer, Cham. https://doi.org/10.1007/978-3-030-59990-4_19

  • DOI: https://doi.org/10.1007/978-3-030-59990-4_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59989-8

  • Online ISBN: 978-3-030-59990-4

  • eBook Packages: Computer Science, Computer Science (R0)
