Abstract
Human-swarm interaction (HSwI) research investigates interactions between human operators and robotic swarms. Swarms comprise individual assets that operate as a unified group to complete goals such as target foraging and configuring into shapes that optimize asset movement. Although the algorithmic specifications of swarm operations make swarms robust to the loss of individual assets, it is unknown how viewing asset degradation affects operator trust in swarms. To investigate this relationship, an extant simulator of swarm foraging behaviors was modified to portray functional asset degradation. Participants viewed recordings of swarms foraging, each depicting a randomized percentage of degraded assets. After each recording, participants rated their intention to rely on the swarm in a target foraging task. Results showed an effect of differential asset loss on participants’ intentions to rely on swarms. Post hoc analyses showed that participants reported greater intentions to rely on swarms in a future target foraging task when 5% and 15% of assets were degraded than when 20% and 50% were degraded. Limitations and directions for future research on trust in HSwI during target foraging tasks are discussed in detail.
Distribution A. Approved for public release; distribution unlimited. 88ABW-2020-0330; Cleared 31 JAN 2020.
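To make the manipulation concrete, the following is a minimal sketch of how a randomized percentage of swarm assets might be flagged as functionally degraded before each recording. It is an illustration only, not the authors' simulator: the function name degrade_assets, the asset representation, and the return format are assumptions; only the degradation levels (5%, 15%, 20%, 50%) come from the abstract.

```python
import random

# Degradation levels named in the abstract, as fractions of the swarm.
DEGRADATION_LEVELS = (0.05, 0.15, 0.20, 0.50)

def degrade_assets(asset_ids, degradation_pct, seed=None):
    """Randomly flag a percentage of swarm assets as functionally degraded.

    asset_ids:       identifiers for every asset in the swarm
    degradation_pct: fraction of assets to degrade (e.g., 0.15 for 15%)
    seed:            optional seed so a given recording can be reproduced
    """
    rng = random.Random(seed)
    n_degraded = round(len(asset_ids) * degradation_pct)
    degraded = set(rng.sample(list(asset_ids), n_degraded))
    # Degraded assets stay visible to the observer but are flagged so the
    # simulator can render them as non-contributing during foraging.
    return {a: (a in degraded) for a in asset_ids}

# Example: a 100-asset swarm at the 15% degradation level.
flags = degrade_assets(range(100), 0.15, seed=1)
print(sum(flags.values()))  # 15 assets flagged as degraded
```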
Copyright information
© 2020 This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply
Cite this paper
Capiola, A. et al. (2020). The Effects of Asset Degradation on Human Trust in Swarms. In: Chen, J.Y.C., Fragomeni, G. (eds.) Virtual, Augmented and Mixed Reality. Design and Interaction. HCII 2020. Lecture Notes in Computer Science, vol. 12190. Springer, Cham. https://doi.org/10.1007/978-3-030-49695-1_36
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-49694-4
Online ISBN: 978-3-030-49695-1