Extended Abstract
DOI: 10.1145/3597512.3599719

Verifiably Safe and Trusted Human-AI Systems: A Socio-technical Perspective

Published: 11 July 2023

Abstract

Replacing human decision-making with machine decision-making creates challenges for stakeholders’ trust in AI systems that interact with, and keep in the loop, the human user. We refer to such systems as Human-AI Systems (HAIS) and argue that the technical safety and social trustworthiness of a HAIS are key to its widespread adoption by society. To develop a verifiably safe and trusted HAIS, it is important to understand how different stakeholders come to perceive an autonomous system (AS) as trusted, and how the context of application shapes their perceptions. Technical approaches to meeting trust and safety concerns are widely investigated, yet under-used for measuring users’ trust in autonomous AI systems. Interdisciplinary socio-technical approaches, grounded in social science (trust) and computer science (safety), receive even less attention in HAIS research. This paper elaborates on the need to apply formal methods to ensure the safe behaviour of HAIS, grounded in users’ real-life understanding of trust and in the analysis of trust dynamics. It puts forward the core challenges in this area and presents a research agenda for verifiably safe and trusted human-AI systems.
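
To make concrete what formal verification of a HAIS safety property might look like, the sketch below (our illustration; the abstract itself presents no code) encodes a toy human-in-the-loop protocol as a finite state machine and exhaustively explores its reachable states to check the invariant "the AI never executes a critical action without prior human approval". The states, transitions, and property are hypothetical; in practice such properties would be written in a temporal logic and checked with an established model checker such as NuSMV or PRISM.

    # Minimal sketch, assuming a hypothetical human-in-the-loop protocol:
    # explicit-state reachability check of the invariant "the AI never acts
    # without human approval". A flawed transition is included deliberately
    # so the check surfaces a counterexample.

    def successors(state):
        """Hypothetical transition relation over (mode, human_approved) pairs."""
        mode, approved = state
        if mode == "idle":
            return [("proposing", False)]          # AI proposes a critical action
        if mode == "proposing":
            return [("proposing", True),           # human approves
                    ("idle", False),               # human rejects
                    ("acting", approved)]          # flaw: AI may act while unapproved
        return [("idle", False)]                   # after acting, reset

    def violates_invariant(state):
        mode, approved = state
        return mode == "acting" and not approved   # acting without approval

    def check(initial):
        """Depth-first exploration of all reachable states."""
        seen, frontier = {initial}, [initial]
        while frontier:
            state = frontier.pop()
            if violates_invariant(state):
                return False, state                # counterexample found
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return True, None

    safe, witness = check(("idle", False))
    print("safe" if safe else f"unsafe, counterexample: {witness}")

Run as written, the sketch reports the unsafe state ('acting', False), showing how verification exposes a protocol that fails to keep the human in the loop; guarding the acting transition on approval makes the invariant hold over all reachable states.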


Cited By

  • Exploring the philosophy and practice of AI literacy in higher education in the Global South: a scoping review. Cybrarians Journal, 2024, 1–21. DOI: 10.70000/cj.2024.73.601
  • "Trust equals less death - it's as simple as that": Developing a Socio-technical Framework for Trustworthy Defence and Security Automated Systems. In Proceedings of the Second International Symposium on Trustworthy Autonomous Systems, 2024, 1–10. DOI: 10.1145/3686038.3686071
  • Technology for Environmental Policy: Exploring Perceptions, Values, and Trust in a Citizen Carbon Budget App. In Proceedings of the Second International Symposium on Trustworthy Autonomous Systems, 2024, 1–13. DOI: 10.1145/3686038.3686065


Published In

TAS '23: Proceedings of the First International Symposium on Trustworthy Autonomous Systems
July 2023, 426 pages
ISBN: 9798400707346
DOI: 10.1145/3597512
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 11 July 2023


Author Tags

  1. Human-AI Systems
  2. Safety
  3. Trust
  4. Verification

Qualifiers

  • Extended-abstract
  • Research
  • Refereed limited

Conference

TAS '23



