
Comparing Socio-technical Design Principles with Guidelines for Human-Centered AI

Conference paper

Artificial Intelligence in HCI (HCII 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14735)


Abstract

Human-centered AI (HCAI) refers to guidelines or principles that aim at the ethically oriented design of systems. We compare HCAI guidelines with principles of socio-technical systems that emerged in the context of conventional information technology. The comparison leads to a revision of socio-technical heuristics that incorporates aspects of AI usage. It reveals that continuous evolution is a basic characteristic of socio-technical systems, and that human oversight or intervention, together with the subsequent appropriation of AI systems, leads to continuous adaptation and re-design of these systems when autonomy is exercised collaboratively. From a socio-technical point of view, the crucial requirement of transparency has to be fulfilled not only by technical features but also by contributions of the whole system, including human actors. The use of AI is most promising when not only technical features but also organizational and social practices are socio-technically designed in a way that compensates for the shortcomings of AI.



Author information

Correspondence to Thomas Herrmann.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Herrmann, T. (2024). Comparing Socio-technical Design Principles with Guidelines for Human-Centered AI. In: Degen, H., Ntoa, S. (eds.) Artificial Intelligence in HCI. HCII 2024. Lecture Notes in Computer Science, vol. 14735. Springer, Cham. https://doi.org/10.1007/978-3-031-60611-3_5


  • DOI: https://doi.org/10.1007/978-3-031-60611-3_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-60613-7

  • Online ISBN: 978-3-031-60611-3

  • eBook Packages: Computer Science, Computer Science (R0)
