Trustworthy AI Services in the Public Sector: What Are Citizens Saying About It?

  • Conference paper
Requirements Engineering: Foundation for Software Quality (REFSQ 2021)

Abstract

[Motivation] Artificial intelligence (AI) creates many opportunities for public institutions, but the unethical use of AI in public services can reduce citizens’ trust. [Question] The aim of this study was to identify what requirements citizens have for trustworthy AI services in the public sector. The study included 21 interviews and a design workshop covering four public AI services. [Results] The main finding was that all the participants wanted public AI services to be transparent. This transparency requirement covers a number of questions that trustworthy AI services must answer, such as what their purpose is. The participants also wanted to know what data AI services use and from what sources the data are collected. They pointed out that AI must provide easy-to-understand explanations. We also distinguished two other important requirements: controlling personal data usage and involving humans in AI services. [Contribution] For practitioners, the paper provides a list of questions that trustworthy public AI services should answer. For the research community, it illuminates the transparency requirement of AI systems from the perspective of citizens.



Acknowledgements

We thank the Saidot team from spring 2019 for starting the project and assisting in the data collection of this study, J. Mattila for co-organizing and conducting parts of the interviews, and our participants for sharing their experiences.

Author information

Correspondence to Karolina Drobotowicz.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Drobotowicz, K., Kauppinen, M., Kujala, S. (2021). Trustworthy AI Services in the Public Sector: What Are Citizens Saying About It? In: Dalpiaz, F., Spoletini, P. (eds.) Requirements Engineering: Foundation for Software Quality. REFSQ 2021. Lecture Notes in Computer Science, vol. 12685. Springer, Cham. https://doi.org/10.1007/978-3-030-73128-1_7

  • DOI: https://doi.org/10.1007/978-3-030-73128-1_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-73127-4

  • Online ISBN: 978-3-030-73128-1

  • eBook Packages: Computer Science (R0)
