Abstract
High-risk artificial intelligence (AI) systems are those that can endanger the fundamental rights of individuals. Because such systems are complex, users often misjudge their risks, trusting them too little or too much. To better understand trust from the users’ perspective, we investigate which factors affect their propensity to trust Facial Recognition Systems (FRS), a high-risk AI application, in Mozambique. The study uses mixed methods, combining a survey (N = 120) with semi-structured interviews (N = 13). The results indicate that users’ perceptions of the FRS’s robustness and principles of use affect their propensity to trust it, and that this relationship is moderated by external issues and by how the system’s attributes are communicated. These findings shed light on aspects that should be addressed when developing AI systems to ensure adequate levels of user trust.
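For readers who want to probe a moderated relationship of the kind reported above on their own data, the sketch below shows one conventional way to test it: an ordinary least squares regression with an interaction term, written in Python with statsmodels. It is illustrative only and is not the analysis used in the paper; the data are simulated, and the variable names (robustness, communication, trust) are placeholders rather than items from the study’s instrument.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120  # matches the survey sample size reported in the abstract

# Simulated Likert-style responses (1-5); real values would come from a survey.
robustness = rng.integers(1, 6, n)     # hypothetical: perceived robustness of the FRS
communication = rng.integers(1, 6, n)  # hypothetical: clarity of attribute communication
trust = (1 + 0.4 * robustness + 0.2 * communication
         + 0.1 * robustness * communication + rng.normal(0, 1, n))

df = pd.DataFrame({"robustness": robustness,
                   "communication": communication,
                   "trust": trust})

# OLS with an interaction term: a significant robustness:communication
# coefficient is the standard signature of a moderation effect.
model = smf.ols("trust ~ robustness * communication", data=df).fit()
print(model.summary())

A significant interaction coefficient here would mean that the effect of perceived robustness on trust changes with how clearly the system’s attributes are communicated, which is the shape of moderation the abstract describes.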
Acknowledgements
This study was partly funded by the Trust and Influence Programme (FA8655-22-1-7051), European Office of Aerospace Research and Development, and US Air Force Office of Scientific Research.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Beltrão, G., Sousa, S., Lamas, D. (2023). Trust in Facial Recognition Systems: A Perspective from the Users. In: Abdelnour Nocera, J., Kristín Lárusdóttir, M., Petrie, H., Piccinno, A., Winckler, M. (eds.) Human-Computer Interaction – INTERACT 2023. Lecture Notes in Computer Science, vol. 14142. Springer, Cham. https://doi.org/10.1007/978-3-031-42280-5_24
DOI: https://doi.org/10.1007/978-3-031-42280-5_24
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-42279-9
Online ISBN: 978-3-031-42280-5
eBook Packages: Computer Science, Computer Science (R0)