Abstract
In this paper, I argue that the only kind of trust that should be allocated to AI technologies, if any at all, is epistemic trust. Epistemic trust is defined by Wilholt (Br J Philos Sci 64:233–253, 2013) as the kind of trust that is allocated strictly in virtue of the capacities of the receiver as a provider of information or as a conveyer of knowledge. If, as Alvarado (2022, http://philsci-archive.pitt.edu/id/eprint/21243) argues, AI is first and foremost designed and deployed as an epistemic technology—a technology that is designed, developed and deployed to particularly and especially expand our capacities as knowers—then it follows that we trust AI, when we trust it, exclusively in its capacities as a provider of information and epistemic enhancer. Trusting it otherwise may betray conceptual confusion. As I will show, it follows that trust in AI cannot be modeled after any general kind of interpersonal trust, it cannot be modeled after trust in other technologies such as pharmaceuticals, and it cannot be modeled after the kind of trust we allocate to medical practitioners in their capacities as providers of care. It also follows that, even after it is established that epistemic trust is the only legitimate kind of trust to allocate to epistemic technologies, whether or not AI can, in fact, be trusted remains an open question.
Notes
An easy way to understand the importance of these distinct capacities is by placing them in contrast to the kinds of artifacts that enhance other kinds of capacities. A bulldozer, for example, enhances the physical abilities of human beings. So does an elliptical shaker in a chemistry laboratory. As Humphreys mentions, the telescope and the microscope do something distinct: they enhance our perceptual abilities, and these are directly linked to epistemic endeavors.
Alvarado draws heavily from the concept of epistemic enhancer coined by Humphreys [41]. Without getting too much into the details, we can say that epistemic technologies are a subset of epistemic enhancers. This is an argument made by Alvarado elsewhere, and one for which there is no space (nor need) to expand on in this paper. This will become clearer as the paper progresses.
I thank one of the anonymous reviewers for this paper for requesting a more explicit definition of the term ‘epistemic’, as well as Nico Formanek who invited me to do the same for the term ‘AI’.
There is a sense in which, if we had full certainty that someone or something would do a given task we are relying on them for, then trust is not needed. Some recent developments in cryptocurrency speak to this role of transparency in transactions: if all transactions are fully transparent, then trust need not play a role in the market.
While some may object and suggest that someone can trust something to fail or someone to betray them, there is a sense in which, when I trust someone to betray me, the positive expectation is placed on the fact that they will betray me, not on the betrayal itself or the person doing the betraying. Hence, the trust is placed on the assessment of the process or the person and not on the process or the person per se. In other words, I still hold a positive expectation that my belief will turn out to be true, even if the consequences of that belief have negative outcomes.
While the conversation about trust in AI has recently resurfaced in light of recent sociological and policy-oriented incentives to investigate it, the philosophy of trust has produced a vast and rich debate on the subject going at least as far back as the mid-1980s (Baier [5]) and the ethics of care approach.
An easy way to see how this is the case is to think of some artifacts that are explicitly individuated for what they do through their name, e.g., a corkscrew. Although not all artifacts are explicitly named in virtue of the function they carry out, one would find that almost all are conceptually individuated so: e.g., a fan, a watch, etc.
See Goldman [34] as well as Dretske [20] for an accessible account of the distinction between epistemic and non-epistemic elements of a problem. It is also worth noting here that while some frameworks in epistemology deem epistemic transfers of things such as knowledge to be a socially negotiated endeavor [47, 51], in this paper I am assuming that there are epistemic tasks and knowledge-related transfers that are directly related to knowledge acquisition yet are nevertheless more basic than the kind envisioned by pedagogical endeavors that require social considerations. As a user, when I trust a computer to yield the number that I need as an input to an equation, I am requesting information from a machine that is directly related to knowledge creation, but I am not engaged in a pedagogical learning experience with the machine. The machine is not, strictly speaking, teaching me anything. I thank anonymous reviewer #1 for seeking clarification on this latter point.
Humphreys' example here may prove to be limited after all. It is clear that perception is not like digging or running, but it nevertheless has an essential physical component to it: vision, touch, hearing, etc., are all heavily constrained and constituted by physical interactions. Our perceptual abilities are strongly conditioned by biochemical affordances, and these are in turn constrained by physical interactions. However, the point remains that there are some activities that are more epistemic in nature than others, and some that are not: e.g., running vs. thinking, carrying vs. calculating, etc.
Humphreys admits of three kinds of epistemic enhancement: extrapolation, conversion and augmentation. For a detailed analysis of what these are, see Humphreys ([41], pp. 3–6).
Roughly, an epistemic technology is an epistemic enhancer (if an epistemic enhancer is not a success term), but not all epistemic enhancers are epistemic technologies: some epistemic enhancers are epistemic methods, epistemic processes (brain processes, perhaps), or epistemic practices (blind experiments, etc.). Whether these and other more abstract methods should be categorized as technologies is an important and interesting question. However, the focus of this paper is orthogonal to such questions.
This is important because, as noted above, some may want to make the case that in this respect a computational method is no different from a microscope, or perhaps a light switch in a laboratory, both of which in one way or another enable and expand the carrying out of epistemic tasks such as inquiry and experiment.
Thanks to the generous anonymous reviewer for inviting me to restate this point.
I thank the anonymous reviewer that brought this element of temporal dynamism in trust to my attention.
Views of this kind often confuse the fact that some things or practices exist and are widespread with their permissibility or justifiability. That is, they often conflate the sociological status of a phenomenon with its ethical evaluation (see the work of Creel and Hellman [17] for an example of this conflation). Creel and Hellman, for example, suggest that since arbitrariness is widespread in hiring practices, it is not strictly speaking morally harmful on its own. Rather, their argument concludes, arbitrariness is only harmful when it affects large groups consistently and at the same time. Crucial to their argument is the point that because arbitrariness is widespread and not legally penalized in most hiring instances, this signals moral acceptability. This sociological reading of phenomena is widespread in the literature surrounding the philosophy of AI, from XAI (Páez [67]) to opacity [22, 50] (Ratti and Graves [90]). Nevertheless, it continues to be epistemically, normatively, and argumentatively suspect. To see that the existence of a phenomenon or its widespread availability is not the same as its justification (either moral or epistemic), it suffices to think about corruption and corrupt practices. They exist. They are widespread. Yet we still call them corrupt for a reason: namely, because they ought not to be acceptable.
It is important to differentiate this view from those that are generally skeptical of the possibility of trusting AI simply on the basis of it being an artifact. For example, Ryan [71] suggests that there are three paradigms in the literature on trust: a rational account, an affective account and a normative account. According to this view, AI, in virtue of being an artifact, only qualifies to be allocated the rational kind of trust. That is, the kind of trust that implies the person trusting does so as the product of a logical deliberative process such as a cost-benefit analysis. However, Ryan argues, contrary to both the affective account of trust—which entails a confidence in the goodwill and the emotive motivations of the trustee—and the normative account of trust—which entails expectations about the trustee doing the right thing—the rational kind of trust is not even a kind of trust. Rather, it is merely a form of reliance. This is because it makes no assumptions about the motives, aims or any relational stances on the part of the trustee. Hence, the argument suggests, trust is not something that one can allocate to AI.
References
Alvarado, R.: AI as an Epistemic Technology. (2022). http://philsci-archive.pitt.edu/id/eprint/21243
Alvarado, R.: Should we replace radiologists with deep learning? Pigeons, error and trust in medical AI. Bioethics 36(2), 121–133 (2022)
Alvarado, R.: Computer simulations as scientific instruments. Found. Sci. 27(3), 1183–1205 (2022)
Andras, P., Esterle, L., Guckert, M., Han, T.A., Lewis, P.R., Milanovic, K., et al.: Trusting intelligent machines: deepening trust within socio-technical systems. IEEE Technol. Soc. Mag. 37(4), 76–83 (2018)
Baier, A.C.: What do women want in a moral theory? Noûs, 53–63 (1985)
Barberousse, A., Vorms, M.: About the warrants of computer-based empirical knowledge. Synthese 191(15), 3595–3620 (2014)
Bjerring, J.C., Busch, J.: Artificial intelligence and patient-centered decision-making. Philos. Technol. 34(2), 349–371 (2021)
Blanco, S.: Trust and explainable AI: promises and limitations. ETHICOMP 2022, 246 (2022)
Braun, M., Bleher, H., Hummel, P.: A leap of faith: is there a formula for “Trustworthy” AI? Hastings Cent. Rep. 51(3), 17–22 (2021)
Burrell, J.: How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data soc. 3(1), 2053951715622512 (2016)
Carbonell, J.G., Michalski, R.S., Mitchell, T.M.: An overview of machine learning. Mach. Learn. 3–23 (1983)
Carter, J.A., Simion, M.: The ethics and epistemology of trust. Internet Encycl. Philos. (2020)
Chockley, K., Emanuel, E.: The end of radiology? Three threats to the future practice of radiology. J. Am. Coll. Radiol. 13(12), 1415–1420 (2016)
Clark, C.C.: Trust in medicine. J. Med. Philos. 27(1), 11–29 (2002)
Cho, J.H., Xu, S., Hurley, P.M., Mackay, M., Benjamin, T., Beaumont, M.: Stram: measuring the trustworthiness of computer-based systems. ACM Comput. Surv. (CSUR) 51(6), 1–47 (2019)
Choung, H., David, P., Ross, A.: Trust in AI and its role in the acceptance of AI technologies. Int. J. Hum. Comput. Interact. (2022). https://doi.org/10.1080/10447318.2022.2050543
Creel, K., Hellman, D.: The algorithmic Leviathan: arbitrariness, fairness, and opportunity in algorithmic decision-making systems. Can J Philos. 1-18 (2022)
Danks, D.: The value of trustworthy AI. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 521–522 (2019)
Dietz, G., Den Hartog, D.N.: Measuring trust inside organisations. Pers. Rev. (2006)
Dretske, F.: Entitlement: epistemic rights without epistemic duties? Philos. Phenomenol. Res. 60(3), 591–606 (2000)
Durán, J.M., Formanek, N.: Grounds for trust: essential epistemic opacity and computational reliabilism. Mind. Mach. 28(4), 645–666 (2018)
Durán, J.M., Jongsma, K.R.: Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. J. Med. Ethics 47(5), 329–335 (2021)
El Naqa, I., Murphy, M.J.: What is machine learning? In: Machine Learning in Radiation Oncology, pp. 3–11. Springer, Cham (2015)
European Society of Radiology (ESR), Codari, M., Melazzini, L., Morozov, S.P., van Kuijk, C.C., Sconfienza, L.M., Sardanelli, F.: Impact of artificial intelligence on radiology: a EuroAIM survey among members of the European Society of Radiology. Insights Imaging 10, 1–11 (2019)
Ferrario, A., Loi, M., Viganò, E.: In AI we trust incrementally: a multi-layer model of trust to analyze human-artificial intelligence interactions. Philos. Technol. 33(3), 523–539 (2020)
Ferrario, A., Loi, M., Viganò, E.: Trust does not need to be human: it is possible to trust medical AI. J. Med. Ethics 47(6), 437–438 (2021)
Ferrario, A., Loi, M.: The meaning of “Explainability fosters trust in AI”. Available at SSRN 3916396 (2021)
Floridi, L.: Establishing the rules for building trustworthy AI. Nat. Mach. Intell. 1(6), 261–262 (2019)
Gauker, C.: The principle of charity. Synthese, 1–25 (1986)
Gille, F., Jobin, A., Ienca, M.: What we talk about when we talk about trust: theory of trust for AI in healthcare. Intell. Based Med. 1, 100001 (2020)
Gillath, O., Ai, T., Branicky, M.S., Keshmiri, S., Davison, R.B., Spaulding, R.: Attachment and trust in artificial intelligence. Comput. Hum. Behav. 115, 106607 (2021)
Glikson, E., Woolley, A.W.: Human trust in artificial intelligence: Review of empirical research. Acad. Manag. Ann. 14(2), 627–660 (2020)
Goldberg, S.C.: Trust and reliance 1. In: The Routledge Handbook of Trust and Philosophy, pp. 97–108. Routledge, London (2020)
Goldman, A.I.: Epistemic paternalism: communication control in law and society. J. Philos. 88(3), 113–131 (1991)
Grote, T., Berens, P.: On the ethics of algorithmic decision-making in healthcare. J. Med. Ethics 46(3), 205–211 (2020)
Hall, M.A., Dugan, E., Zheng, B., Mishra, A.K.: Trust in physicians and medical institutions: what is it, can it be measured, and does it matter? Milbank Q. 79(4), 613–639 (2001)
Hatherley, J.J.: Limits of trust in medical AI. J. Med. Ethics 46(7), 478–481 (2020)
Hengstler, M., Enkel, E., Duelli, S.: Applied artificial intelligence and trust—the case of autonomous vehicles and medical assistance devices. Technol. Forecast. Soc. Chang. 105, 105–120 (2016)
Horsburgh, H.J.N.: The ethics of trust. Philos Q (1950-) 10(41), 343–354 (1960)
Hurlburt, G.: How much to trust artificial intelligence? IT Prof. 19(4), 7–11 (2017)
Humphreys, P.: Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford University Press, Oxford (2004)
Humphreys, P.: The philosophical novelty of computer simulation methods. Synthese 169(3), 615–626 (2009)
Jacovi, A., Marasović, A., Miller, T., Goldberg, Y.: Formalizing trust in artificial intelligence: prerequisites, causes and goals of human trust in AI. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 624–635 (2021)
Jha, S., Topol, E.J.: Adapting to artificial intelligence: radiologists and pathologists as information specialists. JAMA 316(22), 2353–2354 (2016)
Jöhnk, J., Weißert, M., Wyrtki, K.: Ready or not, AI comes—an interview study of organizational AI readiness factors. Bus. Inf. Syst. Eng. 63(1), 5–20 (2021)
Knowles, B., Richards, J.T.: The sanction of authority: Promoting public trust in AI. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 262–271 (2021)
Kuhn, T.S.: The Structure of Scientific Revolutions, vol. 111. University of Chicago Press, Chicago (1970)
Lankton, N.K., McKnight, D.H., Tripp, J.: Technology, humanness, and trust: rethinking trust in technology. J. Assoc. Inf. Syst. 16(10), 1 (2015)
LaRosa, E., Danks, D.: Impacts on trust of healthcare AI. In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 210–215 (2018)
London, A.J.: Artificial intelligence and black-box medical decisions: accuracy versus explainability. Hastings Cent. Rep. 49(1), 15–21 (2019)
Longino, H.E.: Science as Social Knowledge. Princeton University Press, Princeton (2020)
Markus, A.F., Kors, J.A., Rijnbeek, P.R.: The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies. J. Biomed. Inform. 113, 103655 (2021)
Mayo, R.C., Leung, J.W.: Impact of artificial intelligence on women’s imaging: cost-benefit analysis. Am. J. Roentgenol. 212(5), 1172–1173 (2019)
Mazurowski, M.A.: Artificial intelligence may cause a significant disruption to the radiology workforce. J. Am. Coll. Radiol. 16(8), 1077–1082 (2019)
Mcknight, D.H., Carter, M., Thatcher, J.B., Clay, P.F.: Trust in a specific technology: an investigation of its components and measures. ACM Trans. Manag. Inf. Syst. (TMIS) 2(2), 1–25 (2011)
McGlynn, A.N.: On epistemic alchemy. In: Contemporary Perspectives on Scepticism and Perceptual Justification. Oxford University Press, Oxford
Mitchell, M.: Artificial Intelligence: A Guide for Thinking Humans. Penguin, London (2019)
Mittelstadt, B.D., Floridi, L.: The ethics of big data: current and foreseeable issues in biomedical contexts. Sci. Eng. Ethics 22(2), 303–341 (2016)
Mökander, J., Floridi, L.: Ethics-based auditing to develop trustworthy AI. Mind. Mach. 31(2), 323–327 (2021)
Morley, J., Machado, C., Burr, C., Cowls, J., Taddeo, M., Floridi, L.: The debate on the ethics of AI in health care: a reconstruction and critical review. Available at SSRN 3486518 (2019)
Morley, J., Machado, C.C., Burr, C., Cowls, J., Joshi, I., Taddeo, M., Floridi, L.: The ethics of AI in health care: a mapping review. Soc. Sci. Med. 260, 113172 (2020)
Nagendran, M., Chen, Y., Lovejoy, C.A., Gordon, A.C., Komorowski, M., Harvey, H., et al.: Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ (2020). https://doi.org/10.1136/bmj.m689
Nickel, P.J., Franssen, M., Kroes, P.: Can we make sense of the notion of trustworthy technology? Knowl. Technol. Policy 23(3), 429–444 (2010)
Nickel, P.J., Frank, L.: Trust in Medicine. In: Routledge Handbook of trust and Philosophy. Routledge, New York, pp 367–377 (2020)
Nundy, S., Montgomery, T., Wachter, R.M.: Promoting trust between patients and physicians in the era of artificial intelligence. JAMA 322(6), 497–498 (2019)
Oreskes, N.: Why Trust Science? Princeton University Press, Princeton (2021)
Páez, A.: The pragmatic turn in explainable artificial intelligence (XAI). Mind. Mach. 29(3), 441–459 (2019)
Ratti, E., Graves, M.: Explainable machine learning practices: opening another black box for reliable medical AI. AI Ethics (2022). https://doi.org/10.1007/s43681-022-00141-z
Rousseau, D.M., Sitkin, S.B., Burt, R.S., Camerer, C.: Not so different after all: a cross-discipline view of trust. Acad. Manag. Rev. 23(3), 393–404 (1998)
Rossi, F.: Building trust in artificial intelligence. J Int. Aff. 72(1), 127–134 (2018)
Ryan, M.: In AI we trust: ethics, artificial intelligence, and reliability. Sci. Eng. Ethics 26(5), 2749–2767 (2020)
Sarle, W.S.: Neural networks and statistical models. In: Proceedings of the Nineteenth Annual SAS Users Group International Conference (1994)
Samek, W., Montavon, G., Lapuschkin, S., Anders, C.J., Müller, K.R.: Explaining deep neural networks and beyond: a review of methods and applications. Proc. IEEE 109(3), 247 (2021)
Scheman, N.: Trust and trustworthiness. In: The Routledge Handbook of Trust and Philosophy, pp. 28–40. Routledge, London (2020)
Shin, D.: The effects of explainability and causability on perception, trust, and acceptance: implications for explainable AI. Int. J. Hum Comput Stud. 146, 102551 (2021)
Siau, K., Wang, W.: Building trust in artificial intelligence, machine learning, and robotics. Cutter Bus. Technol. J. 31(2), 47–53 (2018)
Simion, M.: The ‘should’ in conceptual engineering. Inquiry 61(8), 914–928 (2018)
Simion, M.: Conceptual engineering for epistemic norms. Inquiry (2019). https://doi.org/10.1080/0020174X.2018.1562373
Smuha, N.: Ethics guidelines for trustworthy AI. In: AI & Ethics, 2019/05/28–2019/05/28, Brussels (Digityser), Belgium (2019)
Stanton, B., Jensen, T.: Trust and artificial intelligence (2021) (preprint)
Sutrop, M.: Should we trust artificial intelligence? Trames 23(4), 499–522 (2019)
Symons, J., Alvarado, R.: Can we trust big data? applying philosophy of science to software. Big Data Soc. 3(2), 2053951716664747 (2016)
Symons, J., Alvarado, R.: Epistemic entitlements and the practice of computer simulation. Mind. Mach. 29(1), 37–60 (2019)
Symons, J., Alvarado, R.: Epistemic injustice and data science technologies. Synthese 200(2), 1–26 (2022)
Taddeo, M., Floridi, L.: The case for e-trust. Ethics Inf. Technol. 13(1), 1–3 (2011)
Toreini, E., Aitken, M., Coopamootoo, K., Elliott, K., Zelaya, C.G., Van Moorsel, A.: The relationship between trust in AI and trustworthy machine learning technologies. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 272–283 (2020)
von Eschenbach, W.J.: Transparency and the black box problem: why we do not trust AI. Philos. Technol. 34(4), 1607–1622 (2021)
Wilholt, T.: Epistemic trust in science. Br. J. Philos. Sci. 64(2), 233–253 (2013)
Winner, L.: Autonomous Technology: Technics-Out-of-Control as a Theme in Political Thought. MIT Press, Cambridge (1978)
Winner, L.: Do artifacts have politics? Daedalus 109, 121–136 (1980)
Yan, Y., Zhang, J.W., Zang, G.Y., Pu, J.: The primary use of artificial intelligence in cardiovascular diseases: what kind of potential role does artificial intelligence play in future medicine? J. Geriatr. Cardiol. JGC 16(8), 585 (2019)
Acknowledgements
The author wishes to thank the two anonymous reviewers as well as Gabriela Arriagada Bruneau, Sara Blanco, Nico Formanek and Arzu Formanek, whose comments, questions and challenges helped enrich the final version of this paper.
Ethics declarations
Conflict of interest
The sole author reports no conflicts of interest.