Abstract
This paper introduces argumentative-generative models for statistical learning—i.e., generative statistical models seen from a Bayesian argumentation perspective—and shows how they support trustworthy artificial intelligence (AI). Generative Bayesian approaches are already very promising for achieving robustness against adversarial attacks, a fundamental component of trustworthy AI. This paper shows how Bayesian argumentation can help us achieve transparent assessments of epistemic uncertainty and testability of models, two necessary ingredients for trustworthy AI. We also discuss the limitations of this approach, notably those traditionally linked to Bayesian methods.
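The abstract's claim that generative Bayesian models yield transparent assessments of epistemic uncertainty can be made concrete with a small subjective-logic sketch in the spirit of evidential deep learning [32]. This is an illustrative assumption, not the paper's own method: the function name `dirichlet_uncertainty` and the evidence counts below are hypothetical, chosen only to show how total evidence maps to a reducible "vacuity" term.

```python
def dirichlet_uncertainty(evidence):
    """Subjective-logic uncertainty for a K-class Dirichlet opinion.

    Each class k receives Dirichlet parameter alpha_k = evidence_k + 1.
    Vacuity u = K / sum(alpha) is large when total evidence is small,
    so it tracks epistemic (reducible) uncertainty rather than the
    aleatoric uncertainty carried by the class probabilities.
    """
    K = len(evidence)
    alpha = [e + 1.0 for e in evidence]
    S = sum(alpha)
    beliefs = [e / S for e in evidence]   # per-class belief masses
    vacuity = K / S                       # epistemic uncertainty in [0, 1]
    probs = [a / S for a in alpha]        # expected class probabilities
    return probs, beliefs, vacuity

# No evidence at all: maximal vacuity (the model admits it does not know).
_, _, u_low = dirichlet_uncertainty([0.0, 0.0, 0.0])

# Strong evidence for class 0: vacuity collapses toward zero.
_, _, u_high = dirichlet_uncertainty([100.0, 1.0, 1.0])
```

The design point is that the same Dirichlet opinion separates "the classes are genuinely ambiguous" (spread-out probabilities with low vacuity) from "the model has seen too little evidence to say" (high vacuity), which is the distinction a transparent trustworthiness assessment needs.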
Notes
1. “An intelligence that, at a given instant, could comprehend all the forces by which nature is animated and the respective situation of the beings that make it up” [17, p. 2].
References
Amershi, S., et al.: Guidelines for human-AI interaction. In: Conference on Human Factors in Computing Systems - Proceedings, pp. 1–13. Association for Computing Machinery, New York, USA (2019)
Bansal, G., Nushi, B., Kamar, E., Lasecki, W., Weld, D., Horvitz, E.: Beyond accuracy: the role of mental models in human-AI team performance. In: HCOMP. AAAI (2019)
Bansal, G., Nushi, B., Kamar, E., Weld, D.S., Lasecki, W.S., Horvitz, E.: Updates in human-AI teams: understanding and addressing the performance/compatibility tradeoff. In: AAAI, pp. 2429–2437 (2019)
Bishop, C.M., Nasrabadi, N.M.: Pattern Recognition and Machine Learning. ISS, Springer, New York (2006). https://doi.org/10.1007/978-0-387-45528-0_9
Cerutti, F., Kaplan, L.M., Kimmig, A., Sensoy, M.: Probabilistic logic programming with beta-distributed random variables. In: The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, pp. 7769–7776 (2019)
Copeland, A.E., et al.: What does it take to become an academic plastic surgeon in Canada: hiring trends over the last 50 years. Plastic Surgery (to appear)
Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: a Large-Scale Hierarchical Image Database. In: CVPR09 (2009)
Depaoli, S., Van de Schoot, R.: Improving transparency and replication in Bayesian statistics: the WAMBS-checklist. Psychol. Methods 22(2), 240 (2017)
Hahn, U., Hornikx, J.: A normative framework for argument quality: argumentation schemes with a Bayesian foundation. Synthese 193(6), 1833–1873 (2015). https://doi.org/10.1007/s11229-015-0815-0
Hahn, U., Oaksford, M.: The rationality of informal argumentation: a Bayesian approach to reasoning fallacies. Psychol. Rev. 114(3), 704–732 (2007)
Hahn, U., Oaksford, M., Harris, A.J.L.: Testimony and argument: a Bayesian perspective. In: Zenker, F. (ed.) Bayesian Argumentation. SL, vol. 362, pp. 15–38. Springer, Dordrecht (2013). https://doi.org/10.1007/978-94-007-5357-0_2
Hemilä, H., Chalker, E.: Vitamin C may reduce the duration of mechanical ventilation in critically ill patients: a meta-regression analysis. J. Intensive Care 8(1), 15 (2020)
Hora, S.C.: Aleatory and epistemic uncertainty in probability elicitation with an example from hazardous waste management. Reliab. Eng. Syst. Saf. 54(2), 217–223 (1996)
Hüllermeier, E., Waegeman, W.: Aleatoric and epistemic uncertainty in machine learning: a tutorial introduction (2019)
Kaplan, L.M., Ivanovska, M.: Efficient belief propagation in second-order Bayesian networks for singly-connected graphs. Int. J. Approximate Reasoning 93, 132–152 (2018)
Kocielnik, R., Amershi, S., Bennett, P.N.: Will you accept an imperfect AI? exploring designs for adjusting end-user expectations of AI systems. In: Conference on Human Factors in Computing Systems - Proceedings, pp. 1–14. Association for Computing Machinery, New York, USA (2019)
Laplace, P.S.: A Philosophical Essay on Probabilities (1825). Translated by Dale, A.I. Springer, Heidelberg (1995)
Mehmetoglu, M., Venturini, S.: Structural Equation Modelling with Partial Least Squares Using Stata and R. CRC Press, Boca Raton (2021)
Mohammadian, M., Javed, Z.: Intelligent evaluation of test suites for developing efficient and reliable software. Int. J. Parallel, Emergent Distrib. Syst. 1–30 (2019)
Pollock, J.L.: Defeasible reasoning with variable degrees of justification. Artif. Intell. 133(1–2), 233–282 (2001)
Popper, K.R.: Conjectures and Refutations: The Growth of Scientific Knowledge, 5th edn. Routledge (1989)
Sensoy, M., Kaplan, L., Cerutti, F., Saleki, M.: Uncertainty-aware deep classifiers using generative models. In: The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020 (2020)
Sensoy, M., Kaplan, L.M., Kandemir, M.: Evidential deep learning to quantify classification uncertainty. In: Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3–8 Dec 2018, Montréal, Canada, pp. 3183–3193 (2018)
Shankar, S., Halpern, Y., Breck, E., Atwood, J., Wilson, J., Sculley, D.: No classification without representation: assessing geodiversity issues in open data sets for the developing world. In: NIPS 2017 Workshop on Machine Learning for the Developing World (2017)
Shannon, C.E.: Communication in the presence of noise. Proc. IRE 37(1), 10–21 (1949)
Tomsett, R., Braines, D., Harborne, D., Preece, A.D., Chakraborty, S.: Interpretable to whom? a role-based model for analyzing interpretable machine learning systems. In: 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018) (2018)
Toniolo, A., et al.: Supporting reasoning with different types of evidence in intelligence analysis. In: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, vol. 2 (2015)
Torralba, A., Efros, A.A.: Unbiased look at dataset bias. In: CVPR 2011, pp. 1521–1528 (2011)
Vassiliades, A., Bassiliades, N., Patkos, T.: Argumentation and explainable artificial intelligence: a survey. Knowl. Eng. Rev. 36, e5 (2021)
Walton, D., Reed, C., Macagno, F.: Argumentation Schemes. Cambridge University Press, New York (2008)
Walton, D.: Rules for plausible reasoning. Informal Logic 14(1) (1992)
Xu, B., Lin, B.: Investigating drivers of CO2 emission in China’s heavy industry: a quantile regression analysis. Energy 206, 118159 (2020)
Acknowledgments
This research was sponsored by the Italian Ministry of Research through a Rita Levi-Montalcini Personal Fellowship (D.M. n. 285, 29/03/2019) and by the U.S. Army Research Laboratory and the U.K. Ministry of Defence under Agreement Number W911NF-16-3-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Research Laboratory, the U.S. Government, the U.K. Ministry of Defence or the U.K. Government. The U.S. and U.K. Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Cerutti, F. (2022). Supporting Trustworthy Artificial Intelligence via Bayesian Argumentation. In: Bandini, S., Gasparini, F., Mascardi, V., Palmonari, M., Vizzari, G. (eds) AIxIA 2021 – Advances in Artificial Intelligence. AIxIA 2021. Lecture Notes in Computer Science, vol 13196. Springer, Cham. https://doi.org/10.1007/978-3-031-08421-8_26
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-08420-1
Online ISBN: 978-3-031-08421-8
eBook Packages: Computer Science, Computer Science (R0)