Abstract
Instilling end-user confidence in the abilities of machine learning systems is seen as critical to their success on real-world problems. One way to achieve this is to provide users with interpretable explanations of the system's predictions. Case-based reasoning (CBR) systems have long been understood to have an inherent transparency that offers particular advantages for explanation over other machine learning techniques. However, simply supplying the most similar case is often not enough. In this paper we present a framework for providing interpretable explanations of CBR system predictions, which includes dynamically created discursive texts explaining the feature-value relationships and a measure of confidence that the CBR system's prediction is correct. We also present a means by which the trade-off between being overly confident and being overly cautious can be evaluated and different methods compared. We have carried out a preliminary user evaluation of the framework and present our findings. It is clear from this evaluation that being right is important: caveats and notes of caution when the system is uncertain appear to damage user confidence.
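The paper's framework itself is not reproduced on this page, but the general idea of attaching a confidence measure to a CBR prediction can be sketched. The following is a minimal illustration, assuming confidence is derived from similarity-weighted agreement among the k nearest cases; the function name `knn_predict_with_confidence` and the 1/(1+distance) similarity are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch: confidence as the share of similarity-weighted
# votes won by the predicted label among the k nearest cases.
from collections import Counter

def knn_predict_with_confidence(query, case_base, k=3):
    """Return (prediction, confidence) for a query case.

    case_base is a list of (feature_tuple, label) pairs; confidence is
    the fraction of similarity-weighted votes won by the winning label.
    """
    def dist(a, b):
        # Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    neighbours = sorted(case_base, key=lambda c: dist(query, c[0]))[:k]
    votes = Counter()
    for features, label in neighbours:
        # Closer cases carry more weight: similarity = 1 / (1 + distance).
        votes[label] += 1.0 / (1.0 + dist(query, features))
    prediction, winning = votes.most_common(1)[0]
    return prediction, winning / sum(votes.values())
```

A confidence threshold can then be swept over a test set, counting over-confident outcomes (high confidence, wrong prediction) against over-cautious ones (low confidence, correct prediction), which is the kind of trade-off comparison the abstract alludes to.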
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Nugent, C., Cunningham, P., Doyle, D. (2005). The Best Way to Instil Confidence Is by Being Right. In: Muñoz-Ávila, H., Ricci, F. (eds) Case-Based Reasoning Research and Development. ICCBR 2005. Lecture Notes in Computer Science(), vol 3620. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11536406_29
Print ISBN: 978-3-540-28174-0
Online ISBN: 978-3-540-31855-2