Abstract
Despite its popularity, the decision-making process of a Deep Neural Network (DNN) model is opaque to users, making it difficult to understand the model's behaviour. We present the design of a Web-based DNN interpretability framework built on the core notions of case-based reasoning, in which exemplars (e.g., data points considered similar to a chosen data point) are utilised to support effective interpretation. We demonstrate the framework via a Web-based tool called Deep Explorer (DeX) and present the results of user acceptance studies. Our studies showed the effectiveness of the tool in helping users gain a better understanding of the decision-making process of a DNN model, as well as the efficacy of the case-based approach in improving DNN interpretability.
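The core idea of exemplar-based interpretation can be sketched as follows: embed the training data with the trained model (e.g., penultimate-layer activations), then retrieve the nearest neighbours of a chosen data point in that embedding space as its exemplars. The sketch below is illustrative only, not the paper's implementation; the function name `retrieve_exemplars` and the use of cosine (angular) similarity, as in LSH for angular distance [13], are our assumptions, and the random matrix stands in for real model activations.

```python
import numpy as np

def retrieve_exemplars(train_embeddings, query_embedding, k=5):
    """Return indices of the k training points most similar to the query
    in the model's embedding space, ranked by cosine similarity.
    `train_embeddings`: (n, d) array; `query_embedding`: (d,) array."""
    # Normalise rows so the dot product equals cosine similarity.
    a = train_embeddings / np.linalg.norm(train_embeddings, axis=1, keepdims=True)
    q = query_embedding / np.linalg.norm(query_embedding)
    sims = a @ q
    # Highest similarity first.
    return np.argsort(-sims)[:k]

# Toy usage: 100 random 16-d "activations"; query with point 0's own embedding.
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))
idx = retrieve_exemplars(emb, emb[0], k=3)
```

In a real pipeline the exemplars' original inputs (e.g., MNIST images) would then be shown alongside the query so the user can judge whether the model groups them sensibly; an approximate index such as angular LSH [13] replaces the brute-force scan at scale.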
Notes
1. A video presentation of the system is available from: https://youtu.be/E87X9U53sXg.
2. We used MNIST [10] in our case study implementation.
References
Sturm, I., Lapuschkin, S., Samek, W., Müller, K.R.: Interpretable deep neural networks for single-trial EEG classification. J. Neurosci. Methods 274, 141–145 (2016)
Aamodt, A., Plaza, E.: Case-based reasoning: foundational issues, methodological variations, and system approaches. AI Commun. 7(1), 39–59 (1994)
Bien, J., Tibshirani, R.: Prototype selection for interpretable classification. Ann. Appl. Stat., 2403–2424 (2011)
Kim, B., Khanna, R., Koyejo, O.O.: Examples are not enough, learn to criticize! Criticism for interpretability. In: Advances in Neural Information Processing Systems, pp. 2280–2288 (2016)
Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8689, pp. 818–833. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10590-1_53
Fong, R.C., Vedaldi, A.: Interpretable explanations of black boxes by meaningful perturbation. In: IEEE International Conference on Computer Vision, pp. 3449–3457 (2017)
Koh, P.W., Liang, P.: Understanding black-box predictions via influence functions. In: The 34th International Conference on Machine Learning, pp. 1885–1894 (2017)
Yosinski, J., Clune, J., Nguyen, A., Fuchs, T., Lipson, H.: Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579 (2015)
Wu, H., Wang, C., Yin, J., Lu, K., Zhu, L.: Sharing deep neural network models with interpretation. In: Conference on World Wide Web, pp. 177–186 (2018)
Deng, L.: The MNIST database of handwritten digit images for machine learning research [Best of the Web]. IEEE Signal Process. Mag. 29(6), 141–142 (2012)
van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008)
Andoni, A., Indyk, P., Laarhoven, T., Razenshteyn, I., Schmidt, L.: Practical and optimal LSH for angular distance. In: Advances in Neural Information Processing Systems, pp. 1225–1233 (2015)
Brooke, J.: SUS: a quick and dirty usability scale. In: Usability Evaluation in Industry, vol. 189, no. 194, pp. 4–7 (1996)
Acknowledgements
The authors thank all participants who took part in the application user study.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Nadeem, R., Wu, H., Paik, H.Y., Wang, C. (2019). A Case Based Deep Neural Network Interpretability Framework and Its User Study. In: Cheng, R., Mamoulis, N., Sun, Y., Huang, X. (eds.) Web Information Systems Engineering – WISE 2019. Lecture Notes in Computer Science, vol. 11881. Springer, Cham. https://doi.org/10.1007/978-3-030-34223-4_10
Print ISBN: 978-3-030-34222-7
Online ISBN: 978-3-030-34223-4