A Case Based Deep Neural Network Interpretability Framework and Its User Study

  • Conference paper

Part of the book Web Information Systems Engineering – WISE 2019 (WISE 2020), Lecture Notes in Computer Science (LNISA, volume 11881).

Abstract

Despite its popularity, the decision-making process of a Deep Neural Network (DNN) model is opaque to users, making it difficult to understand the behaviour of the model. We present the design of a Web-based DNN interpretability framework grounded in the core notions of case-based reasoning, where exemplars (e.g., data points considered similar to a chosen data point) are utilised to help achieve effective interpretation. We demonstrate the framework via a Web-based tool called Deep Explorer (DeX) and present the results of user acceptance studies. Our studies showed the effectiveness of the tool in gaining a better understanding of the decision-making process of a DNN model, as well as the efficacy of the case-based approach in improving DNN interpretability.
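The paper's implementation details are not included in this excerpt. As a minimal sketch of the exemplar-retrieval idea behind case-based interpretation, one might embed a query input in the model's feature space (e.g., penultimate-layer activations) and return its nearest training examples; the function name and the toy feature matrix below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def nearest_exemplars(query_feat, train_feats, k=3):
    """Return indices of the k training points closest to the query
    in the model's feature space (brute-force Euclidean distance)."""
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    return np.argsort(dists)[:k]

# Toy "activation" space: 6 training points in 2-D.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(6, 2))
query_feat = train_feats[2] + 0.01  # a query lying near training point 2
idx = nearest_exemplars(query_feat, train_feats, k=3)
print(idx[0])  # index of the closest exemplar
```

At scale, brute-force search would typically be replaced by an approximate method such as locality-sensitive hashing (cf. reference [12]); the retrieved exemplars can then be shown to the user alongside the model's prediction.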


Notes

  1. A video presentation of the system is available from: https://youtu.be/E87X9U53sXg.

  2. We used MNIST [10] in our case study implementation.

References

  1. Sturm, I., Lapuschkin, S., Samek, W., Müller, K.R.: Interpretable deep neural networks for single-trial EEG classification. J. Neurosci. Methods 274, 141–145 (2016)

  2. Aamodt, A., Plaza, E.: Case-based reasoning: foundational issues, methodological variations, and system approaches. AI Commun. 7(1), 39–59 (1994)

  3. Bien, J., Tibshirani, R.: Prototype selection for interpretable classification. Ann. Appl. Stat., 2403–2424 (2011)

  4. Kim, B., Khanna, R., Koyejo, O.O.: Examples are not enough, learn to criticize! Criticism for interpretability. In: Advances in Neural Information Processing Systems, pp. 2280–2288 (2016)

  5. Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8689, pp. 818–833. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10590-1_53

  6. Fong, R.C., Vedaldi, A.: Interpretable explanations of black boxes by meaningful perturbation. In: IEEE Conference on Computer Vision, pp. 3449–3457 (2017)

  7. Koh, P.W., Liang, P.: Understanding black-box predictions via influence functions. In: The 34th International Conference on Machine Learning, pp. 1885–1894 (2017)

  8. Yosinski, J., Clune, J., Nguyen, A., Fuchs, T., Lipson, H.: Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579 (2015)

  9. Wu, H., Wang, C., Yin, J., Lu, K., Zhu, L.: Sharing deep neural network models with interpretation. In: Conference on World Wide Web, pp. 177–186 (2018)

  10. Deng, L.: The MNIST database of handwritten digit images for machine learning research [best of the web]. IEEE Signal Process. Mag. 29(6), 141–142 (2012)

  11. van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008)

  12. Andoni, A., Indyk, P., Laarhoven, T., Razenshteyn, I., Schmidt, L.: Practical and optimal LSH for angular distance. In: Advances in Neural Information Processing Systems, pp. 1225–1233 (2015)

  13. Brooke, J.: SUS: a quick and dirty usability scale. In: Usability Evaluation in Industry, vol. 189, no. 194, pp. 4–7 (1996)

Acknowledgements

The authors thank all participants who took part in the application user study.

Author information

Correspondence to Hye-young Paik.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Nadeem, R., Wu, H., Paik, Hy., Wang, C. (2019). A Case Based Deep Neural Network Interpretability Framework and Its User Study. In: Cheng, R., Mamoulis, N., Sun, Y., Huang, X. (eds) Web Information Systems Engineering – WISE 2019. WISE 2020. Lecture Notes in Computer Science, vol 11881. Springer, Cham. https://doi.org/10.1007/978-3-030-34223-4_10

  • DOI: https://doi.org/10.1007/978-3-030-34223-4_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-34222-7

  • Online ISBN: 978-3-030-34223-4

  • eBook Packages: Computer Science (R0)
