
DRIB: Interpreting DNN with Dynamic Reasoning and Information Bottleneck

  • Conference paper
Data Science (ICPCSEE 2022)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1628)


Abstract

The interpretability of deep neural networks has attracted widespread attention in both academia and industry. This paper proposes a new method, dynamic reasoning and information bottleneck (DRIB), to improve human interpretability and understandability. The method introduces a novel dynamic reasoning decision algorithm that reduces multiply-accumulate operations and makes the computation easier to interpret, together with an information bottleneck that verifies the attribution correctness of the dynamic reasoning module. DRIB reduces the computational burden by approximately 50% while maintaining an accuracy of approximately 93%. The information bottleneck analysis confirms the effectiveness of the method, with a credibility of approximately 85%. Visual verification further shows that the highlighted region can cover up to 50% of the predicted region, yielding clearer explanations. Experiments demonstrate that the dynamic reasoning decision algorithm and information bottleneck theory can be combined, and that the resulting method provides users with good interpretability and understandability, making deep neural networks more trustworthy.

Supported by the National Natural Science Foundation of China (Grant No. 61972183).
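Only the abstract is reproduced above, so the exact DRIB design is not shown on this page. As a rough, hedged illustration of the idea the abstract describes, the sketch below implements a hypothetical dynamic channel-gating block in PyTorch: a lightweight decision head switches output channels on or off per input, skipped channels save multiply-accumulate operations, and the resulting gate pattern can be read as a per-sample reasoning trace whose attributions could then be checked with an information-bottleneck analysis, as the abstract states. The class name DynamicGatedConv, the decision head, and the threshold are assumptions for illustration, not the authors' implementation.

```python
# A minimal, hypothetical sketch (PyTorch) of a dynamic channel-gating block.
# Everything here -- the class name, the decision head, and the 0.5 gate
# threshold -- is an illustrative assumption, not the authors' DRIB code.
import torch
import torch.nn as nn


class DynamicGatedConv(nn.Module):
    """Convolution whose output channels are switched on or off per input.

    A lightweight decision head scores each output channel from globally
    pooled input features; channels scoring below the threshold are zeroed,
    so their downstream multiply-accumulate work can be skipped, and the
    surviving gate pattern can be inspected as a per-sample reasoning trace.
    """

    def __init__(self, in_ch: int, out_ch: int, gate_threshold: float = 0.5):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        # Decision head: global context -> per-channel saliency in [0, 1].
        self.decide = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_ch, out_ch),
            nn.Sigmoid(),
        )
        self.gate_threshold = gate_threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        saliency = self.decide(x)                              # (N, out_ch)
        hard_gates = (saliency > self.gate_threshold).float()  # 0/1 decisions
        # Straight-through estimator: hard gates in the forward pass,
        # gradients flow through the soft saliency scores.
        gates = hard_gates + saliency - saliency.detach()
        y = torch.relu(self.bn(self.conv(x)))
        return y * gates.unsqueeze(-1).unsqueeze(-1)


# Usage: the fraction of closed gates approximates the computation saved,
# and the open gates show which channels the block "reasoned" with.
block = DynamicGatedConv(in_ch=16, out_ch=32)
out = block(torch.randn(4, 16, 28, 28))
```

The straight-through estimator is one common way to keep such hard gating decisions trainable; whether DRIB trains its decision algorithm this way is not stated in the abstract.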

Author information

Correspondence to Keyang Cheng.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Si, Y., Cheng, K., Jiang, Z., Zhou, H., Tahir, R. (2022). DRIB: Interpreting DNN with Dynamic Reasoning and Information Bottleneck. In: Wang, Y., Zhu, G., Han, Q., Wang, H., Song, X., Lu, Z. (eds) Data Science. ICPCSEE 2022. Communications in Computer and Information Science, vol 1628. Springer, Singapore. https://doi.org/10.1007/978-981-19-5194-7_14

  • DOI: https://doi.org/10.1007/978-981-19-5194-7_14

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-5193-0

  • Online ISBN: 978-981-19-5194-7

  • eBook Packages: Computer Science, Computer Science (R0)
