
An Interpretability Algorithm of Neural Network Based on Neural Support Decision Tree

  • Conference paper
Knowledge Science, Engineering and Management (KSEM 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13369)


Abstract

Current neural network models suffer from poor interpretability. To improve it, we adopt a neural support decision tree model that combines the high recognition accuracy of neural networks with the strong interpretability of decision trees. ResNet18 serves as the backbone, mitigating the vanishing-gradient problem that arises as network depth grows. Constructing an induced hierarchy in the weight space yields higher accuracy, and deriving that hierarchy from the model parameters avoids overfitting. The trained network weights are used to build a tree structure for tree-supervision-loss training, and the classification network is retrained or fine-tuned with an additional hierarchy-based loss term. The neural network backbone featurizes each sample, and a decision tree built in the weight space is run over these features, which makes the model interpretable while also optimizing it. Compared with the original model, the traditional hard decision-tree inference rules are abandoned in favor of soft inference rules with a soft tree-supervision loss, which improves the model's classification accuracy and generalization ability: it not only preserves high accuracy but also makes the recognition and classification process explicit.
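As a concrete illustration of the pipeline described above, the sketch below shows one plausible NBDT-style realization, not the authors' implementation: the class weight vectors (rows of the backbone's final fully connected layer) are clustered agglomeratively to induce a hierarchy, and soft decision rules multiply branch probabilities along each root-to-leaf path. The matrix `W`, the feature vector `feats`, and the helper `node_vector` are illustrative stand-ins; random values replace a trained ResNet18 backbone.

```python
# Minimal sketch of an induced hierarchy with soft decision-rule inference.
# Assumptions: W stands in for the trained weights of ResNet18's final fully
# connected layer (one row per class); feats stands in for the backbone's
# penultimate-layer features of one sample.
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree

rng = np.random.default_rng(0)
num_classes, feat_dim = 10, 512                # e.g. CIFAR-10 with ResNet18
W = rng.normal(size=(num_classes, feat_dim))   # stand-in for trained FC weights

# 1. Induced hierarchy: agglomeratively cluster the class weight vectors,
#    so inner nodes group classes the network already treats as similar.
tree = to_tree(linkage(W, method="ward"))

def node_vector(node):
    """Represent an inner node by the mean of its leaves' weight vectors."""
    leaves = node.pre_order(lambda leaf: leaf.id)
    return W[leaves].mean(axis=0)

# 2. Soft inference: at every inner node, split probability mass between the
#    children by comparing the sample's features with each child's vector;
#    a leaf's probability is the product of branch probabilities on its path.
def soft_inference(feats, node, prob=1.0, out=None):
    out = {} if out is None else out
    if node.is_leaf():
        out[node.id] = prob
        return out
    scores = np.array([node_vector(node.left) @ feats,
                       node_vector(node.right) @ feats])
    scores -= scores.max()                     # numerical stability
    p_left, p_right = np.exp(scores) / np.exp(scores).sum()
    soft_inference(feats, node.left, prob * p_left, out)
    soft_inference(feats, node.right, prob * p_right, out)
    return out

feats = rng.normal(size=feat_dim)              # stand-in for backbone features
leaf_probs = soft_inference(feats, tree)
print(max(leaf_probs, key=leaf_probs.get))     # most probable class (leaf id)
```

The soft tree-supervision loss mentioned in the abstract would then add a cross-entropy term over these path-derived leaf probabilities to the standard classification loss when retraining or fine-tuning the backbone.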



Funding

This research was funded in part by the National Natural Science Foundation of China, grant number 62172122, and the Scientific and Technological Innovation 2030 - Major Project of "Brain Science and Brain-Like Intelligence Technology Research", grant number 2021ZD0200406.

Author information


Corresponding author

Correspondence to Yuntao Yu.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Xu, L., Jia, W., Jiang, J., Yu, Y. (2022). An Interpretability Algorithm of Neural Network Based on Neural Support Decision Tree. In: Memmi, G., Yang, B., Kong, L., Zhang, T., Qiu, M. (eds) Knowledge Science, Engineering and Management. KSEM 2022. Lecture Notes in Computer Science, vol 13369. Springer, Cham. https://doi.org/10.1007/978-3-031-10986-7_41


  • DOI: https://doi.org/10.1007/978-3-031-10986-7_41

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-10985-0

  • Online ISBN: 978-3-031-10986-7

  • eBook Packages: Computer Science, Computer Science (R0)
