Abstract
To address the poor interpretability of current neural network models, we adopt a neural-backed decision tree model that combines the high recognition accuracy of neural networks with the strong interpretability of decision trees. We employ ResNet18 as the backbone to mitigate the vanishing-gradient problem that arises as network depth increases. An induced hierarchy is constructed in the weight space of the trained network; because this hierarchy is derived from the model parameters, it helps avoid overfitting while yielding higher accuracy. Concretely, the trained network weights are used to build a tree structure, and the classification network is then retrained or fine-tuned with an additional hierarchy-based tree-supervision loss term. The neural network backbone featurizes each sample, and a decision tree built in the weight space is run over these features, enhancing the interpretability of the model while its optimization proceeds. Compared with the original model, the traditional hard decision tree inference rules are replaced by soft inference rules under a soft tree-supervision loss, improving the classification accuracy and generalization ability of the model; the approach thus not only maintains high accuracy but also makes the recognition and classification process explicit.
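To make the described recipe concrete, the sketch below illustrates the three steps from the abstract: building an induced hierarchy by agglomeratively clustering the rows of the final fully connected layer, running soft inference over that tree, and adding a soft tree-supervision term to the ordinary cross-entropy during fine-tuning. This is a minimal sketch under stated assumptions, not the authors' implementation; the helper names (`Node`, `build_induced_hierarchy`, `soft_leaf_probs`, `training_loss`) and the loss weight `omega = 0.5` are illustrative.

```python
# Minimal NBDT-style sketch: induced hierarchy from FC weights plus a
# soft tree-supervision loss. Helper names and omega are assumptions.
import torch
import torch.nn.functional as F
from scipy.cluster.hierarchy import linkage
from torchvision.models import resnet18

NUM_CLASSES = 10

class Node:
    """Tree node with a representative weight vector; leaves keep a class id."""
    def __init__(self, vec, children=(), cls=None):
        self.vec, self.children, self.cls = vec, list(children), cls

def build_induced_hierarchy(fc_weight):
    """Agglomeratively cluster the rows of the final FC layer.

    Each class weight vector is a leaf; every merge found by Ward-linkage
    clustering becomes an inner node whose vector is the mean of its
    children, giving the hierarchy "in weight space"."""
    W = fc_weight.detach()
    nodes = [Node(W[i], cls=i) for i in range(W.shape[0])]
    for a, b, _, _ in linkage(W.cpu().numpy(), method="ward"):
        left, right = nodes[int(a)], nodes[int(b)]
        nodes.append(Node((left.vec + right.vec) / 2, children=[left, right]))
    return nodes[-1]  # root of the induced hierarchy

def soft_leaf_probs(feat, node, out, path_p=1.0):
    """Soft inference: softmax over child inner products at each inner
    node, multiplying branch probabilities down to every leaf."""
    if not node.children:
        out[node.cls] = path_p
        return
    logits = torch.stack([feat @ child.vec for child in node.children])
    for p, child in zip(torch.softmax(logits, dim=0), node.children):
        soft_leaf_probs(feat, child, out, path_p * p)

model = resnet18(num_classes=NUM_CLASSES)
backbone = torch.nn.Sequential(*list(model.children())[:-1])  # drop the FC head
root = build_induced_hierarchy(model.fc.weight)
omega = 0.5  # assumed weight for the tree-supervision term

def training_loss(images, labels):
    feats = backbone(images).flatten(1)            # backbone featurizes samples
    ce = F.cross_entropy(model.fc(feats), labels)  # ordinary classification loss
    tree_terms = []
    for f, y in zip(feats, labels):                # soft tree-supervision loss
        out = {}
        soft_leaf_probs(f, root, out)
        tree_terms.append(-torch.log(out[y.item()] + 1e-12))
    return ce + omega * torch.stack(tree_terms).mean()

images = torch.randn(4, 3, 32, 32)
labels = torch.randint(0, NUM_CLASSES, (4,))
training_loss(images, labels).backward()  # fine-tune with the combined loss
```

The soft rules matter here: hard inference would take an argmax at each inner node, which is not differentiable, whereas the soft path probabilities keep every root-to-leaf path differentiable, so the tree-supervision term can be trained jointly with the network by backpropagation.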
Funding
This research was funded in part by the National Natural Science Foundation of China, grant number 62172122, and the Scientific and Technological Innovation 2030 - Major Project of "Brain Science and Brain-Like Intelligence Technology Research", grant number 2021ZD0200406.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Xu, L., Jia, W., Jiang, J., Yu, Y. (2022). An Interpretability Algorithm of Neural Network Based on Neural Support Decision Tree. In: Memmi, G., Yang, B., Kong, L., Zhang, T., Qiu, M. (eds) Knowledge Science, Engineering and Management. KSEM 2022. Lecture Notes in Computer Science, vol. 13369. Springer, Cham. https://doi.org/10.1007/978-3-031-10986-7_41
DOI: https://doi.org/10.1007/978-3-031-10986-7_41
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-10985-0
Online ISBN: 978-3-031-10986-7