Abstract
Neural ordinary differential equations (NODEs) have achieved remarkable performance in many data mining applications involving multivariate time series. However, their adoption in the data-driven discovery of dynamical systems has been hindered by a lack of interpretability stemming from the black-box nature of neural networks. In this study, we propose a simple yet effective NODE architecture inspired by the highly successful generalised additive models. Our proposed model combines linear and nonlinear components to capture interpretable evolution rules with only a marginal loss of model expressiveness. Experiments show that our model can effectively recover the interactions among variables of a complex dynamical system from observational data.
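The abstract does not spell out the architecture, but the stated design, an interpretable linear coupling term plus per-variable nonlinear components in the style of a generalised additive model, suggests a vector field of the form dx/dt = Ax + sum_j g_j(x_j). The following is a minimal PyTorch sketch under that assumption; the class name AdditiveVectorField, the solver call, and all sizes are hypothetical illustrations, not the authors' implementation.

import torch
import torch.nn as nn

class AdditiveVectorField(nn.Module):
    def __init__(self, dim: int, hidden: int = 32):
        super().__init__()
        # Linear coupling matrix A: its entries read directly as pairwise
        # interaction strengths between state variables.
        self.linear = nn.Linear(dim, dim, bias=False)
        # One small MLP per variable, each seeing only its own coordinate,
        # analogous to the shape functions of a generalised additive model.
        self.shapes = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))
            for _ in range(dim)
        )

    def forward(self, t: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, dim); concatenate the per-variable nonlinear terms.
        g = torch.cat([gj(x[:, j:j + 1]) for j, gj in enumerate(self.shapes)], dim=-1)
        return self.linear(x) + g

# Such a field plugs into any standard NODE solver, e.g. torchdiffeq (Chen et al., 2018):
# from torchdiffeq import odeint
# trajectory = odeint(AdditiveVectorField(dim=3), x0, t)  # x0: (batch, 3); t: 1-D time grid

Under this reading, interpretability comes from inspecting the learned matrix A (which variables drive which) and plotting each one-dimensional g_j, while the nonlinear terms limit the expressiveness lost relative to a fully black-box vector field.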
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Xiang, M., Luo, W., Hou, J., Tao, W.: Bridging the Interpretability Gap in Coupled Neural Dynamical Models. In: Yang, X., et al. (eds.) Advanced Data Mining and Applications (ADMA 2023). Lecture Notes in Computer Science, vol. 14176. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-46661-8_35
Print ISBN: 978-3-031-46660-1
Online ISBN: 978-3-031-46661-8