
Bridging the Interpretability Gap in Coupled Neural Dynamical Models

  • Conference paper
  • In: Advanced Data Mining and Applications (ADMA 2023)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14176)


Abstract

Neural ordinary differential equations (NODEs) have achieved remarkable performance in many data mining applications involving multivariate time series. Their adoption in the data-driven discovery of dynamical systems, however, has been hindered by the lack of interpretability that comes with the black-box nature of neural networks. In this study, we propose a simple yet effective NODE architecture inspired by the highly successful generalised additive models. The proposed model combines linear and nonlinear components to capture interpretable evolution rules with only a marginal loss of expressiveness. Experiments show that the model can effectively recover interactions among variables in a complex dynamical system from observational data.
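The abstract does not spell out the architecture, so the following is only a minimal sketch of what a generalised-additive-style NODE vector field could look like, under the assumption that the right-hand side decomposes additively into a learnable linear interaction matrix plus univariate nonlinear terms. The names `AdditiveODEFunc` and `euler_integrate` are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn


class AdditiveODEFunc(nn.Module):
    """GAM-style vector field: dx_i/dt = sum_j A[i, j] * x_j + sum_j f_ij(x_j)."""

    def __init__(self, dim: int, hidden: int = 16):
        super().__init__()
        self.dim = dim
        # Linear part: A[i, j] reads directly as the linear influence of x_j on dx_i/dt.
        self.A = nn.Parameter(torch.zeros(dim, dim))
        # Nonlinear part: one small univariate network per (source j -> target i) pair,
        # so each learned curve f_ij can be plotted and inspected on its own.
        self.f = nn.ModuleList([
            nn.ModuleList([
                nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))
                for _ in range(dim)
            ])
            for _ in range(dim)
        ])

    def forward(self, t, x):
        # x: (batch, dim) -> dx/dt: (batch, dim)
        linear = x @ self.A.T
        cols = []
        for i in range(self.dim):
            # Additive nonlinear contribution to dx_i/dt from every variable x_j.
            cols.append(sum(self.f[i][j](x[:, j:j + 1]) for j in range(self.dim)))
        return linear + torch.cat(cols, dim=-1)


def euler_integrate(func, x0, t):
    # Minimal fixed-step Euler solver for illustration only; in practice an
    # adaptive Runge-Kutta solver (e.g. Dormand-Prince) would normally be used.
    xs = [x0]
    for k in range(len(t) - 1):
        dt = t[k + 1] - t[k]
        xs.append(xs[-1] + dt * func(t[k], xs[-1]))
    return torch.stack(xs)


# Toy usage: integrate a random 3-variable system forward in time.
func = AdditiveODEFunc(dim=3)
x0 = torch.randn(4, 3)                # batch of 4 initial states
t = torch.linspace(0.0, 1.0, 21)
trajectory = euler_integrate(func, x0, t)
print(trajectory.shape)               # torch.Size([21, 4, 3])
```

In a decomposition of this kind, the matrix A and the per-pair curves f_ij are the objects one would examine to read off interactions among variables, which is the interpretability target the abstract describes.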



Author information

Corresponding author

Correspondence to Wei Luo.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Xiang, M., Luo, W., Hou, J., Tao, W. (2023). Bridging the Interpretability Gap in Coupled Neural Dynamical Models. In: Yang, X., et al. (eds.) Advanced Data Mining and Applications. ADMA 2023. Lecture Notes in Computer Science (LNAI), vol 14176. Springer, Cham. https://doi.org/10.1007/978-3-031-46661-8_35


  • DOI: https://doi.org/10.1007/978-3-031-46661-8_35

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-46660-1

  • Online ISBN: 978-3-031-46661-8

  • eBook Packages: Computer Science (R0)
