
Efficient Estimation of General Additive Neural Networks: A Case Study for CTG Data

  • Conference paper
ECML PKDD 2020 Workshops (ECML PKDD 2020)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1323)


Abstract

This paper discusses the concepts of interpretability and explainability and outlines desiderata for robust interpretability. It then describes a neural network model that meets all of these criteria and, in addition, is globally faithful.

This is achieved by efficient estimation of a General Additive Neural Network seeded by a conventional Multilayer Perceptron (MLP): the MLP's dependence on individual variables and on pairwise interactions is distilled so that their effects can be represented within the structure of a General Additive Model. This makes the logic of the model clear and transparent to users across the complete input space. The model is self-explaining.
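To make the distillation step concrete, the sketch below shows one way the partial responses of a trained MLP could be extracted in logit space and recombined as the components of a sparse additive model. It is a minimal illustration on synthetic data, assuming standardised inputs so that 0 can serve as the reference value; the helpers `partial_response_1d` and `partial_response_2d` and the L1-penalised selection step are assumptions of this sketch, not the authors' implementation.

```python
# Minimal sketch of the partial-response distillation idea (illustrative only).
# Inputs are standardised so that 0 is a natural reference value; the helper
# names below are assumptions of this sketch, not the paper's code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X = StandardScaler().fit_transform(X)

mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)

def logit(p, eps=1e-9):
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

def partial_response_1d(model, X, j):
    """MLP response to feature j, all other inputs held at the reference value 0."""
    Z = np.zeros_like(X)
    Z[:, j] = X[:, j]
    return logit(model.predict_proba(Z)[:, 1])

def partial_response_2d(model, X, j, k):
    """Pairwise interaction: joint response minus the two univariate effects."""
    Z = np.zeros_like(X)
    Z[:, j] = X[:, j]
    Z[:, k] = X[:, k]
    joint = logit(model.predict_proba(Z)[:, 1])
    return joint - partial_response_1d(model, X, j) - partial_response_1d(model, X, k)

# Stack the univariate responses as additive basis functions and let an
# L1-penalised logistic regression keep a sparse subset of them; pairwise
# terms from partial_response_2d could be appended in the same way.
Phi = np.column_stack([partial_response_1d(mlp, X, j) for j in range(X.shape[1])])
gam = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(Phi, y)
print("selected effects:", np.flatnonzero(gam.coef_[0]))
```

In the full method, "efficient estimation" would then refit the additive network seeded with the selected responses; the sketch stops at term selection for brevity.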

The modelling approach used in this paper derives the partial responses from the MLP, resulting in the Partial Response Network (PRN). Its application is illustrated in a medical context using the CTU-UHB Cardiotocography intrapartum database (n = 552) to infer the features associated with caesarean deliveries. This is the first application of the PRN to this data set, and the self-explaining model is shown to achieve discrimination performance comparable to that of Random Forests previously applied to the same data set. The classes are highly imbalanced, with a prevalence of caesarean sections of 8.33%. The resulting model uses 4 of the 8 possible features and has an AUROC of 0.69 [CI 0.60, 0.77], estimated by 4-fold cross-validation. Its performance and features are also compared with those of a Sparse Additive Model (SAM), which has an AUROC of 0.72 [CI 0.64, 0.80]. This difference is not statistically significant, but the SAM requires all of the features.
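The discrimination results above suggest an evaluation protocol along the following lines: stratified 4-fold cross-validation to preserve the 8.33% prevalence in every fold, pooling of the out-of-fold predicted probabilities, and a bootstrap 95% confidence interval for the AUROC. The sketch below illustrates such a protocol on synthetic stand-in data; the `cv_auroc` helper and the logistic-regression placeholder are assumptions for illustration, not the paper's evaluation code.

```python
# Sketch of the evaluation protocol: stratified 4-fold cross-validation on an
# imbalanced outcome, pooled out-of-fold probabilities, and a bootstrap 95% CI
# for the AUROC. The synthetic data is a stand-in for the CTG features.
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=552, n_features=8, weights=[0.92],
                           random_state=0)            # roughly 8% positives

def cv_auroc(model, X, y, n_splits=4, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    oof = np.zeros(len(y))                             # out-of-fold probabilities
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train, test in skf.split(X, y):
        oof[test] = clone(model).fit(X[train], y[train]).predict_proba(X[test])[:, 1]
    auc = roc_auc_score(y, oof)
    # Non-parametric bootstrap over cases; resamples missing a class are skipped.
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) == 2:
            boots.append(roc_auc_score(y[idx], oof[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return auc, (lo, hi)

print(cv_auroc(LogisticRegression(max_iter=1000), X, y))
```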

For clinical utility by risk stratification, the odds ratio for caesarean section vs. not at the prevalence threshold is 3.97 for the PRN, better than the 3.14 obtained for the SAM. When compared for consistency, parsimony, stability and scalability, the two models have complementary properties.
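The risk-stratification metric can be read as follows: predicted risks are dichotomised at the observed prevalence, and the odds of caesarean section in the high-risk group are contrasted with the odds in the low-risk group. The helper below is a hypothetical sketch of that calculation; `p_hat` stands for pooled out-of-fold predicted probabilities (such as those produced inside the previous sketch), `y` for the observed labels, and the continuity correction `eps` is an added safeguard against empty cells rather than anything reported in the paper.

```python
# Hypothetical sketch of the odds ratio at the prevalence threshold: dichotomise
# predicted risk at the observed prevalence and contrast the odds of the outcome
# in the high-risk vs. low-risk groups. p_hat and y are assumed to be pooled
# out-of-fold probabilities and observed labels, e.g. from the previous sketch.
import numpy as np

def odds_ratio_at_prevalence(y, p_hat, eps=0.5):
    y = np.asarray(y, dtype=bool)
    high = np.asarray(p_hat) >= y.mean()     # threshold = observed prevalence
    a = np.sum(high & y) + eps               # events, high-risk group
    b = np.sum(high & ~y) + eps              # non-events, high-risk group
    c = np.sum(~high & y) + eps              # events, low-risk group
    d = np.sum(~high & ~y) + eps             # non-events, low-risk group
    return (a / b) / (c / d)                 # Haldane-corrected odds ratio
```

Thresholding at the prevalence flags cases whose predicted risk exceeds the average risk, which gives both models a common operating point for comparison.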




Author information


Corresponding author

Correspondence to P. J. G. Lisboa.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Lisboa, P.J.G., Ortega-Martorell, S., Jayabalan, M., Olier, I. (2020). Efficient Estimation of General Additive Neural Networks: A Case Study for CTG Data. In: Koprinska, I., et al. ECML PKDD 2020 Workshops. ECML PKDD 2020. Communications in Computer and Information Science, vol 1323. Springer, Cham. https://doi.org/10.1007/978-3-030-65965-3_29


  • DOI: https://doi.org/10.1007/978-3-030-65965-3_29


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-65964-6

  • Online ISBN: 978-3-030-65965-3

  • eBook Packages: Computer Science, Computer Science (R0)
