
Affinitree: A Compositional Framework for Formal Analysis and Explanation of Deep Neural Networks

  • Conference paper
  • Tests and Proofs (TAP 2024)

Abstract

We present Affinitree, a compositional framework for analyzing Deep Neural Networks (DNNs) based on three elementary principles: (1) symbolic execution, (2) infeasible path elimination, and (3) abstraction. The combination of these principles allows one to elegantly solve a number of analysis and verification tasks, such as traditional verification problems with pre- and post-conditions, model explanations in terms of semantically equivalent decision trees, concolic execution for slice-oriented testing, and visual verification of two-dimensional slices. The paper illustrates the flexibility of Affinitree across three use cases: fairness evaluation, adversarial examples, and counterfactuals. Affinitree is available as a modular open-source library for replication, experimentation, and extension.
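To make principles (1) and (2) concrete, the following is a minimal, self-contained Python sketch, not the affinitree API: all names here (`unfold`, `feasible`, the dict-based tree) are illustrative. It symbolically executes a small ReLU network into a semantically equivalent decision tree whose leaves carry affine functions, and it eliminates infeasible activation patterns with an LP feasibility check (via SciPy).

```python
# Illustrative sketch only; the real affinitree library exposes a different API.
import numpy as np
from scipy.optimize import linprog


def feasible(C, d, box=100.0):
    """LP feasibility check: is {x : Cx <= d} non-empty within a bounding box?"""
    if C.shape[0] == 0:          # no constraints collected yet
        return True
    n = C.shape[1]
    res = linprog(np.zeros(n), A_ub=C, b_ub=d, bounds=[(-box, box)] * n)
    return res.status == 0       # 0 = optimum found, i.e. the region is non-empty


def unfold(hidden, out, A, b, C, d):
    """Symbolically execute ReLU layers, branching on each neuron's sign.

    (A, b) is the affine map x -> Ax + b of the current symbolic state,
    (C, d) the path condition Cx <= d collected so far. Leaves carry the
    network's output function on their region, so the resulting tree is
    semantically equivalent to the network.
    """
    if not hidden:               # done: compose the affine output layer
        W, c = out
        return {"leaf": (W @ A, W @ b + c), "path": (C, d)}
    W, c = hidden[0]
    A, b = W @ A, W @ b + c      # apply the affine layer symbolically

    def branch(A, b, C, d, i):
        if i == A.shape[0]:      # all neurons of this layer decided
            return unfold(hidden[1:], out, A, b, C, d)
        node = {}
        # active branch: A_i x + b_i >= 0, encoded as -A_i x <= b_i; ReLU = identity
        Ca, da = np.vstack([C, -A[i]]), np.append(d, b[i])
        if feasible(Ca, da):     # principle (2): prune infeasible paths early
            node["on"] = branch(A, b, Ca, da, i + 1)
        # inactive branch: A_i x + b_i <= 0; ReLU zeroes this coordinate
        Ci, di = np.vstack([C, A[i]]), np.append(d, -b[i])
        Az, bz = A.copy(), b.copy()
        Az[i], bz[i] = 0.0, 0.0
        if feasible(Ci, di):
            node["off"] = branch(Az, bz, Ci, di, i + 1)
        return node

    return branch(A, b, C, d, 0)


# A tiny 2-2-1 ReLU network: 2 hidden neurons give at most 4 leaves,
# typically fewer once infeasible activation patterns are eliminated.
hidden = [(np.array([[1.0, -1.0], [0.5, 1.0]]), np.array([0.0, -1.0]))]
out = (np.array([[1.0, 1.0]]), np.array([0.0]))
tree = unfold(hidden, out, np.eye(2), np.zeros(2),
              np.empty((0, 2)), np.empty(0))
```

Without the feasibility check, a network with k hidden ReLU neurons unfolds into 2^k leaves; in practice many activation patterns have empty regions, so pruning at every branch is what keeps such trees tractable.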


Notes

  1. https://github.com/Conturing/affinitree-py.

  2. Alternative terms are glass-box, transparent, or ante-hoc interpretable model.

  3. Also called fidelitous, semantics-preserving, or functionally equivalent.

  4. In a tree, one can identify paths from the root to a node with the node itself.


Author information

Correspondence to Maximilian Schlüter.



Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Schlüter, M., Steffen, B. (2025). Affinitree: A Compositional Framework for Formal Analysis and Explanation of Deep Neural Networks. In: Huisman, M., Howar, F. (eds) Tests and Proofs. TAP 2024. Lecture Notes in Computer Science, vol 15153. Springer, Cham. https://doi.org/10.1007/978-3-031-72044-4_8


  • DOI: https://doi.org/10.1007/978-3-031-72044-4_8


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72043-7

  • Online ISBN: 978-3-031-72044-4

  • eBook Packages: Computer Science, Computer Science (R0)
