
Layer-Wise Relevance Propagation with Conservation Property for ResNet

Conference paper. In: Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

The transparent formulation of explanation methods is essential for elucidating the predictions of neural networks, which are typically black-box models. Layer-wise Relevance Propagation (LRP) is a well-established method that transparently traces the flow of a model’s prediction backward through its architecture by backpropagating relevance scores. However, conventional LRP does not fully account for skip connections, and its application to the widely used ResNet architecture has therefore not been thoroughly explored. In this study, we extend LRP to ResNet models by introducing Relevance Splitting at points where the output from a skip connection converges with that from a residual block. Our formulation guarantees the conservation property throughout the process, thereby preserving the integrity of the generated explanations. To evaluate the effectiveness of our approach, we conduct experiments on ImageNet and the Caltech-UCSD Birds-200-2011 dataset. Our method outperforms baseline methods on standard evaluation metrics such as the Insertion-Deletion score while maintaining its conservation property. We will release our code for further research at https://5ei74r0.github.io/lrp-for-resnet.page/
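The idea of splitting relevance where a skip connection rejoins a residual branch can be illustrated with a minimal sketch. Note this is a generic contribution-proportional LRP-style split with hypothetical toy values, not the paper's exact Relevance Splitting formulation; it only shows the conservation property the abstract refers to.

```python
import numpy as np

def split_relevance(skip_out, res_out, relevance, eps=1e-9):
    """Split the relevance arriving at the merge point of a residual
    block between the skip connection and the residual branch, in
    proportion to each branch's contribution to the merged activation.
    The two shares sum back to the incoming relevance (conservation)."""
    total = skip_out + res_out
    # Sign-aware epsilon keeps the division stable near zero activations.
    denom = np.where(total >= 0, total + eps, total - eps)
    r_skip = relevance * skip_out / denom
    r_res = relevance * res_out / denom
    return r_skip, r_res

# Toy activations at one merge point (hypothetical values).
skip_out = np.array([0.5, 1.0, 2.0])
res_out = np.array([1.5, -0.5, 0.0])
relevance = np.array([1.0, 1.0, 1.0])

r_skip, r_res = split_relevance(skip_out, res_out, relevance)
# Conservation: the split shares sum back to the incoming relevance.
print(np.allclose(r_skip + r_res, relevance))  # True
```

Propagating relevance through both branches independently and recombining it at the next merge point keeps the total relevance constant across the whole network, which is what the conservation property requires.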



Acknowledgements

This work was partially supported by JSPS KAKENHI Grant Number 23H03478, JST CREST, and NEDO.

Author information

Correspondence to Seitaro Otsuki.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 4398 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Otsuki, S. et al. (2025). Layer-Wise Relevance Propagation with Conservation Property for ResNet. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15101. Springer, Cham. https://doi.org/10.1007/978-3-031-72775-7_20


  • DOI: https://doi.org/10.1007/978-3-031-72775-7_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72774-0

  • Online ISBN: 978-3-031-72775-7

  • eBook Packages: Computer Science (R0)
