Abstract
For safety-critical applications, it remains a challenge to use AI while fulfilling all regulatory requirements. Medicine/healthcare and transportation are two fields in which regulatory requirements are of fundamental importance, since a wrong decision can lead to serious hazards or even death. In these fields, semantic segmentation is often used to extract features, and U-Net architectures are especially common. This paper shows how to apply layer-wise relevance propagation (LRP) to a trained U-Net architecture. We achieve an efficient explanation of a segmentation by back-propagating the whole resulting image. To handle the non-linear distribution of the LRP results, we introduce a threshold mechanism combined with a logarithmic transfer function to preprocess the data for visualization. We demonstrate our method on three use cases: the segmentation of a fiber-reinforced polymer in the field of non-destructive testing, the segmentation of pedestrians in an automotive application, and a lung segmentation example from the medical domain.
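The abstract names two concrete preprocessing steps for visualizing LRP output: a threshold mechanism and a logarithmic transfer function. The sketch below, which is not the authors' implementation, illustrates how such a preprocessing step could look in Python/NumPy. The function name, the default threshold, the sign-preserving logarithmic magnitude, and the symmetric normalization are all assumptions made for illustration only.

```python
# Minimal sketch (assumed, not the authors' code): preparing an LRP relevance
# map for visualization with a threshold plus a logarithmic transfer function.
import numpy as np

def prepare_relevance_for_visualization(relevance, threshold=1e-6):
    """Map a raw LRP relevance map to [-1, 1] for display.

    relevance : 2D array of per-pixel relevance scores; values may span
                several orders of magnitude and carry both signs.
    threshold : magnitudes below this value are treated as irrelevant noise
                (illustrative default, not taken from the paper).
    """
    r = np.asarray(relevance, dtype=np.float64)

    # Suppress near-zero relevance so it does not dominate the log transform.
    mask = np.abs(r) >= threshold
    out = np.zeros_like(r)

    # Logarithmic transfer applied to the magnitude, sign preserved, so that
    # very large and very small relevance values remain distinguishable.
    mag = np.log1p(np.abs(r[mask]) / threshold)
    out[mask] = np.sign(r[mask]) * mag

    # Normalize symmetrically to [-1, 1] for a diverging colormap.
    peak = np.max(np.abs(out)) if np.any(mask) else 1.0
    return out / peak

if __name__ == "__main__":
    # Placeholder input standing in for a real LRP relevance map.
    rel = np.random.randn(256, 256) * np.random.rand(256, 256) ** 4
    vis = prepare_relevance_for_visualization(rel)
    print(vis.min(), vis.max())  # values lie in [-1, 1]
```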
Acknowledgment
The research leading to these results has received funding through research subsidies granted by the government of Upper Austria within the projects "X-PRO" and "XPlain", grant no. 895981. The research has also been supported by the European Regional Development Fund within the project Pemowe (BA0100107) as part of the INTERREG Programme Bayern-Österreich 2021-2027.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Weinberger, P., Fröhler, B., Heim, A., Gall, A., Bodenhofer, U., Senck, S. (2025). Applying Layer-Wise Relevance Propagation on U-Net Architectures. In: Antonacopoulos, A., Chaudhuri, S., Chellappa, R., Liu, CL., Bhattacharya, S., Pal, U. (eds) Pattern Recognition. ICPR 2024. Lecture Notes in Computer Science, vol 15312. Springer, Cham. https://doi.org/10.1007/978-3-031-78198-8_8
DOI: https://doi.org/10.1007/978-3-031-78198-8_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-78197-1
Online ISBN: 978-3-031-78198-8