
A Comparison of Saliency Methods for Deep Learning Explainability


Abstract:

Saliency methods are widely used to visually explain the outputs of "black-box" deep learning models to humans. These methods produce saliency maps that aim to identify the parts of an image responsible for, and therefore best explaining, a Convolutional Neural Network (CNN) decision. In this paper, we consider the case of a classifier and the roles of the two main categories of saliency methods: backpropagation-based and perturbation-based attribution. The first category is based on the gradient of the output with respect to the input image, while the second tests how local image perturbations affect the output. We compare the Gradient method, Grad-CAM, Extremal Perturbations, and DEEPCOVER, and highlight the difficulty of determining which method best explains a CNN's decision.
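
To make the contrast between the two categories concrete, the following is a minimal sketch, not the paper's code: a vanilla Gradient saliency map (backpropagation-based) and a single-patch occlusion test (perturbation-based) in PyTorch. The pretrained resnet18, the random placeholder input x, and the fixed 64x64 occlusion patch are all illustrative assumptions.

```python
import torch
import torchvision.models as models

# Illustrative setup: any image classifier and a preprocessed input would do.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
x = torch.rand(1, 3, 224, 224)  # placeholder; use a real preprocessed image

# --- Backpropagation-based: vanilla Gradient saliency ---
x.requires_grad_(True)
scores = model(x)
target = scores.argmax(dim=1).item()
scores[0, target].backward()
# Saliency = per-pixel magnitude of the gradient of the class score
# with respect to the input, reduced over the channel dimension.
saliency = x.grad.abs().max(dim=1)[0]  # shape (1, 224, 224)

# --- Perturbation-based: occlude a patch and measure the score drop ---
with torch.no_grad():
    base = model(x.detach())[0, target].item()
    x_occ = x.detach().clone()
    x_occ[:, :, 80:144, 80:144] = 0.0  # zero out an arbitrary 64x64 patch
    drop = base - model(x_occ)[0, target].item()
# A large `drop` suggests the occluded region was salient for the decision.
```

The methods compared in the paper elaborate on these two basic recipes: Grad-CAM refines the gradient signal using activations at a convolutional layer, while Extremal Perturbations and DEEPCOVER search systematically over perturbation patterns rather than testing a single fixed patch.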
Date of Conference: 29 November 2021 - 01 December 2021
Date Added to IEEE Xplore: 23 December 2021
Conference Location: Gold Coast, Australia
