Abstract:
In recent years, Convolutional Neural Networks (CNNs) have been widely applied across many domains owing to their powerful learning capability. However, their lack of explainability hinders their adoption in tasks requiring high reliability, so interpretability techniques are key to the application and deployment of CNN models. As a typical interpretability technique for CNNs, Class Activation Mapping (CAM), which combines gradient-based weights with activation maps, is widely applied to conventional CNN models to provide visual interpretability. However, the activation map adopted by CAM cannot faithfully quantify the relevance between input samples and activation values. In this paper, we therefore propose a new interpretability approach, Salience-CAM, which employs salience scores to accurately measure the relevance between input samples and activation values. To evaluate the effectiveness of Salience-CAM, we conduct comprehensive experiments on six selected time series datasets. Using an evaluation algorithm proposed in this paper, the experimental results show that Salience-CAM outperforms the baseline by discovering more discriminative features.
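For orientation, the sketch below illustrates the gradient-weighted CAM baseline the abstract refers to (weights obtained by averaging gradients over the activation map, then combined with the activations). It is a minimal assumption-laden example, not the authors' Salience-CAM: the 1D network (SmallTSNet), layer sizes, and function names are hypothetical, and the salience-score computation that distinguishes Salience-CAM is not specified in the abstract.

```python
# Minimal Grad-CAM-style sketch for a 1D (time series) CNN.
# Assumptions: toy architecture, univariate input of length 128.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallTSNet(nn.Module):
    """Toy 1D CNN for univariate time series (hypothetical architecture)."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        a = self.features(x)                  # activation map A: (B, K, T)
        return self.head(a.mean(dim=-1)), a   # global average pool + linear head

def grad_cam_1d(model, x, target_class):
    """Gradient-weighted CAM over the last conv layer's activation map."""
    model.eval()
    logits, acts = model(x)
    acts.retain_grad()                        # keep dy/dA for the channel weights
    logits[0, target_class].backward()
    weights = acts.grad.mean(dim=-1, keepdim=True)  # alpha_k = GAP of gradients
    cam = F.relu((weights * acts).sum(dim=1))       # ReLU(sum_k alpha_k * A_k)
    return cam / (cam.max() + 1e-8)           # normalize to [0, 1]

# Usage: highlight which time steps drive the predicted class.
x = torch.randn(1, 1, 128)                    # one series of length 128
model = SmallTSNet()
cam = grad_cam_1d(model, x, target_class=0)   # shape (1, 128)
```

The abstract's criticism targets exactly the `acts` term in this weighting: CAM-style maps reuse raw activation values as the relevance signal, which is what Salience-CAM replaces with salience scores.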
Date of Conference: 18-22 July 2021