Abstract:
The interpretation of convolutional neural networks (CNNs) critically influences our understanding of the internal dynamics of deep learning models. In this paper, we present an interpretable training method, class activation mapping guided adversarial training (CAMAT), for two typical remote sensing tasks: land-use classification and object detection. We first generate class activation maps for the current batch of training samples. A class activation map is a class-specific saliency map that quantifies the contribution of each image region to the CNN's prediction. High-contribution regions in the training samples are then occluded, and the partially masked images are used as inputs for network training. Under this paradigm, the regions most important for the network's learning and decision making are purposefully disturbed during training, so the trained model achieves better robustness and generalization. Experiments on classic remote sensing datasets verify the effectiveness and efficiency of the proposed CAMAT.
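The following is a minimal PyTorch sketch of the CAM-guided occlusion step described in the abstract. The backbone (a ResNet-18), the hook-based feature extraction, and the 0.7 occlusion threshold are illustrative assumptions, not the authors' released implementation.

```python
# Sketch: occlude high-contribution regions found by class activation maps (CAMs)
# before feeding a batch to training. All names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.resnet18(num_classes=10)
features = {}

def hook(module, inp, out):
    features["maps"] = out  # last conv feature maps, shape (B, C, h, w)

model.layer4.register_forward_hook(hook)

def cam_masked_batch(images, labels, threshold=0.7):
    """Return a batch with CAM high-contribution regions zeroed out."""
    with torch.no_grad():
        model(images)                                     # forward pass to fill the hook
        fmaps = features["maps"]                          # (B, C, h, w)
        weights = model.fc.weight[labels]                 # (B, C) class-specific fc weights
        cams = torch.einsum("bc,bchw->bhw", weights, fmaps)
        cams = F.relu(cams)
        cams = cams / (cams.amax(dim=(1, 2), keepdim=True) + 1e-8)
        cams = F.interpolate(cams.unsqueeze(1), size=images.shape[-2:],
                             mode="bilinear", align_corners=False)
        mask = (cams < threshold).float()                 # keep only low-contribution pixels
    return images * mask                                  # occluded inputs for training

# Usage sketch: masked = cam_masked_batch(batch_images, batch_labels)
#               loss = criterion(model(masked), batch_labels)
```

The masked batch is then used in place of the original images for the training step, so the loss is computed on inputs whose most decision-critical regions have been suppressed.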
Date of Conference: 28 July 2019 - 02 August 2019