ABSTRACT
An effective way to visualize the prediction of a deep neural network on an image is to decompose the prediction into the contributions of individual units (pixels or patches). Existing works largely treat these units independently, which limits the quality of the visualization. In this paper, we propose a new prediction visualization method that uses super-pixels as contribution units. Moreover, our method takes the interactions between adjacent super-pixels into consideration. We implement our technique and evaluate its performance on a variety of images. Our results demonstrate its effectiveness.
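The abstract does not give implementation details, but the core idea it describes can be sketched as occlusion-style scoring: each super-pixel's contribution is the drop in the model's score when that region is masked out, and the interaction between two adjacent super-pixels is the joint contribution minus the sum of their individual contributions. The sketch below is a minimal illustration under assumptions of our own: a toy grid partition stands in for a real super-pixel algorithm such as SLIC, `predict` is any scalar-scoring model function, and masking replaces a region with the image mean; none of these choices are specified by the paper.

```python
import numpy as np

def grid_superpixels(h, w, cell=4):
    # Toy stand-in for a real super-pixel segmentation (e.g. SLIC):
    # partition the image into square cells and label them 0, 1, 2, ...
    ys, xs = np.mgrid[0:h, 0:w]
    cols = (w + cell - 1) // cell
    return (ys // cell) * cols + (xs // cell)

def contribution_map(image, labels, predict):
    """Occlusion-style contribution of each super-pixel: the drop in the
    model's score when that region is replaced by the image mean."""
    base = predict(image)
    fill = image.mean()
    scores = {}
    for lab in np.unique(labels):
        occluded = image.copy()
        occluded[labels == lab] = fill
        scores[lab] = base - predict(occluded)
    return scores

def pairwise_interaction(image, labels, predict, a, b):
    """Interaction of two super-pixels: the joint contribution when both
    are occluded together, minus the sum of individual contributions."""
    base = predict(image)
    fill = image.mean()

    def drop(labs):
        occluded = image.copy()
        for lab in labs:
            occluded[labels == lab] = fill
        return base - predict(occluded)

    return drop([a, b]) - drop([a]) - drop([b])

if __name__ == "__main__":
    # Toy 8x8 image with one bright quadrant and a linear "model".
    image = np.zeros((8, 8))
    image[0:4, 0:4] = 1.0
    labels = grid_superpixels(8, 8, cell=4)
    predict = lambda img: float(img.sum())

    scores = contribution_map(image, labels, predict)
    print(scores)  # the bright super-pixel dominates
    print(pairwise_interaction(image, labels, predict, 0, 1))
```

For a purely linear model like the one above, the pairwise interaction is zero; a nonlinear network generally yields nonzero interactions, which is the effect the paper's method is designed to capture.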