
Exploring Internal Representations of Deep Neural Networks

  • Conference paper

Part of the book series: Studies in Computational Intelligence ((SCI,volume 829))

Abstract

This paper introduces a method for generating images that activate any target neuron, or group of neurons, of a trained convolutional neural network (CNN). These images are created in such a way that they contain attributes of natural images, such as color patterns or textures. The main idea of the method is to pre-train a deep generative network on a dataset of natural images and then use this network to generate images for the target CNN. Analyzing the generated images allows for a better understanding of the CNN's internal representations, the detection of otherwise unseen biases, and the creation of explanations through feature localization and description.
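The procedure the abstract describes, performing gradient ascent on the latent code of a pre-trained generator rather than directly on pixels, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the `generator` and `cnn` models, the function name `maximize_activation`, and all hyperparameters are hypothetical, and Keras/TensorFlow is chosen only as a representative framework.

```python
import tensorflow as tf

def maximize_activation(generator, cnn, target_index,
                        steps=200, lr=0.05, latent_dim=128):
    """Sketch: optimize a latent code z so that the generated image
    G(z) maximally activates one output unit of a frozen CNN.
    Because z is fed through a generator trained on natural images,
    the result inherits natural-image attributes (colors, textures)."""
    z = tf.Variable(tf.random.normal([1, latent_dim]))
    opt = tf.keras.optimizers.Adam(learning_rate=lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            image = generator(z, training=False)   # natural-image prior
            activation = cnn(image, training=False)[0, target_index]
            loss = -activation                     # ascent = minimize negative
        grads = tape.gradient(loss, [z])
        opt.apply_gradients(zip(grads, [z]))       # update z, not the networks
    return generator(z, training=False)
```

Only the latent code is updated; both networks stay frozen, which is what constrains the optimization to the generator's learned manifold of natural-looking images.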

Supported by the Hasler Foundation, project number 16015.




Corresponding author

Correspondence to Jérémie Despraz.


Appendix

See Tables 2 and 3 and Figs. 11, 12, 13 and 14.

Fig. 11: Well-converged selection of images that activate target classes of the VGG-16 classifier (sample #2). Best viewed in color on screen.

Fig. 12: Well-converged selection of images that activate target classes of the VGG-16 classifier (sample #3). Best viewed in color on screen.

Fig. 13: Well-converged selection of images that activate target classes of the VGG-16 classifier (sample #4). Best viewed in color on screen.

Fig. 14: Selection of generated images that maximize activations of the last layer of the VGG-16 classifier (i.e., the layer farthest from the input) (sample #2). Best viewed in color on screen.

Table 2 Detailed encoding network architecture (E)
Table 3 Detailed generative network architecture (G)


Copyright information

© 2019 Springer Nature Switzerland AG


Cite this paper

Despraz, J., Gomez, S., Satizábal, H.F., Peña-Reyes, C.A. (2019). Exploring Internal Representations of Deep Neural Networks. In: Sabourin, C., Merelo, J.J., Madani, K., Warwick, K. (eds) Computational Intelligence. IJCCI 2017. Studies in Computational Intelligence, vol 829. Springer, Cham. https://doi.org/10.1007/978-3-030-16469-0_7
