
Understanding Deep Neural Network by Filter Sensitive Area Generation Network

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNTCS,volume 11301))

Abstract

Deep convolutional networks have recently gained much attention because of their impressive performance on some visual tasks. However, it is still not clear why they achieve such great success. In this paper, a novel approach, the Filter Sensitive Area Generation Network (FSAGN), is proposed to interpret what the convolutional filters have learnt after training a CNN. Given any trained CNN model, the proposed method aims to figure out which object part each filter in a high conv-layer represents, through an appropriate input-image mask that filters out unrelated areas. To obtain such a mask, a mask generation network is designed, and a corresponding loss function is defined to evaluate the change in feature maps before and after the mask operation. Experiments on multiple datasets and networks show that FSAGN clarifies the knowledge representation of each filter and how small disturbances on specific object parts affect the performance of CNNs.
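The objective sketched in the abstract can be illustrated with a minimal NumPy example. This is a hypothetical sketch, not the authors' implementation: `conv2d` stands in for a single trained filter, and `fsagn_style_loss` combines the two terms the abstract implies for a candidate mask, namely a term that keeps the filter's feature map unchanged after masking and a term that penalizes the unmasked area so the mask shrinks toward the filter's sensitive region. All function names and the weighting `lam` are assumptions for illustration.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation, standing in for one conv filter."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def fsagn_style_loss(img, mask, kernel, lam=0.1):
    """Sketch of a mask-evaluation loss: preserve the filter's feature
    map under masking while penalizing the total unmasked area."""
    fm_orig = conv2d(img, kernel)          # feature map before masking
    fm_mask = conv2d(img * mask, kernel)   # feature map after masking
    preserve = np.mean((fm_orig - fm_mask) ** 2)  # change in the feature map
    area = np.mean(mask)                          # area kept by the mask
    return preserve + lam * area
```

With a mask of all ones the preservation term vanishes and only the area penalty remains; with an all-zero mask the feature map is destroyed and the preservation term dominates. A mask generation network, as described in the abstract, would be trained to minimize such a loss, producing masks that retain only the image region a given filter responds to.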



Acknowledgements

This work was supported in part by the National Key Research and Development Program of China (2017YFB1300203), in part by the National Natural Science Foundation of China under Grant 91648205.

Author information

Corresponding author

Correspondence to Hong Qiao.



Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Qian, Y., Qiao, H., Xu, J. (2018). Understanding Deep Neural Network by Filter Sensitive Area Generation Network. In: Cheng, L., Leung, A., Ozawa, S. (eds) Neural Information Processing. ICONIP 2018. Lecture Notes in Computer Science(), vol 11301. Springer, Cham. https://doi.org/10.1007/978-3-030-04167-0_18


  • DOI: https://doi.org/10.1007/978-3-030-04167-0_18


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-04166-3

  • Online ISBN: 978-3-030-04167-0

  • eBook Packages: Computer Science (R0)
