DOI: 10.1145/3345252.3345294

Filters in Convolutional Neural Networks as Independent Detectors of Visual Concepts

Published: 21 June 2019

ABSTRACT

With the development of convolutional neural networks (CNNs), a widespread belief has formed in recent years that only the last convolutional layers, together with the fully connected layers, provide the most essential visual features (visual concepts) of an input image. In this paper, we experimentally refute this belief by demonstrating that all convolutional layers of a CNN contain essential information that influences the classification. Based on the experiments conducted, we treat each filter from the different convolutional layers as an independent detector of a particular visual structure (visual concept), and on this basis we introduce a new notion, the "vector of visual concepts". This vector summarizes which visual-concept detectors are activated by an input image and how strongly they are activated. The classification error obtained from the vector of visual concepts differs by only 0.12% from the result of the last (softmax) layer of the network on the training data domain, which strongly suggests that all layers of the network are informationally saturated. Additionally, we show experimentally that for input data from a domain other than the training one, the features of the initial and intermediate convolutional layers provide a more accurate classification than those of the last convolutional layers.
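The idea of treating each filter as an independent detector can be illustrated with a minimal sketch. The paper does not specify the exact aggregation scheme here, so the following assumes one plausible choice: global max-pooling of each filter's activation map, with the per-filter peaks from all convolutional layers concatenated into a single "vector of visual concepts". The activation tensors are simulated with random data for self-containedness.

```python
import numpy as np

def concept_vector(layer_activations):
    """Build a 'vector of visual concepts': for every conv layer, record
    how strongly each channel (filter) fires on the input, then
    concatenate across all layers. Global max-pooling per channel is an
    assumed aggregation, not necessarily the one used in the paper."""
    parts = []
    for act in layer_activations:            # act shape: (channels, H, W)
        parts.append(act.max(axis=(1, 2)))   # peak response of each filter
    return np.concatenate(parts)

# Toy example: three "conv layers" with 4, 8, and 16 filters on a
# 32x32 input, spatial resolution halving at each layer.
rng = np.random.default_rng(0)
acts = [rng.random((c, 32 // (2 ** i), 32 // (2 ** i)))
        for i, c in enumerate([4, 8, 16])]

v = concept_vector(acts)
print(v.shape)  # (28,) -- one entry per filter across all layers
```

In practice the activations would be captured from a trained network (e.g. via forward hooks), and the resulting vector fed to a simple classifier, which is how a layer-agnostic feature representation like the one described above could be evaluated against the softmax output.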


Published in

CompSysTech '19: Proceedings of the 20th International Conference on Computer Systems and Technologies
June 2019, 365 pages
ISBN: 9781450371490
DOI: 10.1145/3345252

        Copyright © 2019 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


        Qualifiers

        • research-article
        • Research
        • Refereed limited

        Acceptance Rates

Overall acceptance rate: 241 of 492 submissions, 49%
