ABSTRACT
With the development of convolutional neural networks (CNNs), a belief has formed in recent years that only the last convolutional layers, together with the fully connected layers, provide the essential visual features (visual concepts) of an input image. In this paper, we experimentally disprove this prejudice by demonstrating that all convolutional layers of a CNN contain essential information that influences classification. Motivated by the experiments conducted, we regard each filter of each convolutional layer as an independent detector of a particular visual structure (visual concept), and on this basis we introduce a new notion, the "vector of visual concepts". This vector summarizes which visual concept detectors an input image activates and how strongly it activates them. The classification error obtained with the vector of visual concepts differs by only 0.12% from that of the last (softmax) layer of the network on the training data domain, which strongly suggests that all layers of the network are saturated with useful information. Additionally, we show experimentally that, for input data from a domain other than the training one, the features of the initial and intermediate convolutional layers provide more accurate classification than those of the last convolutional layers.
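The paper's exact construction of the vector of visual concepts is not reproduced on this page. As a minimal illustrative sketch (not the authors' implementation), one natural way to realize "each filter is an independent detector" is to take, for every filter in every convolutional layer, a single detection score, e.g. the maximum activation over the filter's spatial feature map, and concatenate these scores across all layers. The function name `concept_vector` and the use of max-pooling as the detection score are assumptions for illustration:

```python
import numpy as np

def concept_vector(feature_maps):
    """Build one detection score per filter and concatenate across layers.

    feature_maps: list of per-layer activation arrays, each shaped
    (num_filters, H, W). The score of a filter is the maximum activation
    of its feature map over all spatial positions, i.e. how strongly the
    "detector" fired anywhere in the image.
    """
    scores = [fm.reshape(fm.shape[0], -1).max(axis=1) for fm in feature_maps]
    return np.concatenate(scores)

# Toy example: two "layers" with 4 and 8 filters respectively.
rng = np.random.default_rng(0)
maps = [rng.random((4, 8, 8)), rng.random((8, 4, 4))]
v = concept_vector(maps)
print(v.shape)  # one entry per filter: 4 + 8 = 12
```

In a real CNN the per-layer activations would be captured at inference time (e.g. via forward hooks in a deep learning framework), and the resulting vector could then be fed to a simple classifier, which is the kind of comparison against the softmax output that the abstract reports.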
Index Terms
- Filters in Convolutional Neural Networks as Independent Detectors of Visual Concepts