
Support Vector Machine Histogram: New Analysis and Architecture Design Method of Deep Convolutional Neural Network


Abstract

A deep convolutional neural network (DCNN) is a hierarchical neural network model that has attracted attention in recent years owing to its high classification performance. Through learning, a DCNN acquires feature representations, that is, parameters that capture the features of the input. However, the internal behavior of DCNNs and the design of their network architectures still involve many unclear points and cannot be said to be sufficiently elucidated. To address these problems, we propose a novel DCNN analysis method, the "support vector machine (SVM) histogram," which examines the spatial distribution of the feature representations extracted by a DCNN using the decision boundary of a linear SVM. We show that this method allows the hierarchical processing of a DCNN to be interpreted. Furthermore, the results of the SVM histogram can be used to design DCNN architectures. In this study, we designed an architecture for a large-scale natural image dataset and achieved higher accuracy than the original DCNN.
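
The SVM histogram is summarized here only at a high level. As a rough illustration of the underlying idea, the sketch below trains a linear SVM on feature vectors taken from a single network layer and histograms the signed distances of those features to the learned decision boundary; the per-class shape and overlap of these histograms indicate how separable the layer's representation is. This is our own minimal sketch using scikit-learn and synthetic placeholder features, not the authors' implementation; in practice the features would be activations extracted from an intermediate DCNN layer.

```python
# Minimal sketch of an "SVM histogram"-style analysis (assumed workflow, not the
# authors' code): fit a linear SVM on per-layer feature vectors, then histogram
# the signed distances of each sample to the decision boundary, per class.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Placeholder "layer activations" for a two-class problem
# (e.g., flattened outputs of one convolutional layer).
features = np.vstack([rng.normal(-1.0, 1.0, size=(200, 256)),
                      rng.normal(+1.0, 1.0, size=(200, 256))])
labels = np.array([0] * 200 + [1] * 200)

# Fit a linear SVM on the extracted features.
svm = LinearSVC(C=1.0, max_iter=10000)
svm.fit(features, labels)

# decision_function returns w.x + b; dividing by ||w|| gives the signed
# distance of each sample to the decision boundary.
margins = svm.decision_function(features) / np.linalg.norm(svm.coef_)

# Histogram the distances for each class: well-separated, narrow histograms
# suggest the layer's representation discriminates the classes cleanly.
for cls in (0, 1):
    counts, edges = np.histogram(margins[labels == cls], bins=30)
    print(f"class {cls}: counts={counts}")
```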



Acknowledgements

This work was partly supported by MEXT/JSPS KAKENHI Grant Numbers 26120515 and 16H01542. We thank Prof. Kazuyuki Hara of Nihon University and Aiga Suzuki of the University of Electro-Communications for fruitful discussions.

Author information


Corresponding author

Correspondence to Hayaru Shouno.


Cite this article

Suzuki, S., Shouno, H. Support Vector Machine Histogram: New Analysis and Architecture Design Method of Deep Convolutional Neural Network. Neural Process Lett 47, 767–782 (2018). https://doi.org/10.1007/s11063-017-9652-0

