A Convolutional Fuzzy Min-Max Neural Network for Image Classification

  • Conference paper
  • First Online:

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1148)

Abstract

The convolutional neural network (CNN) is a well-established choice for image classification. To learn new classes without forgetting previously learned ones, CNN models are trained in an offline manner, which involves re-training the network on both the previously seen and the newly arrived data samples. However, such re-training is very time-consuming. This problem is addressed by the proposed convolutional fuzzy min-max neural network (CFMNN), which avoids the re-training process. In CFMNN, online learning ability is added to the network by introducing the idea of hyperbox fuzzy sets to CNNs. To evaluate the performance of CFMNN, benchmark datasets such as MNIST, Caltech-101 and CIFAR-100 are used. The experimental results show that a drastic reduction in training time is achieved for online learning with CFMNN. Moreover, the proposed CFMNN achieves comparable or better accuracy than existing methods.
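
To make the hyperbox idea in the abstract concrete, the following minimal Python sketch shows how a fuzzy min-max layer placed on top of CNN feature vectors can absorb new samples, including samples of previously unseen classes, without any gradient-based re-training. The exact CFMNN formulation is not reproduced on this page, so the sketch assumes Simpson-style membership and expansion rules, omits the overlap/contraction step, and uses illustrative names and parameters (FuzzyMinMaxLayer, gamma, theta) rather than the paper's own.

import numpy as np

class FuzzyMinMaxLayer:
    """Hyperbox classifier over CNN features scaled to [0, 1] (illustrative sketch)."""

    def __init__(self, gamma=4.0, theta=0.3):
        self.gamma = gamma                        # membership sensitivity
        self.theta = theta                        # maximum allowed hyperbox size
        self.V, self.W, self.labels = [], [], []  # hyperbox min points, max points, class labels

    def _membership(self, x):
        # Simpson-style membership: 1 inside the box, decaying with distance outside it.
        V, W = np.array(self.V), np.array(self.W)
        over = np.maximum(0.0, 1.0 - np.maximum(0.0, self.gamma * np.minimum(1.0, x - W)))
        under = np.maximum(0.0, 1.0 - np.maximum(0.0, self.gamma * np.minimum(1.0, V - x)))
        return 0.5 * (over + under).mean(axis=1)

    def learn(self, x, label):
        # Online update: expand a same-class hyperbox if it stays within size theta,
        # otherwise create a new point hyperbox (this is also how a new class is added).
        x = np.asarray(x, dtype=float)
        for j, c in enumerate(self.labels):
            if c != label:
                continue
            new_v, new_w = np.minimum(self.V[j], x), np.maximum(self.W[j], x)
            if (new_w - new_v).sum() <= self.theta * x.size:
                self.V[j], self.W[j] = new_v, new_w
                return
        self.V.append(x.copy()); self.W.append(x.copy()); self.labels.append(label)

    def predict(self, x):
        # Classify by the label of the hyperbox with the highest membership value.
        return self.labels[int(np.argmax(self._membership(np.asarray(x, dtype=float))))]

# Toy usage with synthetic "features"; in CFMNN these would come from a CNN backbone.
rng = np.random.default_rng(0)
clf = FuzzyMinMaxLayer()
for _ in range(50):
    clf.learn(rng.random(8) * 0.3, label=0)        # class 0 near the origin
    clf.learn(0.7 + rng.random(8) * 0.3, label=1)  # class 1 near the opposite corner
print(clf.predict(np.full(8, 0.1)), clf.predict(np.full(8, 0.9)))  # expected: 0 1

Because learn either stretches an existing hyperbox or appends a new one, adding a fresh class is a constant-time bookkeeping step rather than a re-training pass over old and new data, which is the source of the training-time reduction reported in the abstract.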

Author information

Corresponding author

Correspondence to Trupti R. Chavan.

Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Chavan, T.R., Nandedkar, A.V. (2020). A Convolutional Fuzzy Min-Max Neural Network for Image Classification. In: Nain, N., Vipparthi, S., Raman, B. (eds) Computer Vision and Image Processing. CVIP 2019. Communications in Computer and Information Science, vol 1148. Springer, Singapore. https://doi.org/10.1007/978-981-15-4018-9_10

  • DOI: https://doi.org/10.1007/978-981-15-4018-9_10

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-4017-2

  • Online ISBN: 978-981-15-4018-9

  • eBook Packages: Computer Science, Computer Science (R0)
