Research Article · DOI: 10.1145/3549555.3549590

Streaming learning with Move-to-Data approach for image classification

Published: 07 October 2022

Abstract

In Deep Neural Network training, the availability of a large amount of representative training data is a sine qua non condition for good generalization capacity of the model. In many real-world applications, data is not available all at once but arrives on the fly. If a pre-trained model is simply fine-tuned on the new data, catastrophic forgetting usually occurs. Incremental learning mechanisms propose ways to overcome catastrophic forgetting. Streaming learning is a type of incremental learning in which the model learns from each new data instance as soon as it becomes available, in a single training pass. In this work, we conduct an experimental study on a large dataset of Move-to-Data, an incremental/streaming learning method we previously proposed, and introduce an updated approach that performs the "re-targeting" step with gradient descent. The method achieves better performance and computational efficiency than the popular streaming learning method ExStream: Move-to-Data with gradient is on average 3.5 times faster and reaches similar accuracy, with a 0.5% improvement over ExStream.
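For intuition about the mechanism: Move-to-Data keeps the pre-trained convolutional backbone fixed and updates only the final classification layer, moving the weight vector of the incoming sample's class toward that sample's feature representation; the "re-targeting" variant replaces this move with a single gradient-descent step on the last layer. The sketch below is a minimal PyTorch illustration under that reading of the abstract, not the authors' reference implementation; the step size `epsilon`, the learning rate `lr`, the feature normalization, and all function names are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.no_grad()
def move_to_data_update(weight, feature, label, epsilon=0.01):
    """One streaming update without backprop: nudge the last-layer
    weight vector of the true class toward the sample's feature
    direction. `epsilon` (step size) is an assumed hyper-parameter."""
    v = F.normalize(feature, dim=0)          # unit-norm feature direction
    weight[label] += epsilon * (v - weight[label])

def retarget_with_gradient(head, feature, label, lr=0.01):
    """'Re-targeting' with gradient descent: a single SGD step on the
    final linear layer only, for one incoming (feature, label) pair."""
    logits = head(feature.unsqueeze(0))      # shape (1, num_classes)
    loss = F.cross_entropy(logits, torch.tensor([label]))
    loss.backward()
    with torch.no_grad():
        for p in head.parameters():
            p -= lr * p.grad                 # manual SGD step
            p.grad = None

# Toy streaming loop; in practice each `feature` would come from the
# frozen CNN backbone, one sample at a time, in a single pass.
feature_dim, num_classes = 512, 100
head = nn.Linear(feature_dim, num_classes)
stream = [(torch.randn(feature_dim), 3), (torch.randn(feature_dim), 42)]
for feature, label in stream:
    retarget_with_gradient(head, feature, label)      # gradient variant
    move_to_data_update(head.weight, feature, label)  # Move-to-Data variant
```

Because only the last layer changes, the earlier feature representations cannot drift, which is one way such per-sample updates can sidestep catastrophic forgetting at far lower cost than a full fine-tuning pass.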

References

[1] Francisco M. Castro, Manuel J. Marín-Jiménez, Nicolás Guil, Cordelia Schmid, and Karteek Alahari. 2018. End-to-End Incremental Learning. In Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part XII (Lecture Notes in Computer Science, Vol. 11216), Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu, and Yair Weiss (Eds.). Springer, 241–257. https://doi.org/10.1007/978-3-030-01258-8_15
[2] Anuvabh Dutt, Denis Pellerin, and Georges Quénot. 2017. Improving Hierarchical Image Classification with Merged CNN Architectures. In CBMI. ACM, 31:1–31:7.
[3] Robert M. French. 1999. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences 3, 4 (1999), 128–135.
[4] Tommaso Furlanello, Jiaping Zhao, Andrew M. Saxe, Laurent Itti, and Bosco S. Tjan. 2016. Active long term memory networks. arXiv preprint arXiv:1606.02355 (2016).
[5] Tyler L. Hayes, Nathan D. Cahill, and Christopher Kanan. 2019. Memory efficient experience replay for streaming learning. In 2019 International Conference on Robotics and Automation (ICRA). IEEE, 9769–9776.
[6] Tyler L. Hayes, Kushal Kafle, Robik Shrestha, Manoj Acharya, and Christopher Kanan. 2020. REMIND your neural network to prevent catastrophic forgetting. In European Conference on Computer Vision. Springer, 466–483.
[7] Tyler L. Hayes and Christopher Kanan. 2020. Lifelong machine learning with deep streaming linear discriminant analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 220–221.
[8] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 (2015).
[9] Heechul Jung, Jeongwoo Ju, Minju Jung, and Junmo Kim. 2016. Less-forgetting learning in deep neural networks. arXiv preprint arXiv:1607.00122 (2016).
[10] Alex Krizhevsky and Geoffrey Hinton. 2009. Learning multiple layers of features from tiny images. (2009).
[11] Ya Le and Xuan Yang. 2015. Tiny ImageNet visual recognition challenge. CS 231N 7, 7 (2015), 3.
[12] Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. 2018. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. Advances in Neural Information Processing Systems 31 (2018).
[13] Zhizhong Li and Derek Hoiem. 2017. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence 40, 12 (2017), 2935–2947.
[14] Edwin Lughofer. 2008. Extensions of vector quantization for incremental clustering. Pattern Recognition 41, 3 (2008), 995–1011.
[15] Miltiadis Poursanidis, Jenny Benois-Pineau, Akka Zemmari, Boris Mansenca, and Aymar de Rugy. 2020. Move-to-Data: A new Continual Learning approach with Deep CNNs, Application for image-class recognition. arXiv preprint arXiv:2006.07152 (2020).
[16] Amal Rannen, Rahaf Aljundi, Matthew B. Blaschko, and Tinne Tuytelaars. 2017. Encoder based lifelong learning. In Proceedings of the IEEE International Conference on Computer Vision. 1320–1328.
[17] Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H. Lampert. 2017. iCaRL: Incremental classifier and representation learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2001–2010.
[18] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115, 3 (2015), 211–252.
[19] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. 2014. How transferable are features in deep neural networks? Advances in Neural Information Processing Systems 27 (2014).

Cited By

  • (2023) Entropy-based Sampling for Streaming learning with Move-to-Data approach on Video. In Proceedings of the 20th International Conference on Content-based Multimedia Indexing, 21–27. https://doi.org/10.1145/3617233.3617240. Online publication date: 20 September 2023.


      Information & Contributors

      Information

      Published In

      cover image ACM Other conferences
      CBMI '22: Proceedings of the 19th International Conference on Content-based Multimedia Indexing
      September 2022
      208 pages
      ISBN:9781450397209
      DOI:10.1145/3549555
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Author Tags

      1. ExStream
      2. Move-to-Data
      3. Streaming learning
      4. convolutional neural network
      5. incremental learning

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

      CBMI 2022
