
Computationally Efficient Rehearsal for Online Continual Learning

Conference paper published in Image Analysis and Processing – ICIAP 2022 (ICIAP 2022).

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13233)

Abstract

Continual learning is a crucial ability for learning systems that must adapt to changing data distributions without degrading their performance on what they have already learned. Rehearsal methods offer a simple countermeasure against catastrophic forgetting, which frequently occurs in dynamic settings and is a major limitation of machine learning models. These methods continuously train neural networks on a mix of data from the stream and from a rehearsal buffer that retains past training samples. Although the rehearsal approach is reasonable and simple to implement, its effectiveness and efficiency are significantly affected by several hyperparameters, such as the number of training iterations performed at each step, the learning rate, and whether to retrain the model at each step. These choices are especially important in the resource-constrained environments common in online continual learning for image analysis. This work evaluates several rehearsal training strategies for online continual learning and proposes the combined use of a drift detector that decides (a) when to train using data from the buffer and the online stream, and (b) how to train, based on a combination of heuristics. Experiments on the MNIST and CIFAR-10 image classification datasets demonstrate the effectiveness of the proposed approach over baseline training strategies at a fraction of the computational cost.
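As a rough illustration of the mechanism described above, the following sketch pairs a reservoir-sampled rehearsal buffer with an exponentially weighted moving average (EWMA) of the stream loss standing in for the drift detector, which then gates both when to train and how many iterations to run. This is a minimal sketch under assumed details, not the paper's implementation: the class and function names, the EWMA statistic, the threshold, and the iteration counts are all illustrative assumptions.

```python
# Minimal sketch of drift-gated rehearsal training, assuming PyTorch.
# All names and thresholds below are illustrative assumptions, not the
# paper's actual implementation or heuristics.

import random

import torch
import torch.nn.functional as F


class RehearsalBuffer:
    """Fixed-capacity buffer of past (x, y) samples using reservoir sampling."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []   # list of (x, y) tensor pairs
        self.seen = 0    # total stream samples observed so far

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling: every sample seen so far stays in the
            # buffer with equal probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)


def ewma_update(stat, value, lam=0.1):
    """Exponentially weighted moving average of the stream loss,
    loosely in the spirit of EWMA drift-detection charts."""
    return (1.0 - lam) * stat + lam * value


def train_step(model, opt, x_stream, y_stream, buffer, n_iters=1):
    """One rehearsal step: train on the incoming batch mixed with buffer data."""
    model.train()
    for _ in range(n_iters):
        if buffer.data:
            x_buf, y_buf = buffer.sample(len(x_stream))
            x = torch.cat([x_stream, x_buf])
            y = torch.cat([y_stream, y_buf])
        else:
            x, y = x_stream, y_stream   # buffer still empty at stream start
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        opt.step()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = torch.nn.Linear(784, 10)     # toy stand-in for the paper's networks
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    buffer = RehearsalBuffer(capacity=500)
    ewma, threshold = 0.0, 2.0           # threshold chosen arbitrarily here

    for step in range(20):               # simulated MNIST-shaped stream
        x = torch.randn(32, 784)
        y = torch.randint(0, 10, (32,))
        with torch.no_grad():
            ewma = ewma_update(ewma, F.cross_entropy(model(x), y).item())
        # One possible "when/how" heuristic: train lightly in the absence of
        # drift, run more iterations when the smoothed stream loss is high.
        n_iters = 5 if ewma > threshold else 1
        train_step(model, opt, x, y, buffer, n_iters=n_iters)
        for xi, yi in zip(x, y):
            buffer.add(xi, yi)
```

The reservoir scheme keeps every stream sample in the buffer with equal probability, a common choice for rehearsal buffers; the paper's specific buffer policy, detector, and training heuristics are described in its full text.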



Acknowledgment

This work is supported by the “TEACHING” project, which has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 871385. The work reflects only the authors’ view, and the EU Agency is not responsible for any use that may be made of the information it contains.

Author information

Correspondence to Charalampos Davalas.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Davalas, C., Michail, D., Diou, C., Varlamis, I., Tserpes, K. (2022). Computationally Efficient Rehearsal for Online Continual Learning. In: Sclaroff, S., Distante, C., Leo, M., Farinella, G.M., Tombari, F. (eds) Image Analysis and Processing – ICIAP 2022. ICIAP 2022. Lecture Notes in Computer Science, vol 13233. Springer, Cham. https://doi.org/10.1007/978-3-031-06433-3_4


  • DOI: https://doi.org/10.1007/978-3-031-06433-3_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-06432-6

  • Online ISBN: 978-3-031-06433-3

  • eBook Packages: Computer Science, Computer Science (R0)
