Abstract
Evaluating incremental classification algorithms is a complex task because there are many aspects to assess. Besides aspects such as accuracy and generalization, which are usually evaluated in the context of classification, we also need to assess how the algorithm handles the two main challenges of incremental learning: concept drift and catastrophic forgetting. However, the current methodology evaluates only catastrophic forgetting, testing the classifier in two scenarios: class addition and class expansion. We generalize this methodology by proposing two new incremental-learning scenarios, class inclusion and class separation, which evaluate the handling of concept drift. We demonstrate the proposed methodology on the evaluation of three different incremental classifiers and show that it provides a more complete and finer-grained evaluation.
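The abstract contrasts scenarios that probe catastrophic forgetting (new classes appear) with scenarios that probe concept drift (the labeling of existing data changes). The sketch below is a hypothetical illustration of that distinction, not the paper's actual protocol: it assumes "class addition" means a second training phase introduces a previously unseen class, and "class inclusion" means a second phase relabels one class's samples as an existing class, so that p(y|x) drifts. The function names and the toy data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labelled data: 1-D features drawn around three class centres, labels in {0, 1, 2}.
X = rng.normal(loc=np.repeat([0.0, 3.0, 6.0], 100), scale=0.5).reshape(-1, 1)
y = np.repeat([0, 1, 2], 100)

def addition_split(X, y, old={0, 1}, new={2}):
    """Forgetting-style scenario: phase 2 introduces a previously unseen class.
    Accuracy on the phase-1 classes after phase-2 training measures forgetting."""
    m1 = np.isin(y, list(old))
    m2 = np.isin(y, list(new))
    return (X[m1], y[m1]), (X[m2], y[m2])

def inclusion_split(X, y, absorbed=2, into=1):
    """Drift-style scenario (assumed reading of 'class inclusion'): in phase 2
    the samples of one class are relabelled as an existing class, so the same
    inputs now demand a different output."""
    y2 = y.copy()
    y2[y2 == absorbed] = into
    half = len(y) // 2
    return (X[:half], y[:half]), (X[half:], y2[half:])
```

An evaluation harness would train on the first split, record per-class accuracy, continue training on the second split, and re-evaluate on the first; the two split constructions stress the classifier in different ways even though the underlying data is the same.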
Notes
- 1.
In a metric space \(X\), a neighborhood of a point \(\mathbf{x}\) is defined as the open ball of radius \(r\) centered at \(\mathbf{x}\): \(\mathcal{B}_d(\mathbf{x}, r) = \{\mathbf{y} \in X \mid d(\mathbf{x}, \mathbf{y}) < r\}\), where \(d\) is the metric. A close neighborhood is a neighborhood with a very small radius \(r\).
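The ball membership test in the note is a strict inequality (an open ball), which translates directly into code. This is a minimal sketch; the function name and the choice of Euclidean distance as the default metric are illustrative.

```python
import math

def in_neighborhood(x, y, r, d=math.dist):
    """Test whether y lies in the open ball B_d(x, r) = {y : d(x, y) < r}.

    d is any metric function taking two points; math.dist (Euclidean
    distance) is used as the default.
    """
    return d(x, y) < r
```

Note the strict `<`: a point exactly at distance \(r\) from \(\mathbf{x}\) is not in the neighborhood.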
Acknowledgments
This work was supported by the Czech Science Foundation (GAČR) under research project No. 18-18858S.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Szadkowski, R., Drchal, J., Faigl, J. (2019). Basic Evaluation Scenarios for Incrementally Trained Classifiers. In: Tetko, I., Kůrková, V., Karpov, P., Theis, F. (eds) Artificial Neural Networks and Machine Learning – ICANN 2019: Deep Learning. ICANN 2019. Lecture Notes in Computer Science(), vol 11728. Springer, Cham. https://doi.org/10.1007/978-3-030-30484-3_41
DOI: https://doi.org/10.1007/978-3-030-30484-3_41
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-30483-6
Online ISBN: 978-3-030-30484-3
eBook Packages: Computer Science, Computer Science (R0)