
An Initial Parameter Search for Rapid Concept Drift Adaptation in Deep Neural Networks

  • Conference paper
  • First Online:
Proceedings of the 12th International Conference on Soft Computing and Pattern Recognition (SoCPaR 2020) (SoCPaR 2020)

Abstract

Concept drift is a common issue in data stream mining: the non-stationarity of the data distribution and decision boundaries causes prediction models to lose their original performance, either gradually or abruptly. To combat concept drift, prediction models must be updated periodically, or whenever drift occurs, so that they adapt to the current concept. Unfortunately, training deep neural networks often requires a large number of data samples and substantial computational resources, making adaptation slow when concept drift occurs. This paper proposes an approach that searches for an optimum initial parameter that can be adapted quickly to all possible concept drift situations. The initial parameter search is based on the Reptile [1] algorithm, which has been applied successfully in image classification and allows a neural network to learn from a few samples in minimal gradient steps. We argue that an optimum initial parameter embeds prior information and makes the prediction model less reliant on training exclusively from new data when concept drift occurs. Experimental results show that this approach performs at least as well as current data stream algorithms while incurring the lowest computational overhead.
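To make the abstract's idea concrete, the sketch below applies the first-order Reptile outer update of Nichol et al. [1], theta_Meta ← theta_Meta + eps * (theta_task − theta_Meta), to a toy one-parameter regression whose "concept" (the slope of a linear target) drifts from task to task. The task sampler, function names, and step sizes are illustrative assumptions for this sketch, not the paper's actual experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_concept():
    # Hypothetical drifting "concept": a 1-D linear target y = a * x whose
    # slope a changes from concept to concept.
    a = rng.uniform(0.5, 1.5)
    x = rng.uniform(-1.0, 1.0, size=20)
    return x, a * x

def adapt(theta, x, y, steps=5, lr=0.1):
    # Inner loop: a few SGD steps on the current concept
    # (squared error for a single scalar weight).
    for _ in range(steps):
        grad = 2.0 * np.mean((theta * x - y) * x)
        theta -= lr * grad
    return theta

def reptile_init(meta_iters=200, meta_lr=0.5):
    # Outer loop: nudge the shared initial parameter theta_Meta toward
    # each concept's adapted weights -- the first-order Reptile update.
    theta_meta = 0.0
    for _ in range(meta_iters):
        x, y = sample_concept()
        theta_task = adapt(theta_meta, x, y)
        theta_meta += meta_lr * (theta_task - theta_meta)
    return theta_meta
```

Because the slope is drawn from a fixed range, `reptile_init` settles near the centre of that range, so only a handful of inner SGD steps are needed to fit whichever concept arrives next; this is the intuition behind embedding prior information in the initial parameter.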


Notes

  1. This optimization procedure can be regarded as a dummy optimization; it is not used to search for the optimum initial parameter \({\theta }_{Meta}\).

References

  1. Nichol, A., Achiam, J., Schulman, J.: On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999 (2018)

  2. Bifet, A., Holmes, G., Pfahringer, B., Kranen, P., Kremer, H., Jansen, T., Seidl, T.: MOA: massive online analysis, a framework for stream classification and clustering. In: Proceedings of the First Workshop on Applications of Pattern Analysis, pp. 44–50. PMLR (2010)

  3. Vapnik, V.: The Nature of Statistical Learning Theory. Springer Science & Business Media (2013)

  4. Cabral, D., Barros, R.: Concept drift detection based on Fisher’s exact test. Inf. Sci. 442–443, 220–234 (2018). https://doi.org/10.1016/j.ins.2018.02.054

  5. Barros, R.S.M.d., Santos, S.G.T.d.C.: An overview and comprehensive comparison of ensembles for concept drift. Inf. Fusion 52, 213–244 (2019). https://doi.org/10.1016/j.inffus.2019.03.006

  6. Gomes, H.M., Bifet, A., Read, J., Barddal, J.P., Enembreck, F., Pfharinger, B., Holmes, G., Abdessalem, T.: Adaptive random forests for evolving data stream classification. Mach. Learn. 106(9–10), 1469–1495 (2017)


  7. Shen, Y., Zhu, Y., Du, J., Chen, Y.: A fast Learn++.NSE classification algorithm based on weighted moving average. Filomat 32(5), 1737–1745 (2018)

  8. Sun, Y., Tang, K., Zhu, Z., Yao, X.: Concept drift adaptation by exploiting historical knowledge. IEEE Trans. Neural Networks Learn. Syst. 29(10), 4822–4832 (2018)


  9. Khamassi, I., Sayed-Mouchaweh, M., Hammami, M., Ghédira, K.: Discussion and review on evolving data streams and concept drift adapting. Evol. Syst. 9(1), 1–23 (2018)


  10. Yang, H., Fong, S., Sun, G., Wong, R.: A very fast decision tree algorithm for real-time data mining of imperfect data streams in a distributed wireless sensor network. Int. J. Distrib. Sens. Netw. 8(12), 863545 (2012). https://doi.org/10.1155/2012/863545


  11. Oza, N.C.: Online bagging and boosting. In: 2005 IEEE International Conference on Systems, Man and Cybernetics 2005, pp. 2340–2345. IEEE (2005)


  12. Bifet, A., Zhang, J., Fan, W., He, C., Zhang, J., Qian, J., Holmes, G., Pfahringer, B.: Extremely fast decision tree mining for evolving data streams. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2017, pp. 1733–1742 (2017)


  13. Manapragada, C., Webb, G.I., Salehi, M.: Extremely fast decision tree. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1953–1962 (2018)


  14. Killick, R., Fearnhead, P., Eckley, I.A.: Optimal detection of changepoints with a linear computational cost. J. Am. Stat. Assoc. 107(500), 1590–1598 (2012)


  15. Frías-Blanco, I., del Campo-Ávila, J., Ramos-Jimenez, G., Morales-Bueno, R., Ortiz-Díaz, A., Caballero-Mota, Y.: Online and non-parametric drift detection methods based on Hoeffding’s bounds. IEEE Trans. Knowl. Data Eng. 27(3), 810–823 (2014)


  16. Kosina, P., Gama, J.: Very fast decision rules for classification in data streams. Data Min. Knowl. Disc. 29(1), 168–202 (2015)


  17. Cao, P., Liu, X., Zhang, J., Zhao, D., Huang, M., Zaiane, O.: ℓ2,1-norm regularized multi-kernel based joint nonlinear feature selection and over-sampling for imbalanced data classification. Neurocomputing 234, 38–57 (2017)

  18. Hong, X., Chen, S., Harris, C.J.: A kernel-based two-class classifier for imbalanced data sets. IEEE Trans. Neural Networks 18(1), 28–41 (2007)


  19. Yuan, X., Xie, L., Abouelenien, M.: A regularized ensemble framework of deep learning for cancer detection from multi-class, imbalanced training data. Pattern Recogn. 77, 160–172 (2018)



Author information

Corresponding author

Correspondence to Muhammad Zafran Bin Muhammad Zaly Shah.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Shah, M.Z.B.M.Z., Zainal, A.B. (2021). An Initial Parameter Search for Rapid Concept Drift Adaptation in Deep Neural Networks. In: Abraham, A., et al. Proceedings of the 12th International Conference on Soft Computing and Pattern Recognition (SoCPaR 2020). SoCPaR 2020. Advances in Intelligent Systems and Computing, vol 1383. Springer, Cham. https://doi.org/10.1007/978-3-030-73689-7_4

