
Depth and Width Adaption of DNN for Data Stream Classification with Concept Drifts

  • Conference paper
Intelligent Data Engineering and Automated Learning – IDEAL 2023 (IDEAL 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14404)

Abstract

To handle data stream classification with concept drifts, recent studies have shown that a continuously evolving network structure can achieve better performance. However, these methods have two shortcomings. First, they change only one hidden node at a time, which is not enough to alleviate underfitting or overfitting of the network model, so the model cannot fit the data well. Second, during the growth of the network they do not consider removing hidden layers, which limits the learning ability of deep neural network models. To overcome these shortcomings, an adaptive neural network structure (ANSN) is proposed to handle data stream classification with concept drifts. ANSN has a completely open structure: its network structure, depth, and width can evolve automatically in online mode. Experimental results on ten popular data stream datasets show that the proposed ANSN outperforms the comparison methods. The code of the proposed algorithm is available at https://gitee.com/ymw12345/ansn.git.
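
The abstract describes depth and width adaptation only at a high level. As a rough illustration of what such an adaptive network can look like, the sketch below shows a feed-forward classifier in PyTorch whose hidden layers can be widened, deepened, or shrunk while it is trained online. Everything in it, the class name AdaptiveMLP, the growth and pruning methods, and the initial sizes, is an illustrative assumption and is not taken from the paper; the authors' actual algorithm is in the repository linked above.

```python
# A minimal sketch, NOT the authors' ANSN implementation: a feed-forward
# classifier whose width and depth can change while it trains on a stream.
# All names, sizes, and methods are illustrative assumptions; the real
# algorithm is in the repository at https://gitee.com/ymw12345/ansn.git.
import torch
import torch.nn as nn


class AdaptiveMLP(nn.Module):
    def __init__(self, in_dim: int, n_classes: int, hidden: int = 16):
        super().__init__()
        # Start small: one hidden layer; depth and width evolve later.
        self.hidden_layers = nn.ModuleList([nn.Linear(in_dim, hidden)])
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.hidden_layers:
            x = torch.relu(layer(x))
        return self.out(x)

    def grow_width(self, n_new: int) -> None:
        """Add n_new units to the last hidden layer, keeping the old weights."""
        old = self.hidden_layers[-1]
        new = nn.Linear(old.in_features, old.out_features + n_new)
        new_out = nn.Linear(new.out_features, self.out.out_features)
        with torch.no_grad():
            new.weight[: old.out_features] = old.weight
            new.bias[: old.out_features] = old.bias
            new_out.weight[:, : old.out_features] = self.out.weight
            new_out.bias.copy_(self.out.bias)
        self.hidden_layers[-1] = new
        self.out = new_out

    def grow_depth(self, hidden: int = 16) -> None:
        """Stack a fresh hidden layer on top of the existing ones."""
        prev = self.hidden_layers[-1].out_features
        self.hidden_layers.append(nn.Linear(prev, hidden))
        self.out = nn.Linear(hidden, self.out.out_features)

    def shrink_depth(self) -> None:
        """Drop the top hidden layer, e.g. when recent loss suggests overfitting."""
        if len(self.hidden_layers) > 1:
            del self.hidden_layers[-1]
            self.out = nn.Linear(self.hidden_layers[-1].out_features,
                                 self.out.out_features)
```

In an online setting such a model would typically be evaluated prequentially, scoring each incoming mini-batch before updating on it, and the grow/shrink methods would be triggered by a drift or bias-variance signal; the specific criteria ANSN uses are defined in the paper, not in this sketch.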


Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (62366011), the Key R&D Program of Guangxi under Grant AB21220023, and the Guangxi Key Laboratory of Image and Graphic Intelligent Processing (GIIP2306).

Author information

Corresponding author

Correspondence to YiMin Wen.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zhou, X., Liu, X., Wen, Y. (2023). Depth and Width Adaption of DNN for Data Stream Classification with Concept Drifts. In: Quaresma, P., Camacho, D., Yin, H., Gonçalves, T., Julian, V., Tallón-Ballesteros, A.J. (eds) Intelligent Data Engineering and Automated Learning – IDEAL 2023. IDEAL 2023. Lecture Notes in Computer Science, vol 14404. Springer, Cham. https://doi.org/10.1007/978-3-031-48232-8_37

  • DOI: https://doi.org/10.1007/978-3-031-48232-8_37

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-48231-1

  • Online ISBN: 978-3-031-48232-8

  • eBook Packages: Computer Science, Computer Science (R0)
