Convolutional Recurrent Neural Networks for Computer Network Analysis

  • Conference paper
  • First Online:
Artificial Neural Networks and Machine Learning – ICANN 2019: Text and Time Series (ICANN 2019)

Abstract

This paper proposes a method for identifying computer-network users with recurrent neural networks. We use long short-term memory (LSTM) and gated recurrent unit (GRU) networks. To present URLs from computer-network sessions to these networks, we add convolutional input layers and encode the requested URLs with character-level one-hot encoding. We provide a detailed analysis and comparison of experiments with both architectures. The system was evaluated on real network data collected in a local municipal network. It can classify network users and hence can also detect anomalies and security compromises.
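The character-level one-hot encoding mentioned in the abstract can be sketched as follows. This is a minimal illustration only: the alphabet, the maximum URL length, and the padding scheme are assumptions for the example, not the authors' exact configuration.

```python
# Character-level one-hot encoding of URLs (illustrative sketch).
# ALPHABET and max_len are assumptions, not the paper's actual settings.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-._~:/?#[]@!$&'()*+,;=%"
CHAR_INDEX = {c: i for i, c in enumerate(ALPHABET)}

def one_hot_url(url, max_len=64):
    """Encode a URL as a (max_len x |ALPHABET|) binary matrix.

    URLs are lower-cased and truncated to max_len; shorter URLs are
    zero-padded so every sample has the same fixed shape, which lets a
    convolutional input layer consume batches of encoded URLs directly.
    Characters outside the alphabet map to an all-zero row.
    """
    matrix = [[0] * len(ALPHABET) for _ in range(max_len)]
    for pos, ch in enumerate(url.lower()[:max_len]):
        idx = CHAR_INDEX.get(ch)
        if idx is not None:
            matrix[pos][idx] = 1
    return matrix

encoded = one_hot_url("example.com/index.html")
```

Each encoded URL is then a fixed-size 2-D array, the same kind of input shape a 1-D convolutional layer expects before its output is passed on to an LSTM or GRU layer.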



Acknowledgements

The project was financed under the program of the Minister of Science and Higher Education under the name “Regional Initiative of Excellence” in the years 2019–2022, project number 020/RID/2018/19, with financing of 12,000,000 PLN.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Rafał Scherer.

Editor information

Editors and Affiliations

Rights and permissions

Reprints and permissions

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Nowak, J., Korytkowski, M., Scherer, R. (2019). Convolutional Recurrent Neural Networks for Computer Network Analysis. In: Tetko, I., Kůrková, V., Karpov, P., Theis, F. (eds) Artificial Neural Networks and Machine Learning – ICANN 2019: Text and Time Series. ICANN 2019. Lecture Notes in Computer Science(), vol 11730. Springer, Cham. https://doi.org/10.1007/978-3-030-30490-4_59

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-30490-4_59

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-30489-8

  • Online ISBN: 978-3-030-30490-4

  • eBook Packages: Computer Science; Computer Science (R0)
