Reinforcement learning-based autonomous attacker to uncover computer network vulnerabilities

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

In today’s intricate information technology landscape, the escalating complexity of computer networks is accompanied by a myriad of malicious threats seeking to compromise network components. To address these security challenges, we propose an approach that combines reinforcement learning with deep neural networks. Our method trains autonomous cyber-agents to strategically attack network nodes, aiming to expose vulnerabilities and extract confidential information. We employ several off-policy deep reinforcement learning algorithms, including deep Q-network (DQN), double DQN, and dueling DQN, to train and evaluate these agents within two enterprise simulation networks provided by Microsoft. The simulations are modeled as Markov games between attacker and defender and run without human intervention. Results demonstrate that agents trained with double DQN and dueling DQN surpass baseline agents trained using traditional reinforcement learning and standard DQN. This approach not only enhances our understanding of network vulnerabilities but also lays the groundwork for future efforts to fortify computer network defense and security.
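
As a brief orientation to the distinctions the abstract draws, the following is a minimal sketch, not code from the paper: toy NumPy values illustrating the bootstrap target of DQN, the decoupled target of double DQN [25], and the value-advantage aggregation of a dueling head [26].

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the online and target networks' Q-value outputs at the
# next state s', over five discrete actions.
q_online_next = rng.normal(size=5)   # Q(s', a; theta)
q_target_next = rng.normal(size=5)   # Q(s', a; theta^-)

reward, gamma, done = 1.0, 0.99, False

# DQN target: the target network both selects and evaluates the next
# action, which tends to over-estimate action values.
dqn_target = reward + (1 - done) * gamma * q_target_next.max()

# Double DQN target: the online network selects the action and the target
# network evaluates it, decoupling selection from evaluation.
a_star = int(q_online_next.argmax())
double_dqn_target = reward + (1 - done) * gamma * q_target_next[a_star]

# Dueling aggregation: a dueling head predicts a state value V(s) and
# per-action advantages A(s, a), recombined with the mean-advantage
# identifiability constraint.
value = rng.normal()                 # V(s)
advantages = rng.normal(size=5)      # A(s, a)
q_dueling = value + advantages - advantages.mean()

print(dqn_target, double_dqn_target, q_dueling)
```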


Data availability

The reinforcement learning agents presented in this manuscript were trained using Microsoft CyberBattleSim [27], a simulation platform for training and evaluating cybersecurity agents. The agents that we have developed and trained can be accessed from our repository at https://github.com/ammohamedds/CyberBattleSim_agents. In addition, the official repository for the simulation contains other baseline agents that may be of interest [27].
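
For orientation, a minimal interaction loop with the simulator is sketched below. It is not taken from the paper; the environment id `CyberBattleToyCtf-v0`, the registering import, and the classic four-tuple Gym step API are assumptions based on the public CyberBattleSim repository and may differ between releases.

```python
import gym
import cyberbattle._env.cyberbattle_env  # noqa: F401  (importing registers the CyberBattle* Gym environments)

# Environment id taken from the public repository; assumed, may change.
env = gym.make('CyberBattleToyCtf-v0')

observation = env.reset()
total_reward = 0.0
for step in range(100):
    # A trained DQN agent would pick the action here; random sampling is
    # only a stand-in, and many sampled actions are invalid against the
    # current network state and yield no reward.
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    total_reward += reward
    if done:  # e.g., the attacker's goal was reached
        break

print(f"episode ended after {step + 1} steps, total reward {total_reward:.1f}")
```

A learning agent would replace the random sampling with a policy derived from the Q-network, typically epsilon-greedy during training.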

References

  1. Turing AM (1950) Computing machinery and intelligence. Mind 59:433–460. https://doi.org/10.1093/mind/lix.236.433

  2. Tesauro G (1995) Temporal difference learning and td-gammon. Commun ACM 38(3):58–68. https://doi.org/10.1145/203330.203343

  3. Kohl N, Stone P (2004) Policy gradient reinforcement learning for fast quadrupedal locomotion. In: IEEE international conference on robotics and automation, 2004. Proceedings. ICRA ’04, vol 3, pp 2619–2624. https://doi.org/10.1109/ROBOT.2004.1307456

  4. Ng AY, Coates A, Diel M, Ganapathi V, Schulte J, Tse B, Berger E, Liang E (2006) Autonomous inverted helicopter flight via reinforcement learning. In: Experimental Robotics IX, Springer, Berlin, Heidelberg, pp 363–372

  5. Singh S, Litman D, Kearns M, Walker M (2002) Optimizing dialogue management with reinforcement learning: experiments with the njfun system. J Artif Int Res 16(1):105–133

  6. Arulkumaran K, Deisenroth MP, Brundage M, Bharath AA (2017) A brief survey of deep reinforcement learning. CoRR arXiv:1708.05866

  7. Rusu AA, Colmenarejo SG, Gülçehre Ç, Desjardins G, Kirkpatrick J, Pascanu R, Mnih V, Kavukcuoglu K, Hadsell R (2016) Policy distillation. In: ICLR (Poster). arxiv:1511.06295

  8. Nguyen TT, Reddi VJ (2019) Deep reinforcement learning for cyber security. CoRR arXiv:1906.05799

  9. Mnih V, Kavukcuoglu K, Silver D, Graves A, Antonoglou I, Wierstra D, Riedmiller MA (2013) Playing atari with deep reinforcement learning. CoRR arxiv:1312.5602

  10. Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, Graves A, Riedmiller M, Fidjeland AK, Ostrovski G, Petersen S, Beattie C, Sadik A, Antonoglou I, King H, Kumaran D, Wierstra D, Legg S, Hassabis D (2015) Human-level control through deep reinforcement learning. Nature 518(7540):529–533. https://doi.org/10.1038/nature14236

  11. Silver D, Huang A, Maddison CJ, Guez A, Sifre L, Driessche G, Schrittwieser J, Antonoglou I, Panneershelvam V, Lanctot M, Dieleman S, Grewe D, Nham J, Kalchbrenner N, Sutskever I, Lillicrap TP, Leach M, Kavukcuoglu K, Graepel T, Hassabis D (2016) Mastering the game of go with deep neural networks and tree search. Nature 529:484–489

  12. Vinyals O, Ewalds T, Bartunov S, Georgiev P, Vezhnevets AS, Yeo M, Makhzani A, Küttler H, Agapiou JP, Schrittwieser J, Quan J, Gaffney S, Petersen S, Simonyan K, Schaul T, Hasselt H, Silver D, Lillicrap TP, Calderone K, Keet P, Brunasso A, Lawrence D, Ekermo A, Repp J, Tsing R (2017) Starcraft II: a new challenge for reinforcement learning. CoRR arxiv:1708.04782

  13. Isele D, Cosgun A, Subramanian K, Fujimura K (2017) Navigating intersections with autonomous vehicles using deep reinforcement learning. CoRR arxiv:1705.01196

  14. Gu S, Holly E, Lillicrap TP, Levine S (2016) Deep reinforcement learning for robotic manipulation. CoRR arxiv:1610.00633

  15. Savant VB, Kasar RD (2021) A review on network security and cryptography. Res J Eng Technol 12(4):110–114

  16. Al-Shabi M (2019) A survey on symmetric and asymmetric cryptography algorithms in information security. Int J Sci Res Publ (IJSRP) 9(3):576–589

  17. Pachghare V (2019) Cryptography and information security. PHI Learning Pvt. Ltd

  18. Mushtaq MF, Jamel S, Disina AH, Pindar ZA, Shakir NSA, Deris MM (2017) A survey on the cryptographic encryption algorithms. Int J Adv Comput Sci Appl 8(11)

  19. Sharma DK, Singh NC, Noola DA, Doss AN, Sivakumar J (2022) A review on various cryptographic techniques and algorithms. Mater Today Proc 51:104–109

  20. Maloof MA et al (2006) Machine learning and data mining for computer security: methods and applications. Springer, Berlin

  21. Han Y, Rubinstein BIP, Abraham T, Alpcan T, Vel OY, Erfani SM, Hubczenko D, Leckie C, Montague P (2018) Reinforcement learning for autonomous defence in software-defined networking. CoRR arxiv:1808.05770

  22. Wan X, Sheng G, Li Y, Xiao L, Du X (2017) Reinforcement learning based mobile offloading for cloud-based malware detection. In: GLOBECOM 2017 - 2017 IEEE global communications conference, pp 1–6. https://doi.org/10.1109/GLOCOM.2017.8254503

  23. Li Y, Liu J, Li Q, Xiao L (2015) Mobile cloud offloading for malware detections with learning. In: 2015 IEEE conference on computer communications workshops (INFOCOM WKSHPS), pp 197–201. https://doi.org/10.1109/INFCOMW.2015.7179384

  24. Manshaei MH, Zhu Q, Alpcan T, Başar T, Hubaux J-P (2013) Game theory meets network security and privacy. ACM Comput Surv. https://doi.org/10.1145/2480741.2480742

  25. Hasselt H, Guez A, Silver D (2015) Deep reinforcement learning with double q-learning. CoRR arxiv:1509.06461

  26. Wang Z, Freitas N, Lanctot M (2015) Dueling network architectures for deep reinforcement learning. CoRR arxiv:1511.06581

  27. Microsoft Defender Research Team (2021) CyberBattleSim. GitHub. https://github.com/microsoft/CyberBattleSim. Created by Christian Seifert, Michael Betser, William Blum, James Bono, Kate Farris, Emily Goren, Justin Grana, Kristian Holsheimer, Brandon Marken, Joshua Neil, Nicole Nichols, Jugal Parikh, Haoran Wei

  28. Khan S, Rahmani H, Shah SAA, Bennamoun M, Medioni G, Dickinson S (2018) A guide to convolutional neural networks for computer vision, vol 8. Springer, Berlin

  29. Ahmed AM, Abdelrazek M, Aryal S, Nguyen TT (2023) An overview of Eulerian video motion magnification methods. Comput Graph 117:145–163. https://doi.org/10.1016/j.cag.2023.10.015

  30. Goldberg Y (2022) Neural network methods for natural language processing. Springer, Berlin

  31. Bekey GA, Goldberg KY (2012) Neural networks in robotics, vol 202. Springer, Berlin

  32. Sutton RS, Barto AG (2018) Reinforcement learning: an introduction, 2nd edn. MIT Press, Cambridge. http://incompleteideas.net/book/the-book-2nd.html

  33. Brockman G, Cheung V, Pettersson L, Schneider J, Schulman J, Tang J, Zaremba W (2016) Openai gym. CoRR arxiv:1606.01540

  34. Wydmuch M, Kempka M, Jaśkowski W (2018) Vizdoom competitions: playing doom from pixels. IEEE Trans Games 11(3):248–259

  35. Lanctot M, Lockhart E, Lespiau J-B, Zambaldi V, Upadhyay S, Pérolat J, Srinivasan S, Timbers F, Tuyls K, Omidshafiei S, Hennes D, Morrill D, Muller P, Ewalds T, Faulkner R, Kramár J, Vylder BD, Saeta B, Bradbury J, Ding D, Borgeaud S, Lai M, Schrittwieser J, Anthony T, Hughes E, Danihelka I, Ryan-Davis J (2019) OpenSpiel: a framework for reinforcement learning in games. CoRR arxiv:1908.09453

  36. Shah S, Dey D, Lovett C, Kapoor A (2017) Airsim: high-fidelity visual and physical simulation for autonomous vehicles. In: Field and service robotics. arxiv:1705.05065

  37. Tunyasuvunakool S, Muldal A, Doron Y, Liu S, Bohez S, Merel J, Erez T, Lillicrap T, Heess N, Tassa Y (2020) dm_control: software and tasks for continuous control. Softw Impacts 6:100022. https://doi.org/10.1016/j.simpa.2020.100022

  38. Hasselt H (2012) Reinforcement learning in continuous state and action spaces. In: Wiering M, Otterlo M (eds) Reinforcement learning: state-of-the-art. Springer, Berlin, Heidelberg, pp 207–251

  39. Achiam J (2018) Spinning Up in Deep Reinforcement Learning. OpenAI. https://spinningup.openai.com

  40. Littman ML (1994) Markov games as a framework for multi-agent reinforcement learning. In: Proceedings of the eleventh international conference on international conference on machine learning. ICML’94, pp 157–163. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA

  41. Watkins CJCH (1989) Learning from delayed rewards. PhD thesis, King’s College, Cambridge, UK

  42. Konda V, Tsitsiklis J (2000) Actor-critic algorithms. In: Advances in neural information processing systems, vol 12. MIT Press, Cambridge, pp 1008–1014

  43. Mnih V, Badia AP, Mirza M, Graves A, Lillicrap TP, Harley T, Silver D, Kavukcuoglu K (2016) Asynchronous methods for deep reinforcement learning. CoRR arxiv:1602.01783

  44. Schulman J, Levine S, Moritz P, Jordan MI, Abbeel P (2015) Trust region policy optimization. CoRR arxiv:1502.05477

  45. Schulman J, Wolski F, Dhariwal P, Radford A, Klimov O (2017) Proximal policy optimization algorithms. arxiv: 1707.06347

  46. Fortunato M, Azar MG, Piot B, Menick J, Osband I, Graves A, Mnih V, Munos R, Hassabis D, Pietquin O, Blundell C, Legg S (2017) Noisy networks for exploration. CoRR arxiv: 1706.10295

  47. Lillicrap TP, Hunt JJ, Pritzel A, Heess N, Erez T, Tassa Y, Silver D, Wierstra D (2016) Continuous control with deep reinforcement learning. In: ICLR (Poster). arxiv:1509.02971

  48. Watkins CJCH, Dayan P (1992) Q-learning. Mach Learn 8(3):279–292. https://doi.org/10.1007/BF00992698

  49. Pollack J, Blair A (1996) Why did td-gammon work? In: Mozer MC, Jordan M, Petsche T (eds) Advances in neural information processing systems, vol 9. MIT Press, Cambridge https://proceedings.neurips.cc/paper/1996/file/459a4ddcb586f24efd9395aa7662bc7c-Paper.pdf

  50. Baird L (1995) Residual algorithms: reinforcement learning with function approximation. In: Proceedings of the twelfth international conference on machine learning, pp 30–37. Morgan Kaufmann, Burlington

  51. Tsitsiklis JN, Van Roy B (1997) An analysis of temporal-difference learning with function approximation. IEEE Trans Autom Control 42(5):674–690. https://doi.org/10.1109/9.580874

  52. Sallans B, Hinton GE (2004) Reinforcement learning with factored states and actions. J Mach Learn Res 5:1063–1088

  53. Maei HR, Szepesvári C, Bhatnagar S, Precup D, Silver D, Sutton RS (2009) Convergent temporal-difference learning with arbitrary smooth function approximation. In: Proceedings of the 22nd international conference on neural information processing systems. NIPS’09, pp 1204–1212. Curran Associates Inc., Red Hook, NY, USA

  54. Sutton RS, Maei H, Szepesvári C (2008) A convergent o(n) temporal-difference algorithm for off-policy learning with linear function approximation. In: Koller D, Schuurmans D, Bengio Y, Bottou L (eds) Advances in Neural Information Processing Systems, vol 21. Curran Associates, Inc., New York https://proceedings.neurips.cc/paper/2008/file/e0c641195b27425bb056ac56f8953d24-Paper.pdf

  55. Sutton RS, Maei HR, Precup D, Bhatnagar S, Silver D, Szepesvári C, Wiewiora E (2009) Fast gradient-descent methods for temporal-difference learning with linear function approximation. In: Proceedings of the 26th annual international conference on machine learning (ICML ’09), pp 993–1000. Association for Computing Machinery, New York. https://doi.org/10.1145/1553374.1553501

  56. Lin L-J (1992) Reinforcement learning for robots using neural networks. PhD thesis, Carnegie Mellon University, USA. UMI Order No. GAX93-22750

  57. Mcclelland JL, Mcnaughton BL, O’Reilly RC (1995) Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychol Rev 102(3):419–457

  58. O’Neill J, Pleydell-Bouverie B, Dupret D, Csicsvari J (2010) Play it again: reactivation of waking experience and memory. Trends Neurosci 33(5):220–229. https://doi.org/10.1016/j.tins.2010.01.006

  59. Riedmiller M (2005) Neural fitted q iteration - first experiences with a data efficient neural reinforcement learning method. In: Proceedings of the 16th European conference on machine learning. ECML’05, pp 317–328. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11564096_32

  60. Lange S, Riedmiller M (2010) Deep auto-encoder neural networks in reinforcement learning. In: The 2010 international joint conference on neural networks (IJCNN), pp 1–8. https://doi.org/10.1109/IJCNN.2010.5596468

  61. Diuk C, Cohen A, Littman ML (2008) An object-oriented representation for efficient reinforcement learning. In: Proceedings of the 25th international conference on machine learning. ICML ’08, pp 240–247. Association for Computing Machinery, New York https://doi.org/10.1145/1390156.1390187

  62. Bellemare MG, Naddaf Y, Veness J, Bowling M (2013) The arcade learning environment: an evaluation platform for general agents. J Artif Int Res 47(1):253–279

  63. Hausknecht M, Lehman J, Miikkulainen R, Stone P (2014) A neuroevolution approach to general Atari game playing. IEEE Trans Comput Intell AI Games 6(4):355–366. https://doi.org/10.1109/TCIAIG.2013.2294713

  64. Hasselt H (2010) Double q-learning. In: Lafferty J, Williams C, Shawe-Taylor J, Zemel R, Culotta A (eds) Advances in neural information processing systems, vol 23. Curran Associates, Inc., New York. https://proceedings.neurips.cc/paper/2010/file/091d584fced301b442654dd8c23b3fc9-Paper.pdf

  65. Kaelbling LP, Littman ML, Moore AW (1996) Reinforcement learning: a survey. J Artif Int Res 4(1):237–285

  66. Sutton RS (1990) Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In: Porter B, Mooney R (eds) Machine learning proceedings 1990, pp 216–224. Morgan Kaufmann, San Francisco (CA). https://doi.org/10.1016/B978-1-55860-141-3.50030-4

  67. Brafman RI, Tennenholtz M (2003) R-max - a general polynomial time algorithm for near-optimal reinforcement learning. J Mach Learn Res 3:213–231. https://doi.org/10.1162/153244303765208377

  68. Fujimoto S, Hoof H, Meger D (2018) Addressing function approximation error in actor-critic methods. CoRR arxiv: 1802.09477

  69. Dijk M, Juels A, Oprea A, Rivest R (2013) Flipit: the game of stealthy takeover. J Cryptol. https://doi.org/10.1007/s00145-012-9134-5

  70. Chung K, Kamhoua CA, Kwiat KA, Kalbarczyk ZT, Iyer RK (2016) Game theory with learning for cyber security monitoring. In: 2016 IEEE 17th international symposium on high assurance systems engineering (HASE), pp 1–8. https://doi.org/10.1109/HASE.2016.48

  71. Tartakovsky AG, Rozovskii BL, Blažek RB, Kim H (2006) Detection of intrusions in information systems by sequential change-point methods. Stat Methodol 3(3):252–293. https://doi.org/10.1016/j.stamet.2005.05.003

  72. Rasouli M, Miehling E, Teneketzis D (2014) A supervisory control approach to dynamic cyber-security. CoRR arxiv:1409.0838

  73. Liu W, Zhong S (2017) Web malware spread modelling and optimal control strategies. Sci Rep 7(1):42308

  74. Miehling E, Rasouli M, Teneketzis D (2018) A pomdp approach to the dynamic defense of large-scale cyber networks. IEEE Trans Inf Forensics Secur 13(10):2490–2505. https://doi.org/10.1109/TIFS.2018.2819967

  75. Bronfman-Nadas R, Zincir-Heywood N, Jacobs JT (2018) An artificial arms race: could it improve mobile malware detectors? In: 2018 network traffic measurement and analysis conference (TMA), pp 1–8. https://doi.org/10.23919/TMA.2018.8506545

  76. Myerson RB (1991) Game theory: analysis of conflict. Harvard University Press, Cambridge. http://www.jstor.org/stable/j.ctvjsf522. Accessed 04 Oct 2022

  77. Alpcan T, Basar T (2003) A game theoretic approach to decision and analysis in network intrusion detection. In: 42nd IEEE international conference on decision and control (IEEE Cat. No.03CH37475), vol 3, pp 2595–2600. https://doi.org/10.1109/CDC.2003.1273013

  78. Nguyen KC, Alpcan T, Basar T (2009) Security games with incomplete information. In: 2009 IEEE international conference on communications, pp 1–6. https://doi.org/10.1109/ICC.2009.5199443

  79. Durkota K, Lisy V, Bošansky B, Kiekintveld C (2015) Optimal network security hardening using attack graph games. In: Proceedings of the 24th International Conference on Artificial Intelligence. IJCAI’15, pp 526–532. AAAI Press

  80. Carroll TE, Grosu D (2009) A game theoretic investigation of deception in network security. In: 2009 Proceedings of 18th international conference on computer communications and networks, pp 1–6

  81. Tambe M (2011) Security and game theory: algorithms, deployed systems, lessons learned, 1st edn. Cambridge University Press, Cambridge

  82. Mailath GJ, Samuelson L (2006) Repeated games and reputations: long-run relationships. Oxford University Press, Oxford. https://doi.org/10.1093/acprof:oso/9780195300796.001.0001

  83. Bergadano F, Gunetti D, Picardi C (2002) User authentication through keystroke dynamics. ACM Trans Inf Syst Secur 5(4):367–397. https://doi.org/10.1145/581271.581272

  84. Maxion RA, Townsend TN (2004) Masquerade detection augmented with error analysis. IEEE Trans Reliab 53(1):124–147. https://doi.org/10.1109/TR.2004.824828

  85. Shapley LS (1953) Stochastic games. Proc Nat Acad Sci 39(10):1095–1100. https://doi.org/10.1073/pnas.39.10.1095

  86. Bethencourt J, Franklin J, Vernon M (2005) Mapping internet sensors with probe response attacks. In: 14th USENIX security symposium (USENIX Security 05). USENIX Association, Baltimore, MD. https://www.usenix.org/conference/14th-usenix-security-symposium/mapping-internet-sensors-probe-response-attacks

  87. Elderman R, Pater L, Thie A, Drugan M, Wiering M (2017) Adversarial reinforcement learning in a cyber security simulation. In: Filipe J, van den Herik J, Rocha A (eds) ICAART 2017 - Proceedings of the 9th international conference on agents and artificial intelligence, pp 559–566. SCITEPRESS - Science and Technology Publications, Lda

  88. Alpcan T, Başar T (2010) Network security: a decision and game-theoretic approach. Cambridge University Press, Cambridge https://doi.org/10.1017/CBO9780511760778

  89. Li T, Peng G, Zhu Q, Basar T (2021) The confluence of networks, games and learning. CoRR arxiv:2105.08158

  90. Roy S, Ellis C, Shiva S, Dasgupta D, Shandilya V, Wu Q (2010) A survey of game theory as applied to network security. In: 2010 43rd Hawaii international conference on system sciences, pp 1–10. https://doi.org/10.1109/HICSS.2010.35

  91. Uther WTB, Veloso MM (2003) Adversarial reinforcement learning

  92. Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R (2014) Intriguing properties of neural networks. In: 2nd international conference on learning representations (ICLR 2014)

  93. Gilmer J, Metz L, Faghri F, Schoenholz SS, Raghu M, Wattenberg M, Goodfellow IJ (2018) Adversarial spheres. CoRR arxiv:1801.02774

  94. Shafahi A, Huang WR, Studer C, Feizi S, Goldstein T (2018) Are adversarial examples inevitable? CoRR arxiv:1809.02104

  95. Goodfellow IJ, Shlens J, Szegedy C (2014) Explaining and harnessing adversarial examples. arXiv arxiv:1412.6572. https://doi.org/10.48550/ARXIV.1412.6572

  96. Huang SH, Papernot N, Goodfellow IJ, Duan Y, Abbeel P (2017) Adversarial attacks on neural network policies. CoRR arxiv:1702.02284

  97. Lin Y, Hong Z, Liao Y, Shih M, Liu M, Sun M (2017) Tactics of adversarial attack on deep reinforcement learning agents. CoRR arxiv:1703.06748

  98. Kos J, Song D (2017) Delving into adversarial attacks on deep policies. arXiv arxiv:1705.06452. https://doi.org/10.48550/ARXIV.1705.06452

  99. Pattanaik A, Tang Z, Liu S, Bommannan G, Chowdhary G (2017) Robust deep reinforcement learning with adversarial attacks. CoRR arxiv:1712.03632

  100. Gleave A, Dennis M, Kant N, Wild C, Levine S, Russell S (2019) Adversarial policies: Attacking deep reinforcement learning. CoRR arxiv:1905.10615

  101. Molina-Markham A, Miniter C, Powell B, Ridley A (2021) Network environment design for autonomous cyberdefense. CoRR arxiv:2103.07583

  102. Xie C, Wu Y, Maaten L, Yuille AL, He K (2018) Feature denoising for improving adversarial robustness. CoRR arxiv:1812.03411

  103. Pinto L, Davidson J, Sukthankar R, Gupta A (2017) Robust adversarial reinforcement learning. CoRR arxiv:1703.02702

  104. Ilahi I, Usama M, Qadir J, Janjua MU, Al-Fuqaha A, Hoang DT, Niyato D (2022) Challenges and countermeasures for adversarial attacks on deep reinforcement learning. IEEE Trans Artif Intell 3(2):90–109. https://doi.org/10.1109/TAI.2021.3111139

  105. Chen T, Liu J, Xiang Y, Niu W, Tong E, Han Z (2019) Adversarial attack and defense in reinforcement learning-from AI security view. Cybersecurity 2:1–22

  106. Silva SH, Najafirad P (2020) Opportunities and challenges in deep learning adversarial robustness: a survey. CoRR arxiv:2007.00753

  107. Rege M (2018) Machine learning for cyber defense and attack. https://api.semanticscholar.org/CorpusID:232319054

  108. Benjamin DP, Pal P, Webber F, Rubel P, Atigetchi M (2008) Using a cognitive architecture to automate cyberdefense reasoning. In: 2008 bio-inspired, learning and intelligent systems for security, pp 58–63. https://doi.org/10.1109/BLISS.2008.17

  109. Ko RKL (2020) Cyber autonomy: automating the hacker - self-healing, self-adaptive, automatic cyber defense systems and their impact to the industry, society and national security. CoRR arxiv:2012.04405

  110. Baah GK, Hobson T, Okhravi H, Roberts SC, Streilein WW, Yuditskaya S (2015) A study of gaps in cyber defense automation. https://api.semanticscholar.org/CorpusID:40128147

  111. Applebaum A, Dennler C, Dwyer P, Moskowitz M, Nguyen H, Nichols N, Park N, Rachwalski P, Rau F, Webster A, Wolk M (2022) Bridging automated to autonomous cyber defense: foundational analysis of tabular q-learning. In: Proceedings of the 15th ACM workshop on artificial intelligence and security. AISec’22, pp 149–159. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3560830.3563732

  112. Standen M, Lucas M, Bowman D, Richer TJ, Kim J, Marriott D (2021) CybORG: a gym for the development of autonomous cyber agents. arXiv arxiv:2108.09118

  113. Vyas S, Hannay J, Bolton A, Burnap PP (2023) Automated cyber defence: a review

  114. Lohn A, Knack A, Burke A, Jackson K (2023) Autonomous cyber defence: a roadmap from lab to ops. Technical report, CETaS Research Reports (June). https://cetas.turing.ac.uk/publications/autonomous-cyber-defence

  115. Rush G, Tauritz D, Kent A (2015) Coevolutionary agent-based network defense lightweight event system (CANDLES). In: Proceedings of the companion publication of the 2015 annual conference on genetic and evolutionary computation, pp 859–866. https://doi.org/10.1145/2739482.2768429

  116. Zhu M, Hu Z, Liu P (2014) Reinforcement learning algorithms for adaptive cyber defense against heartbleed. In: Proceedings of the First ACM workshop on moving target defense. MTD ’14, pp 51–58. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/2663474.2663481

  117. Rawat DB, Reddy SR (2017) Software defined networking architecture, security and energy efficiency: a survey. IEEE Commun Surv Tutor 19(1):325–346. https://doi.org/10.1109/COMST.2016.2618874

  118. Ahmad I, Namal S, Ylianttila M, Gurtov A (2015) Security in software defined networks: a survey. IEEE Commun Surv Tutor 17(4):2317–2346. https://doi.org/10.1109/COMST.2015.2474118

  119. Kim G, Kim Y, Lim H (2022) Deep reinforcement learning-based routing on software-defined networks. IEEE Access 10:18121–18133. https://doi.org/10.1109/ACCESS.2022.3151081

  120. Salahuddin MA, Al-Fuqaha A, Guizani M (2015) Software-defined networking for RSU clouds in support of the internet of vehicles. IEEE Internet Things J 2(2):133–144. https://doi.org/10.1109/JIOT.2014.2368356

  121. Mao H, Alizadeh M, Menache I, Kandula S (2016) Resource management with deep reinforcement learning. In: Proceedings of the 15th ACM workshop on hot topics in networks (HotNets ’16), pp 50–56. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3005745.3005750

  122. Lin S-C, Akyildiz IF, Wang P, Luo M (2016) Qos-aware adaptive routing in multi-layer hierarchical software defined networks: a reinforcement learning approach. In: 2016 IEEE international conference on services computing (SCC), pp 25–33. https://doi.org/10.1109/SCC.2016.12

  123. Huang R, Chu X, Zhang J, Hu YH (2015) Energy-efficient monitoring in software defined wireless sensor networks using reinforcement learning: A prototype. Int J Distrib Sens Netw. https://doi.org/10.1155/2015/360428

  124. Salahuddin MA, Al-Fuqaha A, Guizani M (2016) Reinforcement learning for resource provisioning in the vehicular cloud. IEEE Wirel Commun 23(4):128–135. https://doi.org/10.1109/MWC.2016.7553036

  125. Mestres A, Rodriguez-Natal A, Carner J, Barlet-Ros P, Alarcón E, Solé M, Muntés-Mulero V, Meyer D, Barkai S, Hibbett MJ, Estrada G, Ma’ruf K, Coras F, Ermagan V, Latapie H, Cassar C, Evans J, Maino F, Walrand J, Cabellos A (2017) Knowledge-defined networking. SIGCOMM Comput Commun Rev 47(3):2–10. https://doi.org/10.1145/3138808.3138810

  126. Kim S, Son J, Talukder A, Hong CS (2016) Congestion prevention mechanism based on q-learning for efficient routing in SDN. In: 2016 international conference on information networking (ICOIN), pp 124–128. https://doi.org/10.1109/ICOIN.2016.7427100

  127. Casas-Velasco DM, Rendon OMC, Fonseca NLS (2021) Intelligent routing based on reinforcement learning for software-defined networking. IEEE Trans Netw Serv Manage 18(1):870–881. https://doi.org/10.1109/TNSM.2020.3036911

  128. Radoglou-Grammatikis P, Rompolos K, Sarigiannidis P, Argyriou V, Lagkas T, Sarigiannidis A, Goudos S, Wan S (2022) Modeling, detecting, and mitigating threats against industrial healthcare systems: a combined software defined networking and reinforcement learning approach. IEEE Trans Industr Inf 18(3):2041–2052. https://doi.org/10.1109/TII.2021.3093905

  129. Ridley A (2018) Machine learning for autonomous cyber defense. Next Wave 22(1):7–14

  130. Hammar K, Stadler R (2020) Finding effective security strategies through reinforcement learning and self-play. CoRR arxiv:2009.08120

  131. Berner C, Brockman G, Chan B, Cheung V, Debiak P, Dennison C, Farhi D, Fischer Q, Hashme S, Hesse C, Józefowicz R, Gray S, Olsson C, Pachocki J, Petrov M, Oliveira Pinto HP, Raiman J, Salimans T, Schlatter J, Schneider J, Sidor S, Sutskever I, Tang J, Wolski F, Zhang S (2019) Dota 2 with large scale deep reinforcement learning. CoRR arxiv:1912.06680

  132. Bowling M, Veloso MM (2002) Scalable learning in stochastic games

  133. Li S, Wu Y, Cui X, Dong H, Fang F, Russell S (2019) Robust multi-agent reinforcement learning via minimax deep deterministic policy gradient. In: Proceedings of the AAAI conference on artificial intelligence, vol 33. AAAI Press. https://doi.org/10.1609/aaai.v33i01.33014213

  134. Hammar K, Stadler R (2021) Learning intrusion prevention policies through optimal stopping. CoRR arxiv:2106.07160

  135. Walter E, Ferguson-Walter K, Ridley A (2021) Incorporating deception into cyberbattlesim for autonomous defense. CoRR arxiv:2108.13980

  136. Mokube I, Adams M (2007) Honeypots: concepts, approaches, and challenges. In: ACM-SE 45: Proceedings of the 45th annual southeast regional conference, pp 321–326

  137. Andrew A, Spillard S, Collyer J, Dhir N (2022) Developing optimal causal cyber-defence agents via cyber security simulation

  138. Aglietti V, Dhir N, González J, Damoulas T (2021) Dynamic causal bayesian optimization

  139. Foley M, Hicks C, Highnam K, Mavroudis V (2022) Autonomous network defence using reinforcement learning. In: Proceedings of the 2022 ACM on Asia conference on computer and communications security. ASIA CCS ’22, pp 1252–1254. Association for computing machinery, New York, NY, USA. https://doi.org/10.1145/3488932.3527286

  140. Foley M, Wang M, M Z, Hicks C, Mavroudis V (2023) Inroads into autonomous network defence using explained reinforcement learning

  141. Zhu Z, Chen M, Zhu C, Zhu Y (2024) Effective defense strategies in network security using improved double dueling deep q-network. Comput Secur 136:103578. https://doi.org/10.1016/j.cose.2023.103578

  142. Kiely M, Bowman D, Standen M, Moir C (2023) On autonomous agents in a cyber defence environment

  143. Standen M, Bowman D, Hoang S, Richer T, Lucas M, Van Tassel R, Vu P, Kiely M (2022) Cyber Autonomy Gym for Experimentation Challenge 2. GitHub

  144. Adawadkar AMK, Kulkarni N (2022) Cyber-security and reinforcement learning - a brief survey. Eng Appl Artif Intell 114:105116. https://doi.org/10.1016/j.engappai.2022.105116

  145. Sewak M, Sahay SK, Rathore H (2022) Deep reinforcement learning for cybersecurity threat detection and protection: a review. Springer, Berlin, pp 51–72. https://doi.org/10.1007/978-3-030-97532-6_4

  146. Wang W, Sun D, Jiang F, Chen X, Zhu C (2022) Research and challenges of reinforcement learning in cyber defense decision-making for intranet security. Algorithms. https://doi.org/10.3390/a15040134

  147. Huang Y, Huang L, Zhu Q (2022) Reinforcement learning for feedback-enabled cyber resilience. Annu Rev Control 53:273–295. https://doi.org/10.1016/j.arcontrol.2022.01.001

  148. Fard NE, Selmic RR, Khorasani K (2023) A review of techniques and policies on cybersecurity using artificial intelligence and reinforcement learning algorithms. IEEE Technol Soc Mag 42(3):57–68. https://doi.org/10.1109/MTS.2023.3306540

  149. Baillie C, Standen M, Schwartz J, Docking M, Bowman D, Kim J (2020) Cyborg: an autonomous cyber operations research gym. CoRR arxiv:2002.10667

  150. Schwartz J (2022) Network Attack Simulator. https://github.com/Jjschwartz/NetworkAttackSimulator

  151. Tian Z, Shi W, Wang Y, Zhu C, Du X, Su S, Sun Y, Guizani N (2019) Real time lateral movement detection based on evidence reasoning network for edge computing environment. CoRR arxiv:1902.04387

  152. Bohara A, Noureddine MA, Fawaz A, Sanders WH (2017) An unsupervised multi-detector approach for identifying malicious lateral movement. In: 2017 IEEE 36th symposium on reliable distributed systems (SRDS), pp 224–233. https://doi.org/10.1109/SRDS.2017.31

  153. Fawaz A, Bohara A, Cheh C, Sanders WH (2016) Lateral movement detection using distributed data fusion. In: 2016 IEEE 35th symposium on reliable distributed systems (SRDS), pp 21–30 . https://doi.org/10.1109/SRDS.2016.014

Author information

Corresponding author

Correspondence to Ahmed Mohamed Ahmed.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Mohamed Ahmed, A., Nguyen, T.T., Abdelrazek, M. et al. Reinforcement learning-based autonomous attacker to uncover computer network vulnerabilities. Neural Comput & Applic 36, 14341–14360 (2024). https://doi.org/10.1007/s00521-024-09668-0
