ChatGPT for cybersecurity: practical applications, challenges, and future directions

Published in Cluster Computing

Abstract

Advances in artificial intelligence (AI) have transformed many critical domains by providing cost-effective, automated, and intelligent solutions. Recently, ChatGPT has made substantial progress in natural language processing: as a chatbot-driven AI technology, it can interact and communicate with users and generate human-like responses. ChatGPT also has the potential to drive change in the cybersecurity domain. It can serve as a chatbot-driven security assistant for penetration testing, helping to analyze, investigate, and develop security solutions. However, ChatGPT raises concerns about its use for cybercrime and other malicious activities: attackers can use such a tool to cause substantial harm by exploiting vulnerabilities, writing malicious code, and circumventing security measures on a targeted system. This article investigates the implications of the ChatGPT model for the cybersecurity domain. We present state-of-the-art practical applications of ChatGPT in cybersecurity. In addition, we demonstrate in a case study how ChatGPT can be used to design and develop false data injection attacks against critical infrastructure such as industrial control systems. Conversely, we show how the same tool can help security analysts analyze, design, and develop security solutions against cyberattacks. Finally, this article discusses the open challenges and future directions of ChatGPT in cybersecurity.
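To make the case-study setting concrete, the sketch below illustrates what a false data injection (FDI) attack on industrial sensor measurements looks like, together with a naive residual-based detector. This is an illustrative toy model, not the article's actual case study: the process value, bias, and threshold are all hypothetical, and real FDI detection in industrial control systems uses state estimation over many correlated sensors rather than a single fixed setpoint.

```python
# Minimal sketch (hypothetical values): an FDI attack adds a constant
# bias to a noisy sensor stream, and a residual test flags samples that
# deviate too far from the expected process value.
import random

random.seed(42)  # deterministic for reproducibility

def sensor_readings(n, true_value=50.0, noise=0.5):
    """Simulate n noisy measurements around a known process value."""
    return [true_value + random.gauss(0, noise) for _ in range(n)]

def inject_false_data(readings, start, bias):
    """Attacker adds a constant bias to every measurement from `start` on."""
    return [r + bias if i >= start else r for i, r in enumerate(readings)]

def detect_fdi(readings, expected=50.0, threshold=3.0):
    """Flag sample indices whose residual from the setpoint is too large."""
    return [i for i, r in enumerate(readings)
            if abs(r - expected) > threshold]

clean = sensor_readings(100)
attacked = inject_false_data(clean, start=60, bias=10.0)
alarms = detect_fdi(attacked)
print(f"first alarm at sample {min(alarms)}, {len(alarms)} samples flagged")
```

A stealthier attacker would keep the bias inside the detector's threshold or ramp it slowly, which is precisely why simple setpoint residuals are insufficient in practice.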


Data availability

Enquiries about data availability should be directed to the authors.


Funding

The authors have not disclosed any funding.

Author information


Corresponding author

Correspondence to Muna Al-Hawawreh.

Ethics declarations

Competing interests

The authors have not disclosed any competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Al-Hawawreh, M., Aljuhani, A. & Jararweh, Y. Chatgpt for cybersecurity: practical applications, challenges, and future directions. Cluster Comput 26, 3421–3436 (2023). https://doi.org/10.1007/s10586-023-04124-5
