Abstract
Advances in artificial intelligence (AI) have transformed many critical domains by providing cost-effective, automated, and intelligent solutions. Recently, ChatGPT has marked a momentous step forward in natural language processing: a chatbot-driven AI with the capability to interact with users and generate human-like responses. ChatGPT also has the potential to drive change in the cybersecurity domain. It can be utilized as a chatbot-driven security assistant for penetration testing, helping to analyze, investigate, and develop security solutions. However, ChatGPT raises concerns about its potential use for cybercrime and other malicious activities: attackers can use such a tool to cause substantial harm by exploiting vulnerabilities, writing malicious code, and circumventing security measures on a targeted system. This article investigates the implications of ChatGPT for the cybersecurity domain. We present state-of-the-art practical applications of ChatGPT in cybersecurity. In addition, we demonstrate in a case study how ChatGPT can be used to design and develop false data injection (FDI) attacks against critical infrastructure such as industrial control systems. Conversely, we show how the same tool can help security analysts analyze, design, and develop security solutions against cyberattacks. Finally, this article discusses the open challenges and future directions of ChatGPT in cybersecurity.
Data availability
Enquiries about data availability should be directed to the authors.
Funding
The authors have not disclosed any funding.
Ethics declarations
Competing interests
The authors have not disclosed any competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Al-Hawawreh, M., Aljuhani, A. & Jararweh, Y. ChatGPT for cybersecurity: practical applications, challenges, and future directions. Cluster Comput 26, 3421–3436 (2023). https://doi.org/10.1007/s10586-023-04124-5