Abstract
The development of Artificial Intelligence (AI)-based systems to automatically generate hardware designs has gained momentum, with the aim of accelerating the hardware design cycle with no human intervention. Recently, the AI-based system ChatGPT from OpenAI has made headlines and gone viral within a short span of its launch. This chatbot can interactively communicate with designers through prompts to generate software and hardware code, write logic designs, and synthesize designs for implementation on Field Programmable Gate Arrays (FPGAs) or Application Specific Integrated Circuits (ASICs). However, an unvetted ChatGPT prompt aimed at generating hardware code may introduce security vulnerabilities into the generated code. In this work, we systematically investigate the strategies a designer should adopt to guide ChatGPT towards secure hardware code generation. To perform this analysis, we prompt ChatGPT to generate code for scenarios listed in the Common Weakness Enumeration (CWE) hardware design view (CWE-1194) from MITRE. We first demonstrate how ChatGPT generates insecure code under a diversity of prompts. Finally, we propose techniques a designer can adopt to generate secure hardware code. In total, we create secure hardware code for 10 noteworthy CWEs under the hardware design view listed on the MITRE site.
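To illustrate the class of weakness the study targets, the sketch below is a minimal Verilog example of our own (not code produced by or reported in the paper), hardened against CWE-1271 (uninitialized value on reset for registers holding security settings): a register holding a lock setting is given an explicit, locked default on reset instead of being left uninitialized.

// Illustrative sketch only; module and signal names are hypothetical.
module lock_register (
    input  wire clk,
    input  wire resetn,       // active-low reset
    input  wire unlock_req,   // explicit request to clear the lock
    output reg  lock_status   // 1 = locked (safe default), 0 = unlocked
);
    always @(posedge clk or negedge resetn) begin
        if (!resetn)
            lock_status <= 1'b1;   // secure default on reset (locked)
        else if (unlock_req)
            lock_status <= 1'b0;   // single, explicit unlock path
    end
endmodule

Omitting the reset branch would leave lock_status undefined at power-up, which is precisely the kind of insecure output an unvetted prompt can elicit.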
References
Asare, O., et al.: Is GitHub’s Copilot as bad as humans at introducing vulnerabilities in code? (2022). https://doi.org/10.48550/ARXIV.2204.04741, https://arxiv.org/abs/2204.04741
Austin, J., et al.: Program synthesis with large language models. arXiv preprint arXiv:2108.07732 (2021)
Budzianowski, P., et al.: Hello, it’s GPT-2 – how can I help you? Towards the use of pretrained language models for task-oriented dialogue systems. arXiv preprint arXiv:1907.05774 (2019)
Chen, M., et al.: Evaluating large language models trained on code (2021). https://doi.org/10.48550/ARXIV.2107.03374, https://arxiv.org/abs/2107.03374
Dale, R.: GPT-3: what’s it good for? Natural Language Engineering 27(1), 113–118 (2021)
Devlin, J., et al.: BERT: pre-training of deep bidirectional transformers for language understanding. CoRR (2018), http://arxiv.org/abs/1810.04805
Jain, N., et al.: Jigsaw: large language models meet program synthesis. In: Proceedings of the 44th ICSE, pp. 1219–1231 (2022)
Mangard, S., et al.: Power Analysis Attacks: Revealing the Secrets of Smart Cards, 1st edn. Springer Publishing Company, Incorporated (2010)
The MITRE Corporation: CWE-1221: incorrect register defaults or module parameters. https://cwe.mitre.org/data/definitions/1221.html
The MITRE Corporation: CWE-1224: improper restriction of write-once bit fields. https://cwe.mitre.org/data/definitions/1224.html
The MITRE Corporation: CWE-1234: hardware internal or debug modes allow override of locks. https://cwe.mitre.org/data/definitions/1234.html
The MITRE Corporation: CWE-1245: improper finite state machines (FSMs) in hardware logic. https://cwe.mitre.org/data/definitions/1245.html
The MITRE Corporation: CWE-1254: incorrect comparison logic granularity. https://cwe.mitre.org/data/definitions/1254.html
The MITRE Corporation: CWE-1255: comparison logic is vulnerable to power side-channel attacks. https://cwe.mitre.org/data/definitions/1255.html
The MITRE Corporation: CWE-1271: uninitialized value on reset for registers holding security settings. https://cwe.mitre.org/data/definitions/1271.html
The MITRE Corporation: CWE-1276: hardware child block incorrectly connected to parent system. https://cwe.mitre.org/data/definitions/1276.html
The MITRE Corporation: CWE-1280: access control check implemented after asset is accessed. https://cwe.mitre.org/data/definitions/1280.html
The MITRE Corporation: CWE-1298: hardware logic contains race conditions. https://cwe.mitre.org/data/definitions/1298.html
The MITRE Corporation: CWE-1194: CWE view: hardware design (2021). https://cwe.mitre.org/data/definitions/1194.html
Nguyen, N., Nadi, S.: An empirical evaluation of github copilot’s code suggestions. In: Proceedings of the 19th International Conference on Mining Software Repositories, pp. 1–5 (2022)
OpenAI: ChatGPT: optimizing language models for dialogue (2022). https://openai.com/blog/chatgpt/
Pearce, H., et al.: Asleep at the keyboard? Assessing the security of GitHub Copilot’s code contributions. In: 2022 IEEE Symposium on Security and Privacy (S&P), pp. 754–768. IEEE (2022)
Reddy, S., et al.: CoQA: a conversational question answering challenge. Trans. ACL 7, 249–266 (2019)
Vaswani, A., et al.: Attention is all you need. In: Advances in Neural Information Processing Systems. vol. 30. Curran Associates, Inc. (2017)
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Nair, M., Sadhukhan, R., Mukhopadhyay, D. (2023). How Hardened is Your Hardware? Guiding ChatGPT to Generate Secure Hardware Resistant to CWEs. In: Dolev, S., Gudes, E., Paillier, P. (eds) Cyber Security, Cryptology, and Machine Learning. CSCML 2023. Lecture Notes in Computer Science, vol 13914. Springer, Cham. https://doi.org/10.1007/978-3-031-34671-2_23
DOI: https://doi.org/10.1007/978-3-031-34671-2_23
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-34670-5
Online ISBN: 978-3-031-34671-2
eBook Packages: Computer Science, Computer Science (R0)