Abstract
Artificial Intelligence emerged as a field of study in the mid-20th century, driven by the ambition to build machines capable of emulating human intelligence and reasoning. Its rapid advancement, however, has raised many cybersecurity challenges spanning data security, privacy preservation, and model resilience. The field therefore needs tailored defense mechanisms and protective technologies to safeguard its integrity. In this paper, we survey AI cybersecurity, identifying its prominent areas and delineating the attacks that occur at different phases of the AI lifecycle. We then describe defensive strategies against adversarial attacks, including preprocessing techniques, adversarial training methodologies, and distillation methods.
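To make the attack side concrete, a minimal sketch of a gradient-sign adversarial perturbation (in the style of FGSM) against a simple logistic classifier is shown below. This is an illustrative toy model, not the paper's own implementation; the model, weights, and `eps` value are all assumptions chosen for demonstration.

```python
import numpy as np

def bce_loss(x, w, b, y):
    """Binary cross-entropy loss of a logistic model p = sigmoid(w.x + b)."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return -np.log(p if y == 1 else 1.0 - p)

def fgsm_perturb(x, w, b, y, eps):
    """Perturb input x along the sign of the loss gradient w.r.t. x."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w          # dL/dx for BCE with label y in {0, 1}
    return x + eps * np.sign(grad_x)

# Toy example: the perturbed input increases the classifier's loss.
w, b = np.array([0.5, -0.3]), 0.1
x, y = np.array([1.0, 2.0]), 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
print(bce_loss(x, w, b, y), bce_loss(x_adv, w, b, y))
```

Preprocessing defenses (e.g., denoising or compression) try to remove such perturbations before inference, while adversarial training folds examples like `x_adv` back into the training set.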
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Zou, J., Zhang, S., Qiu, M. (2024). Different Attack and Defense Types for AI Cybersecurity. In: Cao, C., Chen, H., Zhao, L., Arshad, J., Asyhari, T., Wang, Y. (eds) Knowledge Science, Engineering and Management. KSEM 2024. Lecture Notes in Computer Science(), vol 14886. Springer, Singapore. https://doi.org/10.1007/978-981-97-5498-4_14
Print ISBN: 978-981-97-5497-7
Online ISBN: 978-981-97-5498-4