
A Survey of Security Protection Methods for Deep Learning Model


Impact Statement:
Deep learning (DL) models have achieved outstanding performance in a variety of domains. However, security threats to DL models are becoming more serious under the influence of massive data and diverse attacks. In this article, we summarize the attacks and privacy protection methods at each stage of the DL model lifecycle. In addition, we focus on security issues and solutions for DL models on different deployment platforms. This article guides developers and researchers in their future design and research efforts and draws their attention to security issues on edge mobile devices. The goal of this article is to identify efficient solutions to the security problems of DL models that can enhance both their security and performance.

Abstract:

In recent years, deep learning (DL) models have attracted widespread attention. Owing to its characteristics, DL has been successfully applied in fields such as object detection, super-resolution reconstruction, speech recognition, and natural language processing, bringing high efficiency to industrial production and daily life. Meanwhile, new technologies such as the Internet of Things and 6G have been proposed, leading to exponential growth in data volume. DL models currently suffer from several security issues, such as privacy issues during data collection and defense issues during model training and deployment. Sensitive data from users and special institutions that are used directly as training data for DL models may lead to information leakage and serious privacy problems. In addition, DL models have encountered many malicious attacks in the real world, such as poisoning attacks, exploratory attacks, and adversarial attacks, which cause model security problems. Therefore, this article...
Published in: IEEE Transactions on Artificial Intelligence ( Volume: 5, Issue: 4, April 2024)
Page(s): 1533 - 1553
Date of Publication: 12 September 2023
Electronic ISSN: 2691-4581

