Mitigating Adversarial Attacks using Pruning
Recommendations
A hybrid adversarial training for deep learning model and denoising network resistant to adversarial examples
Abstract: Deep neural networks (DNNs) are vulnerable to adversarial attacks that generate adversarial examples by adding small perturbations to clean images. To combat adversarial attacks, the two main defense methods used are denoising and adversarial ...
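The attack model referenced in this abstract (adversarial examples crafted by adding small perturbations to clean images) can be made concrete with a minimal fast-gradient-sign-method (FGSM) sketch. This is an illustrative assumption, not code from any of the papers listed here; `model`, `image`, and `label` are hypothetical placeholders for a trained classifier, a normalized input batch, and its ground-truth labels.

```python
import torch.nn.functional as F

def fgsm_example(model, image, label, eps=0.03):
    """Craft an adversarial example by adding a small, gradient-aligned
    perturbation to a clean image (fast gradient sign method)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by eps in the direction that increases the loss.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

Defenses such as denoising and adversarial training are typically evaluated against perturbations of exactly this form, with `eps` bounding the per-pixel change.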
Mitigating the impact of adversarial attacks in very deep networks
Abstract: Deep Neural Network (DNN) models have vulnerabilities related to security concerns, with attackers usually employing complex hacking techniques to expose their structures. Data poisoning-enabled perturbation attacks are complex ...
Defense against Adversarial Attacks on Image Recognition Systems Using an Autoencoder
Abstract: Adversarial attacks on artificial neural network systems for image recognition are considered. To improve the security of image recognition systems against adversarial attacks (evasion attacks), the use of autoencoders is proposed. Various attacks ...
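As a rough illustration of this defense style, the sketch below shows a small denoising autoencoder used as a purification step in front of a classifier. The architecture (single-channel 28x28 inputs, a two-layer conv/deconv stack) is an assumption for illustration only and is not taken from the cited paper.

```python
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Purification front end: reconstructs an input image before it
    reaches the classifier, attenuating small adversarial perturbations."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        # Encode then decode; the bottleneck discards high-frequency
        # perturbation energy while keeping the image content.
        return self.decoder(self.encoder(x))
```

In this kind of pipeline the autoencoder is trained on (perturbed, clean) image pairs, and at inference time the classifier sees the reconstruction, e.g. `classifier(dae(x))`, rather than the raw input.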
Information
Published In
Publisher: Association for Computing Machinery, New York, NY, United States
Qualifiers
- Research-article
- Research
- Refereed limited