Research Article
DOI: 10.1145/3240765.3240791

DeepFense: Online Accelerated Defense Against Adversarial Deep Learning

Published: 05 November 2018

Abstract

Recent advances in adversarial Deep Learning (DL) have opened up a largely unexplored surface for malicious attacks that jeopardize the integrity of autonomous DL systems. With the widespread use of DL in critical and time-sensitive applications, including unmanned vehicles, drones, and video surveillance systems, online detection of malicious inputs is of utmost importance. We propose DeepFense, the first end-to-end automated framework that simultaneously enables efficient and safe execution of DL models. DeepFense formalizes the goal of thwarting adversarial attacks as an optimization problem that minimizes the rarely observed regions in the latent feature space spanned by a DL network. To solve this minimization problem, a set of complementary but disjoint modular redundancies is trained to validate the legitimacy of input samples in parallel with the victim DL model. DeepFense leverages hardware/software/algorithm co-design and customized acceleration to achieve just-in-time performance in resource-constrained settings. The proposed countermeasure is unsupervised, meaning that no adversarial samples are used to train the modular redundancies. We further provide an accompanying API to reduce non-recurring engineering cost and ensure automated adaptation to various platforms. Extensive evaluations on FPGAs and GPUs demonstrate up to two orders of magnitude performance improvement while enabling online adversarial-sample detection.
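
To make the abstract's mechanism concrete, below is a minimal sketch of the latent-space detection idea, not the authors' implementation: an unsupervised modular redundancy models the distribution of latent features observed on legitimate data and flags inputs that land in rarely observed regions. The class names (VictimNet, LatentRedundancy), the Mahalanobis-distance density model, and the quantile-based threshold calibration are all illustrative assumptions.

```python
# Hedged sketch of the latent-space defense idea described in the abstract.
# Assumptions (not from the paper): a toy victim network, a single modular
# redundancy realized as a Mahalanobis-distance detector over latent
# features, and a detection threshold calibrated on clean data only.
import torch
import torch.nn as nn

class VictimNet(nn.Module):
    """Stand-in victim classifier whose hidden layer the redundancy probes."""
    def __init__(self, in_dim=784, hidden=128, classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, classes)

    def forward(self, x):
        z = self.features(x)  # latent features checked by the redundancy
        return self.head(z), z

class LatentRedundancy:
    """Unsupervised detector: flags latent vectors in low-density regions."""
    def fit(self, latents, quantile=0.99):
        # Fit a Gaussian to clean latents; no adversarial samples are used.
        self.mean = latents.mean(dim=0)
        centered = latents - self.mean
        cov = centered.T @ centered / (latents.shape[0] - 1)
        self.precision = torch.linalg.pinv(cov)
        # Calibrate the rejection threshold from clean-data distances.
        self.threshold = torch.quantile(self._dist(latents), quantile)

    def _dist(self, latents):
        c = latents - self.mean
        return ((c @ self.precision) * c).sum(dim=-1)  # squared Mahalanobis

    def is_adversarial(self, latents):
        return self._dist(latents) > self.threshold

# Usage with synthetic stand-in data: fit on clean latents, then screen new
# inputs in parallel with the victim's forward pass.
victim = VictimNet()
with torch.no_grad():
    _, clean_z = victim(torch.randn(1024, 784))
redundancy = LatentRedundancy()
redundancy.fit(clean_z)

with torch.no_grad():
    logits, z = victim(torch.randn(8, 784))
print(redundancy.is_adversarial(z))  # True entries mark rare latent regions
```

In the paper's setting, several complementary, disjoint redundancies validate inputs in parallel with the victim model under hardware acceleration; the sketch above shows a single detector only.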

Published In

2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)
November 2018, 939 pages
Publisher: IEEE Press

Cited By

• (2024) A Hybrid Sparse-dense Defensive DNN Accelerator Architecture against Adversarial Example Attacks. ACM Transactions on Embedded Computing Systems 23(5):1–28. https://doi.org/10.1145/3677318
• (2024) EnsGuard: A Novel Acceleration Framework for Adversarial Ensemble Learning. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 43(10):3088–3101. https://doi.org/10.1109/TCAD.2024.3390031
• (2024) Real-Time Robust Video Object Detection System Against Physical-World Adversarial Attacks. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 43(1):366–379. https://doi.org/10.1109/TCAD.2023.3305932
• (2024) Camo-DNN: Layer Camouflaging to Protect DNNs against Timing Side-Channel Attacks. 2024 IEEE 30th International Symposium on On-Line Testing and Robust System Design (IOLTS), pp. 1–7. https://doi.org/10.1109/IOLTS60994.2024.10616065
• (2024) How to Defend and Secure Deep Learning Models Against Adversarial Attacks in Computer Vision: A Systematic Review. New Generation Computing 42(5):1165–1235. https://doi.org/10.1007/s00354-024-00283-0
• (2023) Systemization of Knowledge: Robust Deep Learning using Hardware-software co-design in Centralized and Federated Settings. ACM Transactions on Design Automation of Electronic Systems 28(6):1–32. https://doi.org/10.1145/3616868
• (2023) Feature Distillation in Deep Attention Network Against Adversarial Examples. IEEE Transactions on Neural Networks and Learning Systems 34(7):3691–3705. https://doi.org/10.1109/TNNLS.2021.3113342
• (2023) EAM: Ensemble of Approximate Multipliers for Robust DNNs. Microprocessors and Microsystems, article 104800. https://doi.org/10.1016/j.micpro.2023.104800
• (2023) Nacc-Guard: a lightweight DNN accelerator architecture for secure deep learning. The Journal of Supercomputing 80(5):5815–5831. https://doi.org/10.1007/s11227-023-05671-9
• (2022) A Globally-Connected and Trainable Hierarchical Fine-Attention Generative Adversarial Network based Adversarial Defense. Proceedings of the Thirteenth Indian Conference on Computer Vision, Graphics and Image Processing, pp. 1–9. https://doi.org/10.1145/3571600.3571615
