DOI: 10.1145/3287624.3288750

ADMM attack: an enhanced adversarial attack for deep neural networks with undetectable distortions

Published: 21 January 2019

Abstract

Many recent studies demonstrate that state-of-the-art deep neural networks (DNNs) can be easily fooled by adversarial examples, which are generated by adding carefully crafted, visually imperceptible distortions to original legitimate inputs. Adversarial examples can lead a DNN to misclassify them as arbitrary target labels. Various methods have been proposed in the literature to minimize different lp norms of the distortion, but a versatile framework covering all types of adversarial attacks is still lacking. To gain a better understanding of the security properties of DNNs, we propose a general framework for constructing adversarial examples that leverages the Alternating Direction Method of Multipliers (ADMM) to split the optimization, enabling effective minimization of various lp norms of the distortion, including the l0, l1, l2, and l∞ norms. The proposed general framework thus unifies the methods for crafting l0, l1, l2, and l∞ attacks. Experimental results demonstrate that, compared with state-of-the-art attack methods, the proposed ADMM attacks achieve both a high attack success rate and minimal distortion for misclassification.
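
The core idea described above (splitting the distortion-norm term from the DNN misclassification loss via ADMM, then alternating between a proximal update for the norm and gradient steps for the network loss) can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's exact algorithm: it covers only the l2 case, uses a Carlini-Wagner-style targeted loss, and treats model, x (an input batch scaled to [0, 1]), target, and all hyperparameters as placeholders.

```python
# Hedged sketch of an ADMM split for an l2 adversarial attack (not the authors' exact method).
# Variable roles:
#   delta -- carries the l2-norm term; its subproblem has a closed-form (proximal) solution
#   z     -- carries the DNN attack-loss term; its subproblem is solved approximately by Adam
#   u     -- scaled dual variable enforcing the consensus constraint delta == z
import torch


def cw_loss(logits, target, kappa=0.0):
    """Carlini-Wagner-style targeted loss: push the target logit above every other logit."""
    target_logit = logits.gather(1, target.view(-1, 1)).squeeze(1)
    mask = torch.nn.functional.one_hot(target, logits.size(1)).bool()
    best_other = logits.masked_fill(mask, float("-inf")).max(dim=1).values
    return torch.clamp(best_other - target_logit + kappa, min=0.0).sum()


def admm_l2_attack(model, x, target, c=1.0, rho=1.0,
                   outer_iters=50, inner_iters=20, lr=0.01):
    """Alternate: closed-form prox for delta, Adam steps for z, dual ascent for u."""
    delta = torch.zeros_like(x)
    z = torch.zeros_like(x)
    u = torch.zeros_like(x)  # scaled dual variable
    for _ in range(outer_iters):
        # delta-update: argmin_d ||d||_2^2 + (rho/2)||d - z + u||^2 has the closed form below
        delta = (rho / (2.0 + rho)) * (z - u)
        # z-update: approximately minimize c*f(x+z) + (rho/2)||delta - z + u||^2
        z = z.detach().clone().requires_grad_(True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(inner_iters):
            opt.zero_grad()
            logits = model(torch.clamp(x + z, 0.0, 1.0))
            loss = c * cw_loss(logits, target) + 0.5 * rho * ((delta - z + u) ** 2).sum()
            loss.backward()
            opt.step()
        z = z.detach()
        # dual update keeps pushing delta and z toward agreement
        u = u + delta - z
    # at convergence delta and z agree; return the adversarial example built from z
    return torch.clamp(x + z, 0.0, 1.0)
```

Under this kind of split, the other attacks the abstract mentions would come from replacing the closed-form delta-update with the proximal operator of the l0, l1, or l∞ norm, while the z-update and dual update stay the same.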

Published In

ASPDAC '19: Proceedings of the 24th Asia and South Pacific Design Automation Conference
January 2019
794 pages
ISBN:9781450360074
DOI:10.1145/3287624
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

In-Cooperation

  • IEICE ESS: Institute of Electronics, Information and Communication Engineers, Engineering Sciences Society
  • IEEE CAS
  • IEEE CEDA
  • IPSJ SIG-SLDM: Information Processing Society of Japan, SIG System LSI Design Methodology

Publisher

Association for Computing Machinery

New York, NY, United States

Qualifiers

  • Research-article

Conference

ASPDAC '19

Acceptance Rates

Overall Acceptance Rate 466 of 1,454 submissions, 32%

Article Metrics

  • Downloads (Last 12 months): 12
  • Downloads (Last 6 weeks): 0
Reflects downloads up to 28 Feb 2025

Cited By

  • (2025) Friend-Guard Textfooler Attack on Text Classification System. IEEE Access, vol. 13, pp. 3841-3848. DOI: 10.1109/ACCESS.2021.3080680. Online publication date: 2025.
  • (2024) Adversarial Examples Detection With Bayesian Neural Network. IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 8, no. 5, pp. 3654-3664. DOI: 10.1109/TETCI.2024.3372383. Online publication date: Oct-2024.
  • (2024) AdvGuard: Fortifying Deep Neural Networks Against Optimized Adversarial Example Attack. IEEE Access, vol. 12, pp. 5345-5356. DOI: 10.1109/ACCESS.2020.3042839. Online publication date: 2024.
  • (2023) Dual-Targeted Textfooler Attack on Text Classification Systems. IEEE Access, vol. 11, pp. 15164-15173. DOI: 10.1109/ACCESS.2021.3121366. Online publication date: 2023.
  • (2022) Toward robust spiking neural network against adversarial perturbation. Proceedings of the 36th International Conference on Neural Information Processing Systems, pp. 10244-10256. DOI: 10.5555/3600270.3601014. Online publication date: 28-Nov-2022.
  • (2022) Optimized Adversarial Example With Classification Score Pattern Vulnerability Removed. IEEE Access, vol. 10, pp. 35804-35813. DOI: 10.1109/ACCESS.2021.3110473. Online publication date: 2022.
  • (2021) Augmented Lagrangian Adversarial Attacks. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 7718-7727. DOI: 10.1109/ICCV48922.2021.00764. Online publication date: Oct-2021.
  • (2020) Adversarial T-Shirt! Evading Person Detectors in a Physical World. Computer Vision – ECCV 2020, pp. 665-681. DOI: 10.1007/978-3-030-58558-7_39. Online publication date: 29-Oct-2020.
  • (2019) On the Design of Black-Box Adversarial Examples by Leveraging Gradient-Free Optimization and Operator Splitting Method. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 121-130. DOI: 10.1109/ICCV.2019.00021. Online publication date: Oct-2019.
