Research Article | Open Access | DOI: 10.1145/3374664.3375728

Explore the Transformation Space for Adversarial Images

Published: 16 March 2020

Abstract

Deep learning models are vulnerable to adversarial examples. Most current adversarial attacks add pixel-wise perturbations restricted to some \(L^p\)-norm, and defense models are also evaluated on adversarial examples restricted inside \(L^p\)-norm balls. However, we wish to explore adversarial examples that exist beyond \(L^p\)-norm balls and their implications for attacks and defenses. In this paper, we focus on adversarial images generated by transformations. We start with color transformation and propose two gradient-based attacks. Since the \(L^p\)-norm is inappropriate for measuring image quality in the transformation space, we use the similarity between transformations and the Structural Similarity Index. Next, we explore a larger transformation space consisting of combinations of color and affine transformations. We evaluate our transformation attacks on three data sets --- CIFAR10, SVHN, and ImageNet --- and their corresponding models. Finally, we perform retraining defenses to evaluate the strength of our attacks. The results show that transformation attacks are powerful. They find high-quality adversarial images that have higher transferability and misclassification rates than C&W's \(L^p\) attacks, especially at high confidence levels. They are also significantly harder to defend against by retraining than C&W's \(L^p\) attacks. More importantly, exploring different attack spaces makes it more challenging to train a universally robust model.
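
A minimal sketch of the idea behind the gradient-based color-transformation attacks summarized above (assuming a differentiable PyTorch classifier model; the per-channel scale-and-shift parameterization, step count, and regularization weight are illustrative assumptions, not the authors' exact formulation, which also covers affine transformations):

    import torch
    import torch.nn.functional as F

    def color_transform_attack(model, x, label, steps=200, lr=0.01, reg=0.1):
        # Optimize a per-channel color transform t(x) = a*x + b so that the
        # transformed image is misclassified, while keeping the transform
        # close to the identity (a = 1, b = 0) rather than bounding an
        # L^p-norm on pixel differences.
        #   model: differentiable classifier returning logits (assumed)
        #   x:     image tensor of shape (1, 3, H, W) with values in [0, 1]
        #   label: true class index, tensor of shape (1,)
        a = torch.ones(1, 3, 1, 1, requires_grad=True)
        b = torch.zeros(1, 3, 1, 1, requires_grad=True)
        opt = torch.optim.Adam([a, b], lr=lr)
        for _ in range(steps):
            x_adv = torch.clamp(a * x + b, 0.0, 1.0)       # keep a valid image
            loss = (-F.cross_entropy(model(x_adv), label)   # move away from the true class
                    + reg * ((a - 1).abs().sum() + b.abs().sum()))
            opt.zero_grad()
            loss.backward()
            opt.step()
        return torch.clamp(a * x + b, 0.0, 1.0).detach()

Image quality in this space is then judged not by an \(L^p\) distance but by the similarity between transformations and by the Structural Similarity Index of Wang et al. [22], \(\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}\), computed between the original and the transformed image.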

References

[1]
Tom B Brown, Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. 2017. Adversarial patch. arXiv preprint arXiv:1712.09665 (2017).
[2]
Nicholas Carlini and David Wagner. 2017a. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. ACM, 3--14.
[3]
Nicholas Carlini and David Wagner. 2017b. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 39--57.
[4]
Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. 2018. EAD: Elastic-net attacks to deep neural networks via adversarial examples. In Thirty-Second AAAI Conference on Artificial Intelligence.
[5]
Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. 2017. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. ACM, 15--26.
[6]
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 248--255.
[7]
Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, and Aleksander Madry. 2017. A rotation and a translation suffice: Fooling cnns with simple transformations. arXiv preprint arXiv:1712.02779 (2017).
[8]
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and Harnessing Adversarial Examples. CoRR, Vol. abs/1412.6572 (2014).
[9]
Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. 2015. Spatial transformer networks. In Advances in Neural Information Processing Systems. 2017--2025.
[10]
Can Kanbak, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. 2018. Geometric robustness of deep networks: analysis and improvement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 4441--4449.
[11]
Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. 2014. The CIFAR-10 dataset. Online: http://www.cs.toronto.edu/kriz/cifar.html (2014).
[12]
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017).
[13]
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. 2011. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature learning, Vol. 2011. 5.
[14]
Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. 2016. Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP). IEEE, 582--597.
[15]
Andras Rozsa, Ethan M Rudd, and Terrance E Boult. 2016. Adversarial diversity and hard positive generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 25--32.
[16]
Pouya Samangouei, Maya Kabkab, and Rama Chellappa. 2018. Defense-GAN: Protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605 (2018).
[17]
Mahmood Sharif, Lujo Bauer, and Michael K Reiter. 2018. On the suitability of Lp-norms for creating and preventing adversarial examples. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 1605--1613.
[18]
Yash Sharma and Pin-Yu Chen. 2017. Breaking the Madry Defense Model with $L_1$-based Adversarial Examples. arXiv preprint arXiv:1710.10733 (2017).
[19]
Yang Song, Rui Shu, Nate Kushman, and Stefano Ermon. 2018. Constructing unrestricted adversarial examples with generative models. In Advances in Neural Information Processing Systems. 8322--8333.
[20]
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2818--2826.
[21]
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013).
[22]
Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. 2004. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, Vol. 13, 4 (2004), 600--612.
[23]
Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, and Dawn Song. 2018. Spatially transformed adversarial examples. arXiv preprint arXiv:1801.02612 (2018).
[24]
Huan Zhang, Hongge Chen, Zhao Song, Duane Boning, Inderjit S Dhillon, and Cho-Jui Hsieh. 2019. The limitations of adversarial training and the blind-spot attack. arXiv preprint arXiv:1901.04684 (2019).

    Published In

    CODASPY '20: Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy
    March 2020
    392 pages
    ISBN:9781450371070
    DOI:10.1145/3374664

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. adversarial attacks
    2. deep learning security
    3. image transformation
