DOI: 10.1145/3240765.3267502

research-article

Efficient Utilization of Adversarial Training towards Robust Machine Learners and its Analysis

Published: 05 November 2018

Abstract

Advancements in machine learning have led to its adoption in numerous applications, ranging from computer vision to security. Despite these advancements, the vulnerabilities of machine learning techniques are also being exploited. Adversarial samples are inputs generated by adding carefully crafted perturbations to normal input samples. This work presents an overview of techniques for generating adversarial samples and of defenses that make classifiers robust. Furthermore, adversarial learning, its effective utilization to enhance robustness, and the constraints required to do so are studied experimentally, achieving up to 97.65% accuracy even against the Carlini-Wagner (CW) attack. Although adversarial learning improves robustness, this work also shows that it can itself be further exploited to introduce vulnerabilities.
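For concreteness, the sketch below (not the authors' implementation) illustrates the two ingredients the abstract refers to: generating an adversarial sample with the fast gradient sign method (FGSM) of ref. [10], and an adversarial-training step that mixes clean and adversarial loss. The model, optimizer, epsilon, mixing weight alpha, and the random data are stand-in assumptions; the paper's own experiments use the MNIST and Fashion-MNIST setups of refs. [13], [32]-[34].

# Hedged sketch: FGSM sample generation and one adversarial-training step.
# All names, hyperparameters, and the toy model below are illustrative
# assumptions, not the authors' code.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, eps, loss_fn):
    """Craft x_adv = clip(x + eps * sign(grad_x L(model(x), y)))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()          # gradient w.r.t. the input
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.25, alpha=0.5):
    """One update on a mix of clean and adversarial loss (alpha is an assumed mixing weight)."""
    loss_fn = nn.CrossEntropyLoss()
    x_adv = fgsm_perturb(model, x, y, eps, loss_fn)
    optimizer.zero_grad()                        # clear gradients left over from the FGSM backward pass
    loss = alpha * loss_fn(model(x), y) + (1 - alpha) * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy 784-input classifier standing in for an MNIST MLP (cf. ref. [34]).
    model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                          nn.Linear(128, 10))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(32, 1, 28, 28)                # fake batch of "images" in [0, 1]
    y = torch.randint(0, 10, (32,))              # fake labels
    print(adversarial_training_step(model, opt, x, y))

Stronger attacks such as CW would replace the one-step FGSM perturbation with an optimization-based one, but the training step above stays the same.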

References

[1]
M. Wess, S.M.P. Dinakarrao, and A. Jantsch, “Weighted quantization-regularization in DNNs for weight memory minimization towards HW implementation,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2018.
[2]
E. Ackerman, “How drive.ai is mastering autonomous driving with deep learning,” accessed August 2018. [Online]. Available: https://spectrum.ieee.org/cars-that-think/transportation/self-driving/how-driveai-is-mastering-autonomous-driving-with-deep-learning
[3]
A. Krizhevsky, I. Sutskever, and G.E. Hinton, “Imagenet classification with deep convolutional neural networks,” in International Conference on Neural Information Processing Systems, 2012.
[4]
J. Demme, M. Maycock, J. Schmitz, A. Tang, A. Waksman, S. Sethumadhavan, and S. Stolfo, “On the feasibility of online malware detection with performance counters,” in International Symposium on Computer Architecture, 2013.
[5]
M. Chiappetta, E. Savas, and C. Yilmaz, “Real time detection of cache-based side-channel attacks using hardware performance counters,” Appl. Soft Comput., vol. 49, no. C, Dec 2016.
[6]
K.N. Khasawneh, M. Ozsoy, C. Donovick, N. Abu-Ghazaleh, and D. Ponomarev, “EnsembleHMD: Accurate hardware malware detectors with specialized ensemble classifiers,” 2018.
[7]
F. Brasser et al., “Hardware-assisted security: Understanding security vulnerabilities and emerging attacks for better defenses,” in International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES), 2018.
[8]
H. Sayadi, N. Patel, P.D.S. Manoj, A. Sasan, S. Rafatirad, and H. Homayoun, “Ensemble learning for hardware-based malware detection: A comprehensive analysis and classification,” in ACM/EDAA/IEEE Design Automation Conference, 2018.
[9]
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I.J. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” in International Conference on Learning Representations (ICLR), 2014.
[10]
I.J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” in International Conference on Learning Representations (ICLR), 2015.
[11]
N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z.B. Celik, and A. Swami, “The limitations of deep learning in adversarial settings,” in IEEE European Symposium on Security and Privacy (Euro S&P), 2016.
[12]
Y. Liu, X. Chen, C. Liu, and D. Song, “Delving into transferable adversarial examples and black-box attacks,” in International Conference on Learning Representations (ICLR), 2017.
[13]
Y. LeCun, C. Cortes, and C.J. Burges, “MNIST digit dataset,” accessed August 2018. [Online]. Available: http://yann.lecun.com/exdb/mnist/
[14]
N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma, “Adversarial classification,” in ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2004.
[15]
D. Lowd and C. Meek, “Adversarial learning,” in ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, 2005.
[16]
T. Matsumoto, H. Matsumoto, K. Yamada, and S. Hoshino, “Impact of artificial ‘gummy’ fingers on fingerprint systems,” vol. 26, Apr. 2002.
[17]
B. Nelson, M. Barreno, F.J. Chi, A.D. Joseph, B.I.P. Rubinstein, U. Saini, C. Sutton, J.D. Tygar, and K. Xia, “Exploiting machine learning to subvert your spam filter,” in Usenix Workshop on Large-Scale Exploits and Emergent Threats, 2008.
[18]
B.I. Rubinstein, B. Nelson, L. Huang, A.D. Joseph, S.-H. Lau, S. Rao, N. Taft, and J.D. Tygar, “ANTIDOTE: Understanding and defending against poisoning of anomaly detectors,” in ACM SIGCOMM Conference on Internet Measurement, 2009.
[19]
B. Biggio, B. Nelson, and P. Laskov, “Poisoning attacks against support vector machines,” in International Conference on Machine Learning, 2012.
[20]
H. Xiao, B. Biggio, G. Brown, G. Fumera, C. Eckert, and F. Roli, “Is feature selection secure against training data poisoning?” in International Conference on Machine Learning, 2015.
[21]
L. Muñoz-González, B. Biggio, A. Demontis, A. Paudice, V. Wongrassamee, E. Lupu, and F. Roli, “Towards poisoning of deep learning algorithms with back-gradient optimization,” in ACM Workshop on Artificial Intelligence and Security, 2017.
[22]
U. Shaham, Y. Yamada, and S. Negahban, “Understanding adversarial training: increasing local stability of neural nets through robust optimization,” ArXiv e-prints, 2015.
[23]
A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” in International Conference on Learning Representations, 2017.
[24]
Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li, “Boosting adversarial attacks with momentum,” in Neural Information Processing Systems Conference, 2017.
[25]
N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami, “Distillation as a defense to adversarial perturbations against deep neural networks,” in IEEE Symposium on Security and Privacy (S&P), 2016.
[26]
S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, “Deepfool: a simple and accurate method to fool deep neural networks,” CoRR, vol. abs/1511.04599, 2015.
[27]
N. Carlini and D. Wagner, “Towards evaluating the robustness of neural networks,” in IEEE Symposium on Security and Privacy (SP), 2017.
[28]
G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” ArXiv e-prints, 2015.
[29]
D. Meng and H. Chen, “Magnet: a two-pronged defense against adversarial examples,” in ACM Conference on Computer and Communications Security (CCS), 2017.
[30]
K. Grosse, P. Manoharan, N. Papernot, M. Backes, and P.D. McDaniel, “On the (statistical) detection of adversarial examples,” CoRR, vol. abs/1702.06280, 2017.
[31]
J.H. Metzen, T. Genewein, V. Fischer, and B. Bischoff, “On detecting adversarial perturbations,” in International Conference on Learning Representations, 2017.
[32]
Zalando Research, “Fashion-MNIST dataset,” accessed August 2018. [Online]. Available: https://github.com/zalandoresearch/fashion-mnist
[33]
N. Papernot et al., “Technical report on the CleverHans v2.1.0 adversarial examples library,” arXiv preprint, 2018.
[34]
keras, “MNIST model,” accessed August 2018. [Online]. Available: https://github.com/keras-team/keras/blob/master/examples/mnist_mlp.py
[35]
Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang, “Random erasing data augmentation,” arXiv preprint, 2017.

Cited By

  • (2024) “Disturbance rejection with compensation on features,” Pattern Recognition, vol. 147, article 110129, doi: 10.1016/j.patcog.2023.110129, March 2024.
  • (2023) “CAPTIVE: Constrained Adversarial Perturbations to Thwart IC Reverse Engineering,” Information, vol. 14, no. 12, article 656, doi: 10.3390/info14120656, December 2023.
  • (2021) “Energy-Efficient and Adversarially Robust Machine Learning with Selective Dynamic Band Filtering,” in Proceedings of the 2021 Great Lakes Symposium on VLSI, pp. 195-200, doi: 10.1145/3453688.3461756, June 2021.

Published In

2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)
Nov 2018, 939 pages

Publisher

IEEE Press
