DOI: 10.1145/3616391.3622768

research-article

Adversarial Training Method for Machine Learning Model in a Resource-Constrained Environment

Published: 30 October 2023

ABSTRACT

Within the rapidly growing field of Adversarial Machine Learning (AML), researchers strive to build defense mechanisms that are robust against adversarial attacks. Such advances often demand significant computing resources. This paper presents an AML defense method designed explicitly for resource-constrained scenarios. The core strategy, Adversarial Training with Elastic Weight Consolidation (AT-EWC), reduces the computational demands of adversarial training, strengthening model robustness without compromising accuracy. This regularization technique deters overfitting to adversarial instances by adding a penalty term, derived from the Fisher Information Matrix, to the loss function. Our method emerges as a promising strategy for improving the security of machine learning systems, particularly in resource-limited environments.
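The abstract's core idea, adversarial training whose loss carries an EWC penalty weighted by a Fisher Information estimate, can be sketched in code. This is a minimal illustration only, assuming a logistic-regression model, an FGSM attack, and a diagonal Fisher approximation; all function names and hyperparameters here are illustrative and are not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    # Numerically stable logistic function.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def loss_grad(w, X, y):
    """Cross-entropy loss and its gradient w.r.t. the weights w."""
    p = sigmoid(X @ w)
    eps = 1e-12
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad_w = X.T @ (p - y) / len(y)
    return loss, grad_w

def fgsm(w, X, y, eps=0.1):
    """FGSM: perturb each input along the sign of its input gradient."""
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)          # d(loss)/dX, one row per example
    return X + eps * np.sign(grad_x)

def diagonal_fisher(w, X, y):
    """Diagonal (empirical) Fisher estimate from squared per-example grads."""
    p = sigmoid(X @ w)
    per_ex = (p - y)[:, None] * X        # per-example weight gradients
    return np.mean(per_ex ** 2, axis=0)

def at_ewc_step(w, w_star, fisher, X, y, lr=0.1, lam=1.0, eps=0.1):
    """One adversarial-training step with the EWC penalty
       L = L_adv + (lam / 2) * sum_i F_i * (w_i - w*_i)^2."""
    X_adv = fgsm(w, X, y, eps)
    _, g = loss_grad(w, X_adv, y)
    g += lam * fisher * (w - w_star)     # gradient of the EWC penalty term
    return w - lr * g

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
y = (X @ w_true > 0).astype(float)

# Phase 1: standard training yields the anchor weights w* and the Fisher.
w = np.zeros(5)
for _ in range(200):
    _, g = loss_grad(w, X, y)
    w -= 0.5 * g
w_star = w.copy()
fisher = diagonal_fisher(w_star, X, y)

# Phase 2: adversarial fine-tuning, anchored to w* by the EWC penalty.
for _ in range(50):
    w = at_ewc_step(w, w_star, fisher, X, y)

clean_acc = float(np.mean((sigmoid(X @ w) > 0.5) == y))
print(round(clean_acc, 2))
```

The EWC term pulls weights back toward the values learned on clean data, with the pull strongest along parameters the Fisher marks as important, which is one plausible reading of how such a penalty could limit overfitting to adversarial instances.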


Published in

Q2SWinet '23: Proceedings of the 19th ACM International Symposium on QoS and Security for Wireless and Mobile Networks
October 2023, 121 pages
ISBN: 9798400703683
DOI: 10.1145/3616391
General Chair: Ahmed Mostefaoui
Program Chair: Peng Sun

          Copyright © 2023 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

          Publisher

          Association for Computing Machinery

          New York, NY, United States




Acceptance Rate

Overall acceptance rate: 46 of 131 submissions, 35%
