ABSTRACT
Within the rapidly growing field of Adversarial Machine Learning (AML), researchers strive to build robust defenses against adversarial attacks, but such defenses typically demand significant computing resources. This paper presents an AML defense method designed explicitly for resource-constrained settings. The core strategy, Adversarial Training with Elastic Weight Consolidation (AT-EWC), reduces the computational cost of adversarial training while preserving model robustness and clean accuracy. The regularization deters overfitting to adversarial instances by adding a penalty term, weighted by the Fisher Information Matrix, to the loss function. Our method thus offers a promising way to strengthen the security of machine learning systems deployed in resource-limited environments.
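The combination described above can be sketched in a minimal form: adversarial examples are crafted with a gradient-sign perturbation (FGSM-style), and each weight update adds an EWC penalty whose per-weight strength comes from a diagonal Fisher Information estimate. The logistic-regression model, the helper names, and all hyperparameters below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_w(w, x, y):
    # gradient of binary cross-entropy w.r.t. the weights, one example
    return (sigmoid(x @ w) - y) * x

def grad_x(w, x, y):
    # gradient of the loss w.r.t. the input (used to craft the attack)
    return (sigmoid(x @ w) - y) * w

def fgsm(w, x, y, eps=0.1):
    # fast gradient sign perturbation bounded by eps per coordinate
    return x + eps * np.sign(grad_x(w, x, y))

def diag_fisher(w, X, Y):
    # diagonal Fisher Information estimate: mean squared per-example gradient
    return np.mean([grad_w(w, x, y) ** 2 for x, y in zip(X, Y)], axis=0)

def at_ewc_step(w, w_star, fisher, x, y, lr=0.1, lam=1.0, eps=0.1):
    # one adversarial-training step with an EWC penalty anchored at w_star:
    # loss = CE(x_adv) + (lam/2) * sum_i F_i * (w_i - w*_i)^2
    x_adv = fgsm(w, x, y, eps)
    g = grad_w(w, x_adv, y) + lam * fisher * (w - w_star)
    return w - lr * g
```

In this sketch the Fisher diagonal plays the role described in the abstract: weights that carry high information about the clean task are pulled back toward their anchor values `w_star`, so training on adversarial examples cannot drift them arbitrarily far.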