ABSTRACT
Ensuring the robustness of deep neural networks (DNNs) is both critical and challenging. In this paper, we propose Styx, a general data-oriented mutation framework for improving DNN robustness. Styx generates new training data by slightly mutating the existing training data. In this way, Styx preserves the DNN's accuracy on the test dataset while improving its tolerance to small perturbations, i.e., its robustness. We have instantiated Styx for image classification and proposed pixel-level mutation rules that are applicable to any image classification DNN. We have applied Styx to several commonly used benchmarks and compared it with representative adversarial training methods. The preliminary experimental results indicate the effectiveness of Styx.
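To make the idea of pixel-level data mutation concrete, the following is a minimal sketch of what such a mutation rule could look like. It is an illustration only, not the paper's actual rules: the function names (`pixel_mutate`, `augment`) and the parameters (`num_pixels`, `max_delta`) are hypothetical choices, assuming 8-bit images represented as NumPy arrays and labels that stay unchanged under small perturbations.

```python
import numpy as np

def pixel_mutate(image, num_pixels=5, max_delta=10, rng=None):
    """Slightly mutate an image by shifting a few pixel intensities.

    A hypothetical pixel-level mutation rule in the spirit of Styx:
    pick a handful of pixel positions and perturb their values by a
    small bounded amount, keeping everything in the valid [0, 255]
    range so the image stays visually almost identical.
    """
    rng = np.random.default_rng() if rng is None else rng
    mutated = image.astype(np.int16).copy()  # avoid uint8 wrap-around
    h, w = image.shape[:2]
    for _ in range(num_pixels):
        y, x = rng.integers(0, h), rng.integers(0, w)
        delta = rng.integers(-max_delta, max_delta + 1)
        mutated[y, x] = np.clip(mutated[y, x] + delta, 0, 255)
    return mutated.astype(image.dtype)

def augment(images, labels, copies=1, rng=None):
    """Extend a training set with mutated copies; labels are reused,
    since the mutations are small enough to preserve the class."""
    new_images = [pixel_mutate(img, rng=rng)
                  for img in images for _ in range(copies)]
    new_labels = [lbl for lbl in labels for _ in range(copies)]
    return images + new_images, labels + new_labels
```

The mutated images keep their original labels, so retraining on the augmented set can improve tolerance to small perturbations without altering the test-set distribution the model is evaluated on.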