Abstract:
Despite achieving great success in multimedia analysis, especially in image recognition, deep neural networks (DNNs) can be easily fooled by maliciously crafted adversarial examples. An attacker can even launch a black-box adversarial attack by querying the target DNN model, without access to its internal structure or training set. In this work, we develop Schmidt Augmentation, an image augmentation method that better probes the decision boundaries of the black-box model. Schmidt Augmentation helps attackers induce larger accuracy drops on the MNIST and CIFAR-10 datasets. We also shed light on the harshest setting, in which the attacker only has access to samples of the target DNN's input distribution, by providing a labeling method based on semi-supervised learning instead of querying the target model.
Date of Conference: 23-27 July 2018
Date Added to IEEE Xplore: 11 October 2018