Universal Perturbation Generation for Black-box Attack Using Evolutionary Algorithms


Abstract:

Image classifiers based on deep neural networks (DNNs) are vulnerable to tiny, imperceptible perturbations. Maliciously generated adversarial examples can exploit the instability of DNNs and mislead them into outputting wrong classification results. Prior works showed the transferability of adversarial perturbations between models and between images. In this work, we shed light on the combination of source/target misclassification, black-box attack, and universal perturbation by employing improved evolutionary algorithms. We additionally find that adversarial initialization improves the efficiency with which evolutionary algorithms find universal perturbations. Experiments demonstrate impressive misclassification rates and surprising transferability for the proposed attack method using different models trained on the CIFAR-10 and CIFAR-100 datasets. Our attack method also shows robustness against defensive measures such as adversarial training.
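The abstract describes searching for a single universal perturbation with an evolutionary algorithm, using only black-box queries and seeding the search with an adversarial initialization. The listing below is a minimal, illustrative sketch of such a search, assuming a simple (mu, lambda)-style strategy with Gaussian mutation and a fooling-rate fitness; query_model, the L-infinity budget eps, and all hyperparameters are placeholders and do not reproduce the paper's actual ("improved") algorithm or settings.

import numpy as np

def query_model(images):
    # Black-box oracle: returns predicted labels for a batch of images.
    # Stub for illustration; replace with queries to the target classifier.
    return np.random.randint(0, 10, size=len(images))

def fooling_rate(perturbation, images, labels, eps):
    # Fitness: fraction of images whose prediction changes under the perturbation.
    adv = np.clip(images + np.clip(perturbation, -eps, eps), 0.0, 1.0)
    return np.mean(query_model(adv) != labels)

def evolve_universal_perturbation(images, labels, eps=8 / 255,
                                  pop_size=20, n_parents=5,
                                  generations=100, sigma=0.01,
                                  init=None, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    shape = images.shape[1:]
    # Adversarial initialization: seed the population from a perturbation
    # crafted for a single image (or on a surrogate model), if provided.
    base = init if init is not None else rng.uniform(-eps, eps, size=shape)
    population = [np.clip(base + sigma * rng.standard_normal(shape), -eps, eps)
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate, keep the best candidates, and mutate them to form the next generation.
        scores = [fooling_rate(p, images, labels, eps) for p in population]
        order = np.argsort(scores)[::-1]
        parents = [population[i] for i in order[:n_parents]]
        population = [np.clip(parents[i % n_parents]
                              + sigma * rng.standard_normal(shape), -eps, eps)
                      for i in range(pop_size)]
    return max(population, key=lambda p: fooling_rate(p, images, labels, eps))

Seeding the population from an adversarial initialization, as the abstract reports, gives the search a starting point that already fools the model on at least one image, which is why it can find a universal perturbation in fewer black-box queries than starting from random noise.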
Date of Conference: 20-24 August 2018
Date Added to IEEE Xplore: 29 November 2018
Print on Demand (PoD) ISSN: 1051-4651
Conference Location: Beijing, China
