ABSTRACT
As neural networks have achieved remarkable results in image classification, a variety of adversarial attack methods have appeared that interfere with them. An adversarial sample applies a tiny perturbation to the original image that is barely perceptible to the human eye yet causes the neural network to produce a large classification error. In recent years, many articles have contributed to adversarial sample attack and defense, aiming to maximize the classification error while minimizing the perturbation. However, previous attacks operate in the spatial domain. We find that attacking different components of the frequency domain separately is more effective. The contributions of this article are: (1) we compute the gradient of the image-classification neural network with respect to the discrete Fourier transform of the input; (2) we design a stationary filter that generates the adversarial sample according to the frequency component and the gradient; (3) we conduct experiments showing that the adversarial samples generated by our method achieve the same attack effect while remaining closer to the original image.
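The following is a minimal sketch, in PyTorch, of the general idea described above: take an FGSM-style sign-of-gradient step in the frequency domain rather than the spatial domain, with a fixed radial mask that weights low- and high-frequency components differently. The function name `frequency_domain_attack`, the budget `eps`, the 0.25 radial cut-off, and the `low_freq_weight` factor are illustrative assumptions, not the paper's exact filter design.

```python
import torch
import torch.nn.functional as F

def frequency_domain_attack(model, image, label, eps=0.03, low_freq_weight=0.5):
    """Illustrative FGSM-style step taken on the 2-D DFT of the input.

    `model` is any differentiable classifier; `image` has shape (N, C, H, W)
    with values in [0, 1]. The cut-off and weights are assumptions.
    """
    # Treat the complex spectrum as the variable we differentiate with respect to.
    spectrum = torch.fft.fft2(image).detach().requires_grad_(True)

    # Map back to the spatial domain so the classifier sees an ordinary image.
    recon = torch.fft.ifft2(spectrum).real
    loss = F.cross_entropy(model(recon), label)
    loss.backward()

    # Stationary radial mask that weights low and high frequencies differently.
    _, _, h, w = image.shape
    fy = torch.fft.fftfreq(h, device=image.device).abs().view(-1, 1)
    fx = torch.fft.fftfreq(w, device=image.device).abs().view(1, -1)
    radius = torch.sqrt(fx ** 2 + fy ** 2)
    mask = torch.ones_like(radius)
    mask[radius < 0.25] = low_freq_weight  # assumed cut-off between components

    # Sign-of-gradient step per frequency component, then inverse DFT.
    perturbed = spectrum + eps * mask * torch.sgn(spectrum.grad)
    adv = torch.fft.ifft2(perturbed).real
    return adv.clamp(0.0, 1.0).detach()
```

Under these assumed weights, a call such as `adv = frequency_domain_attack(model, x, y)` perturbs the low-frequency components less than the high-frequency ones, which is one way to keep the adversarial sample visually close to the original image.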