Abstract
Deep neural networks (DNNs) have achieved extraordinary success in many fields such as image classification. However, they are vulnerable to adversarial examples, generated by adding slight perturbations to input images, which lead to incorrect classification results. Given the serious threat posed by adversarial examples, it is necessary to find a simple and practical way to defend against adversarial attacks. In this paper, we present an efficient preprocessing method called War (WebP compression and resizing operation) for defending against adversarial examples. WebP compression is first performed on the input sample to remove the imperceptible perturbations from the adversarial example. Then, the compressed image is appropriately resized to further destroy the specific structure of the adversarial perturbations. Finally, we obtain a clean sample that can be correctly classified by the model. Extensive experiments show that our method outperforms state-of-the-art defense methods. It can effectively defend against adversarial attacks while ensuring that classification accuracy on normal samples drops only slightly. Moreover, it requires a particularly short preprocessing time.
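The two-step pipeline described above can be sketched with Pillow. This is a minimal illustration, not the paper's implementation: it assumes a Pillow build with WebP support, and the `quality` and `scale` parameters are placeholder values rather than the tuned settings from the paper.

```python
from io import BytesIO
from PIL import Image

def war_preprocess(img: Image.Image, quality: int = 70, scale: float = 0.9) -> Image.Image:
    """Sketch of the War pipeline: WebP compression, then resizing.

    `quality` and `scale` are illustrative; the abstract does not
    specify the values used in the paper's experiments.
    """
    # Step 1: lossy WebP compression discards the imperceptible,
    # high-frequency components that carry adversarial perturbations.
    buf = BytesIO()
    img.save(buf, format="WEBP", quality=quality)
    buf.seek(0)
    compressed = Image.open(buf).convert("RGB")

    # Step 2: downscale and restore the original size; the
    # interpolation further breaks the spatial structure of any
    # remaining perturbation.
    w, h = compressed.size
    small = compressed.resize(
        (max(1, int(w * scale)), max(1, int(h * scale))), Image.BILINEAR
    )
    return small.resize((w, h), Image.BILINEAR)

# Example on a synthetic image; a real defense would apply this to
# each input before feeding it to the classifier.
img = Image.new("RGB", (224, 224), (128, 64, 32))
out = war_preprocess(img)
print(out.size)
```

Because both steps are model-agnostic, the preprocessing can be dropped in front of any classifier without retraining, which is what makes this class of defense cheap to deploy.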
This research work is partly supported by the National Natural Science Foundation of China (Grants 61872003, U1636206).
Copyright information
© 2020 Springer Nature Switzerland AG
Yin, Z., Wang, H., Wang, J. (2020). War: An Efficient Pre-processing Method for Defending Adversarial Attacks. In: Chen, X., Yan, H., Yan, Q., Zhang, X. (eds) Machine Learning for Cyber Security. ML4CS 2020. Lecture Notes in Computer Science, vol 12487. Springer, Cham. https://doi.org/10.1007/978-3-030-62460-6_46