Abstract
In recent years, deep neural networks (DNNs) have achieved great success in machine learning and pattern recognition, and it has been shown that these networks can match or exceed human-level performance on difficult image recognition tasks. However, recent research has raised critical questions about the robustness and stability of these deep learning architectures. Specifically, they are prone to adversarial attacks, i.e., perturbations added to input images to fool the classifier, and trained models can be highly unstable under hyperparameter changes. In this work, we design a series of experiments on the CIFAR-10 dataset, spanning multiple deep learning architectures, varying adversarial attacks, and different class attribution methods, to study the effect of sparse regularization on the robustness (accuracy and stability) of deep neural networks. Our results both qualitatively demonstrate and empirically quantify the protection and stability that sparse representations lend to these models in the context of adversarial examples and class attribution.
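To make the setup concrete, the sketch below illustrates the two ingredients the abstract refers to: a one-step FGSM adversarial perturbation (Goodfellow et al.) and an L1 activation-sparsity penalty added to the training loss. This is a minimal PyTorch sketch, not the authors' implementation; the submodule names `model.features` and `model.classifier`, the budget `eps`, and the weight `lam` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8/255):
    # One-step FGSM: move x along the sign of the loss gradient,
    # then clamp back to the valid image range [0, 1].
    # eps = 8/255 is a commonly used CIFAR-10 budget (an assumption here).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def sparse_regularized_loss(model, x, y, lam=1e-4):
    # Cross-entropy plus an L1 penalty on hidden activations, which
    # encourages sparse representations. `model.features` and
    # `model.classifier` are assumed submodule names, not the paper's API.
    h = model.features(x)
    logits = model.classifier(torch.flatten(h, 1))
    return F.cross_entropy(logits, y) + lam * h.abs().mean()
```

In an evaluation loop of this kind, one would compare accuracy on `fgsm_attack` outputs for models trained with and without the L1 term; the paper additionally varies architectures, attacks, and attribution methods.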