
CIraCLoss: Intra-class Distance Loss Makes CNN Robust

Published: 4 February 2022

ABSTRACT

Convolutional neural networks (CNNs) are vulnerable to adversarial examples, which threatens their deployment in some application scenarios. This paper proposes a novel loss function, CIraCLoss, that improves the robustness of CNNs by combining an intra-class distance loss (IntraCLoss) with the cross-entropy loss (CELoss). During training, IntraCLoss encourages each feature extracted by the CNN to lie close to its intra-class center. Under this feature-space distribution, an adversarial example requires a larger attack intensity to push its feature away from the intra-class center, so IntraCLoss makes the CNN more robust against adversarial attacks. Results on the CIFAR10 and MNIST datasets show that CIraCLoss, whose effect is driven mainly by IntraCLoss, reduces the Davies-Bouldin index (DBI) of the feature space and lowers the fooling rates of the attacked models. In addition, the method can be applied to different network architectures and generalizes well.
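The abstract does not give the exact form of the loss, but the description (each feature pulled toward its intra-class center, combined with cross-entropy) matches a center-loss-style formulation. The sketch below is a minimal PyTorch illustration under that assumption; the class names, the learnable-center parameterization, and the weighting factor lam are illustrative choices, not the paper's definitions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class IntraClassDistanceLoss(nn.Module):
    """Pulls each feature vector toward a learnable center of its own class."""

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # one learnable center per class (random initialization is an assumption)
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # mean squared Euclidean distance between each feature and its class center
        centers_batch = self.centers[labels]            # (B, feat_dim)
        return ((features - centers_batch) ** 2).sum(dim=1).mean()


class CIraCLossSketch(nn.Module):
    """Hypothetical combination: cross-entropy plus a weighted intra-class term."""

    def __init__(self, num_classes: int, feat_dim: int, lam: float = 0.1):
        super().__init__()
        self.intra = IntraClassDistanceLoss(num_classes, feat_dim)
        self.lam = lam  # balancing weight between CELoss and IntraCLoss (assumed)

    def forward(self, logits, features, labels):
        return F.cross_entropy(logits, labels) + self.lam * self.intra(features, labels)


# Toy usage with CIFAR10-like shapes: logits and penultimate-layer features from a CNN
criterion = CIraCLossSketch(num_classes=10, feat_dim=64, lam=0.1)
logits = torch.randn(8, 10)
features = torch.randn(8, 64)
labels = torch.randint(0, 10, (8,))
loss = criterion(logits, features, labels)
loss.backward()
```

In this reading, the network returns both the logits and the penultimate-layer features so that both terms can be computed, and lam trades classification accuracy against feature-space compactness, which is what the paper credits for the improved robustness.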


Published in

ICCPR '21: Proceedings of the 2021 10th International Conference on Computing and Pattern Recognition
October 2021, 393 pages
ISBN: 9781450390439
DOI: 10.1145/3497623
Copyright © 2021 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
