
CMMR: A Composite Multidimensional Models Robustness Evaluation Framework for Deep Learning

  • Conference paper
Algorithms and Architectures for Parallel Processing (ICA3PP 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14491)

Abstract

Accurately evaluating defense models against adversarial examples has proven to be a challenging task. Mainstream evaluation standards share a key limitation: they fail to account for the discrepancies in evaluation results that arise from different adversarial attack methods, experimental setups, and metric sets. To address these disparities, we propose the Composite Multidimensional Model Robustness (CMMR) evaluation framework, which integrates three evaluation dimensions: attack methods, experimental settings, and metric sets. By comprehensively evaluating a model's robustness across these dimensions, CMMR effectively mitigates the aforementioned variations. Furthermore, the framework allows evaluators to flexibly define their own options for each evaluation dimension to meet their specific requirements. We provide practical examples demonstrating how CMMR can be used to assess models whose robustness has been enhanced through various approaches. The reliability of our methodology is assessed through both practical experiments and theoretical validation. The experimental results demonstrate the strong reliability of the CMMR framework and show that it significantly reduces the variations encountered when evaluating model robustness in practical scenarios.
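To make the three evaluation dimensions concrete, the sketch below shows a minimal, hypothetical CMMR-style evaluation loop in Python. Everything in it is an illustrative assumption rather than the paper's implementation: the `Setting` class, the attack labels, the placeholder metric functions, and the unweighted mean that stands in for whatever composite aggregation rule CMMR actually defines. The point is only the structure: evaluators supply their own options along each dimension, and the framework scores the model over every combination.

```python
# Minimal, hypothetical sketch of a CMMR-style evaluation loop.
# All names and values here are illustrative assumptions, not the
# paper's actual implementation or aggregation rule.
from dataclasses import dataclass
from itertools import product
from statistics import mean
from typing import Callable, Dict, List

@dataclass
class Setting:
    """One experimental setting, e.g. a perturbation budget and step count."""
    epsilon: float  # L-inf perturbation budget
    steps: int      # number of attack iterations

# Dimension 1: attack methods (labels only; real attacks would be plugged in).
ATTACKS: List[str] = ["FGSM", "PGD", "CW"]

# Dimension 2: experimental settings.
SETTINGS: List[Setting] = [Setting(8 / 255, 10), Setting(16 / 255, 20)]

# Dimension 3: metric set. Each metric maps (model, attack, setting) to a
# score; the lambdas are stubs for real measurements such as robust accuracy.
METRICS: Dict[str, Callable[[object, str, Setting], float]] = {
    "robust_accuracy": lambda model, atk, s: 0.0,
    "attack_success_rate": lambda model, atk, s: 0.0,
}

def evaluate_cmmr(model: object) -> float:
    """Score the model over every (attack, setting, metric) combination."""
    scores = [
        metric(model, atk, setting)
        for atk, setting in product(ATTACKS, SETTINGS)
        for metric in METRICS.values()
    ]
    # Placeholder aggregation: an unweighted mean of all per-combination
    # scores; CMMR's real composite rule is defined in the paper.
    return mean(scores)

print(evaluate_cmmr(model=None))  # 0.0 with the stub metrics above
```

Replacing the stubs with real attack implementations and metric functions (for example, from an adversarial robustness library) would turn this skeleton into a usable harness while preserving the three-dimensional structure.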



Acknowledgements

This work was supported in part by the National Key Research and Development Program of China under Grants 2022YFC3400404 and 2021YFB3101201, the National Science Foundation of China under Grants 62172154 and 62372473, the Hunan Provincial Natural Science Foundation of China under Grant 2023JJ30702, and the Changsha Municipal Natural Science Foundation under Grant kq2208283. The authors are grateful for resources from the High Performance Computing Center of Central South University.

Author information


Corresponding author

Correspondence to Shigeng Zhang.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Liu, W., Zhang, S., Wang, W., Zhang, J., Liu, X. (2024). CMMR: A Composite Multidimensional Models Robustness Evaluation Framework for Deep Learning. In: Tari, Z., Li, K., Wu, H. (eds) Algorithms and Architectures for Parallel Processing. ICA3PP 2023. Lecture Notes in Computer Science, vol 14491. Springer, Singapore. https://doi.org/10.1007/978-981-97-0808-6_14


  • DOI: https://doi.org/10.1007/978-981-97-0808-6_14

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-0807-9

  • Online ISBN: 978-981-97-0808-6

  • eBook Packages: Computer Science, Computer Science (R0)
