Robustness Verification of Multi-label Neural Network Classifiers

  • Conference paper
Static Analysis (SAS 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14995)


Abstract

Multi-label neural networks are important in various tasks, including safety-critical tasks. Several works show that these networks are susceptible to adversarial attacks, which can remove a target label from the predicted label list or add a target label to this list. To date, no deterministic verifier determines the list of labels for which a multi-label neural network is locally robust. The main challenge is that the complexity of the analysis increases by a factor exponential in the product of the number of labels and the number of predicted labels. We propose MuLLoC, a sound and complete robustness verifier for multi-label image classifiers that determines the robust labels in a given neighborhood of inputs. To scale the analysis, MuLLoC relies on fast optimistic queries to the network or to a constraint solver. Its queries include sampling and pairwise relation analysis via numerical optimization and mixed-integer linear programming (MILP). For the remaining unclassified labels, MuLLoC performs an exact analysis by a novel mixed-integer programming (MIP) encoding for multi-label classifiers. We evaluate MuLLoC on convolutional networks for three multi-label image datasets. Our results show that MuLLoC classifies all labels as robust or not within 23.22 min on average and that our fast optimistic queries classify 96.84% of the labels.
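The two-stage idea in the abstract, cheap optimistic queries that can only refute robustness, followed by an exact analysis for the labels they leave unclassified, can be sketched on a toy linear multi-label scorer. Everything below is hypothetical: the linear model, the "score > 0 means label predicted" rule, and the closed-form exact stage merely stand in for the paper's convolutional networks and MIP encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a multi-label classifier: one linear score per label,
# and a label is predicted iff its score is positive. (Illustrative only;
# MuLLoC targets convolutional networks and uses a MIP encoding instead.)
W = rng.normal(size=(5, 8))   # 5 labels, 8 input features
b = rng.normal(size=5)

def scores(x):
    return W @ x + b

def sampling_query(x, eps, n_samples=1000):
    """Optimistic query: sample the L-infinity ball of radius eps around x
    and flag every label whose predicted membership flips on some sample.
    A flagged label is provably non-robust; an unflagged one stays open."""
    base = scores(x) > 0
    flipped = np.zeros_like(base)
    for _ in range(n_samples):
        xp = x + rng.uniform(-eps, eps, size=x.shape)
        flipped |= (scores(xp) > 0) != base
    return flipped

def exact_verify(x, eps):
    """Exact stage, feasible in closed form only because the toy model is
    linear: each score's range over the ball is [s - eps*||w||_1,
    s + eps*||w||_1], so a label is robust iff its score cannot change sign."""
    s = scores(x)
    slack = eps * np.abs(W).sum(axis=1)
    return (s - slack > 0) | (s + slack < 0)

x = rng.normal(size=8)
non_robust = sampling_query(x, eps=0.1)   # refutes robustness only
robust = exact_verify(x, eps=0.1)         # proves robustness
# Soundness check: a sampled counterexample can never be a verified label.
assert not np.any(non_robust & robust)
```

The division of labor mirrors the abstract: sampling is cheap and settles the easy non-robust labels, while the exact stage is reserved for whatever the optimistic queries could not classify.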


Notes

  1. https://github.com/julianmour/MuLLoC_git.


Author information

Correspondence to Julian Mour or Dana Drachsler-Cohen.


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Mour, J., Drachsler-Cohen, D. (2025). Robustness Verification of Multi-label Neural Network Classifiers. In: Giacobazzi, R., Gorla, A. (eds) Static Analysis. SAS 2024. Lecture Notes in Computer Science, vol 14995. Springer, Cham. https://doi.org/10.1007/978-3-031-74776-2_13

  • DOI: https://doi.org/10.1007/978-3-031-74776-2_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-74775-5

  • Online ISBN: 978-3-031-74776-2

  • eBook Packages: Computer Science, Computer Science (R0)
