Verification on out-of-distribution detectors under natural perturbations

Published in: Machine Learning

Abstract

Out-of-distribution (OOD) detectors play a vital role in distinguishing OOD data from in-distribution data. However, the vulnerability of OOD detectors to natural perturbations, such as rotation and lighting variations, can lead to catastrophic accidents in safety-critical applications. Existing attack techniques provide no robustness guarantees for OOD detectors, and neural network (NN) verification methods cannot be applied to them because such methods are limited to standard NN architectures, which OOD detectors do not follow. To address this issue, we propose a verification framework called Vood that offers robustness guarantees for OOD detectors under natural perturbations. Our approach first proves the Lipschitz continuity of most OOD detection functions under natural transformations. We then estimate the Lipschitz constant using Extreme Value Theory, incorporating a dynamically estimated safety factor. Vood transforms the verification problem into an optimization problem, which is then solved efficiently using space-filling Lipschitz optimization techniques. Moreover, Vood is a black-box verifier and can therefore handle natural perturbations on a wide range of OOD detectors. Empirically, we demonstrate that Vood outperforms baseline methods in both accuracy and efficiency. Our work represents a pioneering effort in establishing robustness verification for OOD detectors with provable guarantees.
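To make the sampling-based Lipschitz estimation described above concrete, the following is a minimal sketch, not the authors' implementation: it draws random pairs of perturbation parameters, measures the slopes |f(x) − f(y)| / ‖x − y‖ of a scalar detection score, and inflates the largest observed slope by a safety factor. The function name `estimate_lipschitz` and the use of a plain maximum (rather than the paper's EVT tail fit of the slope distribution) are simplifying assumptions for illustration.

```python
import numpy as np

def estimate_lipschitz(f, low, high, n_samples=2000, safety=1.5, rng=None):
    """Crude sampling-based estimate of the Lipschitz constant of a scalar
    function f over the box [low, high] of perturbation parameters.

    Draws random parameter pairs, computes the slope |f(x)-f(y)| / ||x-y||
    for each pair, and returns the largest observed slope multiplied by a
    safety factor (a stand-in for an Extreme Value Theory tail fit).
    """
    rng = np.random.default_rng(rng)
    low, high = np.asarray(low, float), np.asarray(high, float)
    x = rng.uniform(low, high, size=(n_samples, low.size))
    y = rng.uniform(low, high, size=(n_samples, low.size))
    dist = np.linalg.norm(x - y, axis=1)
    mask = dist > 1e-12                      # skip near-identical pairs
    fx = np.apply_along_axis(f, 1, x[mask])
    fy = np.apply_along_axis(f, 1, y[mask])
    slopes = np.abs(fx - fy) / dist[mask]
    return safety * slopes.max()

# Example: f(theta) = 3*theta is exactly 3-Lipschitz, so with
# safety=1.2 the estimate should come out very close to 3.6.
L = estimate_lipschitz(lambda t: 3.0 * t[0], [0.0], [1.0], safety=1.2)
```

In practice `f` would be the OOD detection score as a function of the perturbation parameter (e.g. rotation angle), queried in a black-box fashion; the resulting constant is what makes the subsequent Lipschitz optimization over the parameter box sound.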



Data Availability

No datasets were generated or analysed during the current study.


Author information

Contributions

C.Z. wrote the main manuscript. Z.C. and P.X. contributed to the experiments and writing. G.M. and W.R. contributed to the idea and writing. All authors reviewed the manuscript.

Corresponding author

Correspondence to Wenjie Ruan.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Editors: Kee-Eung Kim, Shou-De Lin.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zhang, C., Chen, Z., Xu, P. et al. Verification on out-of-distribution detectors under natural perturbations. Mach Learn 114, 77 (2025). https://doi.org/10.1007/s10994-024-06666-0
