
Visual Out-of-Distribution Detection in Open-Set Noisy Environments

Published in: International Journal of Computer Vision

Abstract

The presence of noisy examples in the training set inevitably hampers the performance of out-of-distribution (OOD) detection. In this paper, we investigate a previously overlooked problem, OOD detection under asymmetric open-set noise, which is frequently encountered and significantly reduces the identifiability of OOD examples. We analyze the generating process of asymmetric open-set noise and observe the influential role of a confounding variable that entangles many open-set noisy examples with a subset of in-distribution (ID) examples, referred to as hard-ID examples, owing to spuriously related characteristics. To address the confounding variable, we propose a novel method called Adversarial Confounder REmoving (ACRE) that utilizes progressive optimization with adversarial learning to curate three collections of potential examples (easy-ID, hard-ID, and open-set noisy) while simultaneously developing invariant representations and reducing spurious-related representations. Specifically, by obtaining easy-ID examples with minimal confounding effect, we learn invariant representations from ID examples that help identify hard-ID and open-set noisy examples based on their similarity to the easy-ID set. Through triplet adversarial learning, we jointly minimize and maximize distribution discrepancies across the three collections, enabling the dual elimination of the confounding variable. We also leverage potential open-set noisy examples to optimize a K+1-class classifier, further removing the confounding variable and inducing a tailored K+1-Guided scoring function. Theoretical analysis establishes the feasibility of ACRE, and extensive experiments demonstrate its effectiveness and generalization. Code is available at https://github.com/Anonymous-re-ssl/ACRE0.
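To make the last step of the abstract concrete, the sketch below shows one plausible form of a K+1-guided score: treat the (K+1)-th class of the classifier as an explicit open-set class and score an input by the softmax mass assigned to it. This is an illustrative guess rather than the paper's exact definition; `model`, `K`, and the threshold `tau` are assumed names.

```python
import torch
import torch.nn.functional as F

def k_plus_1_guided_score(logits: torch.Tensor) -> torch.Tensor:
    """One plausible K+1-guided OOD score (illustrative, not the paper's exact form).

    logits: tensor of shape [batch, K+1], where the last column is the open-set
    class learned from the curated open-set noisy collection.
    Higher score = more likely OOD.
    """
    probs = F.softmax(logits, dim=1)
    return probs[:, -1]  # probability mass on the (K+1)-th, open-set class

# Hypothetical usage: `model` is a trained (K+1)-way classifier and `tau` a
# threshold tuned on validation data.
# scores = k_plus_1_guided_score(model(x))
# is_ood = scores > tau
```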




Data Availability

Not applicable.

Code Availability

Not applicable.


Funding

This work is supported by the National Natural Science Foundation of China (62176139, 62176141), the Major Basic Research Project of the Natural Science Foundation of Shandong Province (ZR2021ZD15), the Shandong Provincial Natural Science Foundation for Distinguished Young Scholars (ZR2021JQ26), and the Taishan Scholar Project of Shandong Province (tsqn202103088).

Author information


Contributions

Conceptualization: H-RD, H-ZY; Methodology: H-RD, H-ZY; Theoretical analysis: H-RD; Writing - original draft preparation: H-RD, H-ZY; Writing - review and editing: H-RD, H-ZY, N-XS, Y-YL, C-XJ; Funding acquisition: Y-YL.

Corresponding author

Correspondence to Zhongyi Han.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethics Approval

Not applicable.

Consent to Participate

Not applicable.

Consent for Publication

Not applicable.

Additional information

Communicated by Zhun Zhong.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: The Proof of Theorem 1

Proof

First, we fix the feature extractor G and minimize the distribution discrimination loss \(\mathcal {L}_D\):

$$\begin{aligned} \min _{D} \mathcal {L}_D =\;&\mathbb {E}_{P_E(x)}[ -\log D_0(G(x))] + \mathbb {E}_{P_H(x)}[ -\log D_1(G(x))] \\&+ \mathbb {E}_{P_O(x)}[ -\log D_2(G(x))] \\ =\;&-\int _x P_E(x) \log D_0(G(x))\, dx - \int _x P_H(x) \log D_{1}(G(x))\, dx \\&- \int _x P_O(x) \log D_{2}(G(x))\, dx \\ =\;&-\int _z P_E(z) \log D_0(z)\, dz - \int _z P_H(z) \log D_{1}(z)\, dz \\&- \int _z P_O(z) \log D_{2}(z)\, dz \\ =\;&\int _z \big( -P_E(z) \log D_0(z) - P_H(z) \log D_{1}(z) - P_O(z) \log D_{2}(z)\big)\, dz \end{aligned}$$
(A1)
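Read concretely, Eq. (A1) is an ordinary three-way cross-entropy: the discriminator must assign easy-ID features to output 0, hard-ID features to output 1, and open-set noisy features to output 2. A minimal PyTorch-style sketch, assuming `G` is the feature extractor and `D` a discriminator head ending in a softmax (both module names are ours, not the authors' code):

```python
import torch

def discrimination_loss(G, D, x_easy, x_hard, x_open):
    """Eq. (A1): L_D = E_{P_E}[-log D_0] + E_{P_H}[-log D_1] + E_{P_O}[-log D_2].

    D(G(x)) is assumed to return probabilities over the three collections
    (column 0: easy-ID, column 1: hard-ID, column 2: open-set noisy).
    """
    p_easy = D(G(x_easy))  # shape [n_e, 3]
    p_hard = D(G(x_hard))  # shape [n_h, 3]
    p_open = D(G(x_open))  # shape [n_o, 3]
    return (-torch.log(p_easy[:, 0]).mean()
            - torch.log(p_hard[:, 1]).mean()
            - torch.log(p_open[:, 2]).mean())
```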

Here \(z = G(x)\), and \(P_E(z)\), \(P_H(z)\), \(P_O(z)\) denote the feature densities induced by G. Since the discriminator outputs a probability vector over the three collections, \(D_0(z) + D_1(z) + D_2(z) = 1\) for all z. We can therefore minimize the integrand pointwise and transform the problem into a constrained optimization:

$$\begin{aligned}&\min _{D} -P_E(z) \log D_0(z)- P_H(z) \log D_{1}(z)\nonumber \\&\quad - P_O(z) \log D_{2}(z) \nonumber \\&s.t. \quad D_0(z) + D_1(z) + D_2(z) = 1 \end{aligned}$$
(A2)

To solve this constrained problem, we apply the method of Lagrange multipliers:

$$\begin{aligned} \min _{D} \tilde{\mathcal {L}}_D&:= -P_E(z) \log D_0(z) -P_H(z) \log D_{1}(z)\nonumber \\&\quad - P_O(z) \log D_{2}(z) \nonumber \\&\quad + v(D_0(z) + D_1(z) + D_2(z) - 1)\, \end{aligned}$$
(A3)

where v denotes the Lagrange multiplier.

We set the derivatives of \(\tilde{\mathcal {L}}_D\) with respect to each \(D_i(z)\) and v to zero:

$$\begin{aligned}&\frac{\partial \tilde{\mathcal {L}}_D}{\partial D_0(z)} =\frac{-P_E(z)}{D_0(z)}+v=0 \quad \Leftrightarrow \quad D_0(z) =\frac{P_E(z)}{v} \nonumber \\&\frac{\partial \tilde{\mathcal {L}}_D}{\partial D_{1}(z)} =\frac{- P_H(z)}{D_{1}(z)}+v=0 \quad \Leftrightarrow \quad D_{1}(z)=\frac{P_H(z)}{v} \nonumber \\&\frac{\partial \tilde{\mathcal {L}}_D}{\partial D_{2}(z)} =\frac{- P_O(z)}{D_{2}(z)}+v=0 \quad \Leftrightarrow \quad D_{2}(z)=\frac{P_O(z)}{v} \nonumber \\&\frac{\partial \tilde{\mathcal {L}}_D}{\partial v}=D_0(z) +D_{1}(z)+D_{2}(z)-1=0 \nonumber \\&\quad \Leftrightarrow \quad D_0(z)+D_{1}(z)+D_{2}(z)=1 \end{aligned}$$
(A4)

Combining the above stationarity conditions, we obtain

$$\begin{aligned} D_0(z)+D_{1}(z)+D_{2}(z) = \frac{P_E(z)}{v} + \frac{P_H(z)}{v} + \frac{P_O(z)}{v} = 1\,, \end{aligned}$$
(A5)

where \(P_{avg}(z) := \frac{1}{3}\left( P_E(z) + P_H(z) + P_O(z)\right) \) denotes the average density, and hence

$$\begin{aligned} v = P_E(z) + P_H(z) + P_O(z) = 3P_{avg}(z)\,. \end{aligned}$$
(A6)

Thus, we obtain the optimal discriminator \(D^{*}\) as

$$\begin{aligned} D^{*}(z)&= [D^{*}_0(z), D^{*}_1(z), D^{*}_2(z)] \nonumber \\&= \left[ \frac{P_E(z)}{3P_{avg}(z)}, \frac{P_H(z)}{3P_{avg}(z)}, \frac{P_O(z)}{3P_{avg}(z)}\right] \,. \end{aligned}$$
(A7)
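The closed form in Eq. (A7) is easy to verify numerically: for each z, the constrained minimizer of the pointwise objective in Eq. (A2) is the normalized vector \([P_E, P_H, P_O]/(3P_{avg})\), a standard consequence of Gibbs' inequality. The quick NumPy sanity check below uses arbitrary test densities `p`, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random(3)  # pointwise densities [P_E(z), P_H(z), P_O(z)]

def loss(d):
    # pointwise objective of Eq. (A2): -sum_i P_i(z) * log D_i(z)
    return -(p * np.log(d)).sum()

d_star = p / p.sum()  # Eq. (A7): P_i / (3 P_avg), since 3 P_avg = P_E + P_H + P_O

for _ in range(10_000):
    d = rng.random(3)
    d /= d.sum()  # enforce the simplex constraint D_0 + D_1 + D_2 = 1
    assert loss(d_star) <= loss(d) + 1e-12  # no feasible point beats D*

print("optimal D* =", d_star)  # matches the Lagrange-multiplier solution
```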

Next, we fix D at the optimum \(D^{*}\) and optimize G by minimizing \(\mathcal {L}_{OTA}\):

$$\begin{aligned}&\min _{G} \mathcal {L}_{OTA} = \mathbb {E}_{P_E(x)}[ -\log D_1^{*}(G(x))] + \mathbb {E}_{P_H(x)}[ -\log D_0^{*}(G(x))] \\&\qquad + \mathbb {E}_{P_O(x)} [ -\log D_2^{*}(G(x))] \\&\quad =\int _z\big( -P_E(z) \log D_1^*(z) - P_H(z) \log D_{0}^*(z) - P_O(z) \log D_{2}^*(z)\big)\, dz \\&\quad =\int _z\Big( -P_E(z) \log \frac{P_H(z)}{3P_{avg}(z)} - P_H(z) \log \frac{P_E(z)}{3P_{avg}(z)} - P_O(z) \log \frac{P_O(z)}{3P_{avg}(z)}\Big)\, dz \\&\quad =\int _z\Big( \big(P_H(z)+P_O(z)-3P_{avg}(z)\big) \log \frac{P_H(z)}{3P_{avg}(z)} \\&\qquad + \big(P_E(z)+P_O(z)-3P_{avg}(z)\big) \log \frac{P_E(z)}{3P_{avg}(z)} - P_O(z) \log \frac{P_O(z)}{3P_{avg}(z)}\Big)\, dz \\&\quad =\int _z\Big( P_H(z)\log \frac{P_H(z)}{3P_{avg}(z)} + P_O(z)\log \frac{P_H(z)}{3P_{avg}(z)} - 3P_{avg}(z)\log \frac{P_H(z)}{3P_{avg}(z)} \\&\qquad + P_E(z)\log \frac{P_E(z)}{3P_{avg}(z)} + P_O(z)\log \frac{P_E(z)}{3P_{avg}(z)} - 3P_{avg}(z)\log \frac{P_E(z)}{3P_{avg}(z)} \\&\qquad - P_O(z) \log \frac{P_O(z)}{3P_{avg}(z)}\Big)\, dz \\&\quad = KL\left( P_H \Vert 3P_{avg}\right) + KL\left( 3P_{avg} \Vert P_{H}\right) + KL\left( P_E \Vert 3P_{avg}\right) + KL\left( 3P_{avg} \Vert P_{E}\right) \\&\qquad - KL\left( P_O \Vert 3P_{avg}\right) + \int _z\Big( P_O(z)\log \frac{P_H(z)}{3P_{avg}(z)} + P_O(z)\log \frac{P_E(z)}{3P_{avg}(z)}\Big)\, dz \\&\quad = KL\left( P_H \Vert P_{avg}\right) + 3KL\left( P_{avg} \Vert P_{H}\right) + KL\left( P_E \Vert P_{avg}\right) + 3KL\left( P_{avg} \Vert P_{E}\right) \\&\qquad - KL\left( P_O \Vert P_{avg}\right) + O_{EH} + 5 \log 3\,, \end{aligned}$$
(A8)

where, for convenience, \(O_{EH}\) denotes \(\int _z\big( P_O(z)\log \frac{P_H(z)}{3P_{avg}(z)} + P_O(z)\log \frac{P_E(z)}{3P_{avg}(z)}\big)\, dz\). We then analyze \(KL\left( P_H \Vert P_{avg}\right) + 3KL\left( P_{avg} \Vert P_{H}\right) + KL\left( P_E \Vert P_{avg}\right) + 3KL\left( P_{avg} \Vert P_{E}\right) - KL\left( P_O \Vert P_{avg}\right) \) via an analogy with the analysis of forces in physics. Since the KL divergence is asymmetric, each term can be loosely viewed as a directed force. As shown in Fig. 5, we use \(F_{ea}, F_{ha}, F_{ah}, F_{ae}\) to denote \(KL\left( P_E \Vert P_{avg}\right) \), \(KL\left( P_H \Vert P_{avg}\right) \), \(KL\left( P_{avg} \Vert P_{H}\right) \), and \(KL\left( P_{avg} \Vert P_{E}\right) \), respectively. \(\mathcal {E}, \mathcal {H}, \mathcal {O}, \mathcal {A}\) denote \(P_E, P_H, P_O, P_{avg}\), located at the three vertices and the center of the triangle, respectively. \(F_{aeh}\) denotes the resultant force, whose direction indicates how \(\mathcal {A}\) moves. \(F_{ha}\) and \(F_{ea}\) keep \(\mathcal {E}\) and \(\mathcal {H}\) moving closer to \(\mathcal {A}\). By optimizing \(\mathcal {L}_{OTA}\), the terms \(KL\left( P_E \Vert P_{avg}\right) \), \(KL\left( P_H \Vert P_{avg}\right) \), \(KL\left( P_{avg} \Vert P_{H}\right) \), and \(KL\left( P_{avg} \Vert P_{E}\right) \) keep decreasing until \(P_E \approx P_H \approx P_{avg}\). We use \(F_a\) to denote \(- KL\left( P_O \Vert P_{avg}\right) \). Minimizing \(\mathcal {L}_{OTA}\) increases \(KL\left( P_O \Vert P_{avg}\right) \), so \(\mathcal {O}\) constantly moves away from \(\mathcal {A}\); \(D_{AO}\) denotes the distance between \(\mathcal {A}\) and \(\mathcal {O}\) under the optimal G. Moreover, minimizing \(\mathcal {L}_{OTA}\) decreases \(O_{EH}\), i.e., the outputs of the open-set data on \(D_0\) and \(D_1\), which further enhances the separability of the ID and OOD distributions. \(\square \)
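Since Eq. (A8) is an exact identity rather than a bound, it can be sanity-checked on discrete distributions, where every integral reduces to a finite sum. The NumPy script below uses arbitrary random distributions (test data of ours, not the paper's) and confirms the decomposition, including the \(5\log 3\) constant:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_dist(n):
    # random probability vector over n discrete values of z
    p = rng.random(n)
    return p / p.sum()

def KL(p, q):
    return float((p * np.log(p / q)).sum())

n = 8
P_E, P_H, P_O = rand_dist(n), rand_dist(n), rand_dist(n)
P_avg = (P_E + P_H + P_O) / 3.0

# Left side: L_OTA with the optimal discriminator D* substituted in (Eq. A8, step 3)
lhs = -(P_E * np.log(P_H / (3 * P_avg))).sum() \
      - (P_H * np.log(P_E / (3 * P_avg))).sum() \
      - (P_O * np.log(P_O / (3 * P_avg))).sum()

# Right side: the final KL decomposition of Eq. (A8)
O_EH = (P_O * np.log(P_H / (3 * P_avg))).sum() \
     + (P_O * np.log(P_E / (3 * P_avg))).sum()
rhs = KL(P_H, P_avg) + 3 * KL(P_avg, P_H) \
    + KL(P_E, P_avg) + 3 * KL(P_avg, P_E) \
    - KL(P_O, P_avg) + O_EH + 5 * np.log(3)

assert np.isclose(lhs, rhs), (lhs, rhs)
print(f"both sides = {lhs:.6f}")
```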

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

He, R., Han, Z., Nie, X. et al. Visual Out-of-Distribution Detection in Open-Set Noisy Environments. Int J Comput Vis 132, 5453–5470 (2024). https://doi.org/10.1007/s11263-024-02139-y
