Abstract
Identifying the sources of uncertainty in an image classifier is a crucial challenge: the decision process of such models is opaque and does not necessarily match what we might expect. Generative models can help characterize classifiers because they allow visual attributes to be controlled. Here we use a generative adversarial network to generate images that reflect how a classifier perceives them. More specifically, we take the classifier's maximum softmax probability as an uncertainty estimate and use it as an additional input to condition the generative model. This allows us to generate images that yield uncertain predictions, giving a global view of which images are harder to classify. We can also increase the uncertainty of a given image and observe the impact of an attribute, providing a more local understanding of the decision process. We perform experiments on the MNIST dataset augmented with corruptions. We believe that generative models are a helpful tool for explaining the behavior and uncertainties of image classifiers.
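As a rough illustration of the two ingredients described above, the sketch below shows how the classifier's maximum softmax probability (MSP) can serve as an uncertainty estimate and be passed to a conditional generator as an extra conditioning input. This is a minimal PyTorch example with invented class names and dimensions, not the authors' implementation.

```python
# Minimal sketch (assumptions: PyTorch, MNIST-sized images, a toy MLP generator).
# The classifier's maximum softmax probability (MSP) is used as an uncertainty
# estimate and concatenated with the latent code and class label to condition
# the generator, so low MSP values request "harder" images.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim=64, n_classes=10, img_dim=28 * 28):
        super().__init__()
        # Condition on the class label (one-hot) and a scalar MSP value.
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_classes + 1, 256),
            nn.ReLU(),
            nn.Linear(256, img_dim),
            nn.Tanh(),
        )

    def forward(self, z, y_onehot, msp):
        # msp has shape (batch, 1); lower values ask for more uncertain images.
        return self.net(torch.cat([z, y_onehot, msp], dim=1))

def max_softmax_probability(classifier, images):
    """Uncertainty estimate: 1 means confident, 1/n_classes means maximally uncertain."""
    with torch.no_grad():
        probs = F.softmax(classifier(images), dim=1)
    return probs.max(dim=1, keepdim=True).values

# Usage (illustrative): sample digits the classifier should find ambiguous
# by conditioning on a low MSP value, e.g. 0.5 for MNIST's ten classes.
# g = ConditionalGenerator()
# z = torch.randn(16, 64)
# y = F.one_hot(torch.full((16,), 3), num_classes=10).float()
# uncertain_digits = g(z, y, torch.full((16, 1), 0.5))
```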
Acknowledgments
This work has been supported by the French government under the “Investissements d’avenir” program, as part of the SystemX Technological Research Institute. This work was granted access to the HPC/AI resources of IDRIS under the allocation 2022-AD011013372 made by GENCI.