
Calibration Error Estimation Using Fuzzy Binning

Conference paper. In: Fuzzy Information Processing 2023 (NAFIPS 2023).

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 751).

Abstract

Neural network-based decisions tend to be overconfident: their raw output probabilities do not align with the true probabilities of correct decisions. Calibration of neural networks is therefore an essential step towards more reliable deep learning frameworks. Prior calibration error metrics rely primarily on crisp bin-membership measures, which exacerbate skew in model probabilities and portray an incomplete picture of calibration error. In this work, we propose a Fuzzy Calibration Error metric (FCE) that uses a fuzzy binning approach to calculate calibration error. This approach alleviates the impact of probability skew and provides a tighter estimate of calibration error. We compare our metric with Expected Calibration Error (ECE) across different data populations and class memberships. Our results show that FCE offers better calibration error estimation, especially in multi-class settings, alleviating the effects of skew in model confidence scores on calibration error estimation. We make our code and supplementary materials available at: https://github.com/bihani-g/fce.
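The abstract contrasts crisp bin membership (as used by ECE) with fuzzy binning. The paper's exact membership functions are defined in the full text; as an illustrative sketch only, the difference between hard and soft binning might look like the following, where the triangular membership functions are a hypothetical stand-in, not the authors' formulation:

```python
import numpy as np

def ece_crisp(conf, correct, n_bins=10):
    """Expected Calibration Error with crisp (hard) bin membership.

    Each sample belongs to exactly one confidence bin; per-bin
    |accuracy - avg confidence| gaps are weighted by bin population.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - conf[mask].mean())
            ece += mask.mean() * gap
    return ece

def ece_fuzzy(conf, correct, n_bins=10):
    """Calibration error with soft (fuzzy) bin memberships.

    Hypothetical triangular memberships centred on each bin midpoint;
    a sample near a bin boundary contributes to both adjacent bins
    instead of being assigned wholly to one.
    """
    centers = (np.arange(n_bins) + 0.5) / n_bins
    width = 1.0 / n_bins
    # membership of each sample in each bin, shape (n_samples, n_bins)
    mu = np.clip(1.0 - np.abs(conf[:, None] - centers[None, :]) / width, 0.0, 1.0)
    mu /= mu.sum(axis=1, keepdims=True)  # normalise memberships per sample
    fce = 0.0
    for b in range(n_bins):
        w = mu[:, b]
        if w.sum() > 0:
            acc = (w * correct).sum() / w.sum()     # membership-weighted accuracy
            avg_conf = (w * conf).sum() / w.sum()   # membership-weighted confidence
            fce += (w.sum() / len(conf)) * abs(acc - avg_conf)
    return fce
```

With soft memberships, a confidence score just inside a bin edge no longer swings the bin's statistics all-or-nothing, which is the intuition behind the skew-robustness claim above.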

G. Bihani and J. T. Rayz—Contributing equally.



Acknowledgments

This work was partially supported by the Department of Justice grant #15PJDP-21-GK-03269-MECP.

Author information

Corresponding author: Geetanjali Bihani.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Bihani, G., Rayz, J.T. (2023). Calibration Error Estimation Using Fuzzy Binning. In: Cohen, K., Ernest, N., Bede, B., Kreinovich, V. (eds) Fuzzy Information Processing 2023. NAFIPS 2023. Lecture Notes in Networks and Systems, vol 751. Springer, Cham. https://doi.org/10.1007/978-3-031-46778-3_9
