
A Framework for Building Uncertainty Wrappers for AI/ML-Based Data-Driven Components

  • Conference paper
Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops (SAFECOMP 2020)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 12235)


Abstract

More and more software-intensive systems include data-driven components, i.e., components that use models based on artificial intelligence (AI) or machine learning (ML). Since the outcomes of such models cannot be assumed to always be correct, the related uncertainties must be understood and taken into account when decisions are made using these outcomes. This applies in particular if such decisions affect the safety of the system. To date, however, hardly any AI/ML-based model provides dependable estimates of the uncertainty remaining in its outcomes. To address this limitation, we present a framework for encapsulating existing models applied in data-driven components with an uncertainty wrapper that enriches the model outcome with a situation-aware and dependable uncertainty statement. The framework builds on existing work on the concept and mathematical foundation of uncertainty wrappers. Its application is illustrated using pedestrian detection as an example, a particularly safety-critical feature in the context of autonomous driving. The Brier score and its components are used to investigate how the key aspects of the framework (scoping, clustering, calibration, and confidence limits) influence the quality of the uncertainty estimates.
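The key aspects named in the abstract (clustering situations by quality factors, calibrating an error rate per cluster, applying a conservative confidence limit, and evaluating the result with the Brier score) can be sketched in code. Note that this is an illustrative sketch, not the paper's actual implementation: the class and function names, the quality-factor keys, and the normal-approximation confidence bound are assumptions made here for the example.

```python
import math
from collections import defaultdict

def brier_score(predicted_probs, outcomes):
    """Mean squared difference between a predicted probability and the 0/1 outcome."""
    return sum((p - o) ** 2 for p, o in zip(predicted_probs, outcomes)) / len(outcomes)

class UncertaintyWrapper:
    """Wraps a data-driven component: clusters situations by quality factors,
    calibrates an empirical error rate per cluster, and reports a conservative
    (upper-confidence-bound) uncertainty for each new situation."""

    def __init__(self, cluster_fn, z=1.645):
        self.cluster_fn = cluster_fn               # maps quality factors -> cluster key
        self.z = z                                 # one-sided 95% normal quantile
        self._stats = defaultdict(lambda: [0, 0])  # cluster -> [errors, samples]

    def calibrate(self, quality_factors, correct_flags):
        """Record, per situation cluster, how often the wrapped model was wrong."""
        for factors, ok in zip(quality_factors, correct_flags):
            stats = self._stats[self.cluster_fn(factors)]
            stats[0] += 0 if ok else 1
            stats[1] += 1

    def uncertainty(self, factors):
        """Upper confidence bound on the error probability in this situation."""
        errors, n = self._stats[self.cluster_fn(factors)]
        if n == 0:
            return 1.0  # situation outside the calibration scope: maximal uncertainty
        p = errors / n
        # Normal-approximation upper bound; a simple conservative stand-in
        # for exact binomial (e.g. Clopper-Pearson) limits.
        return min(1.0, p + self.z * math.sqrt(p * (1.0 - p) / n))

# Calibrate on labeled situations: daylight detections are mostly correct here.
wrapper = UncertaintyWrapper(lambda f: f["light"])
wrapper.calibrate([{"light": "day"}] * 10, [True] * 9 + [False])
u_day = wrapper.uncertainty({"light": "day"})      # above the raw 0.1 error rate,
                                                   # due to the confidence limit
u_night = wrapper.uncertainty({"light": "night"})  # 1.0: cluster never calibrated
```

The Brier score then compares the wrapper's uncertainty estimates against observed correctness on held-out data, which is how the influence of scoping, clustering, calibration, and confidence limits on estimate quality can be investigated.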



Acknowledgments

Parts of this work have been funded by the Ministry of Science, Education, and Culture of the German State of Rhineland-Palatinate in the context of the project MInD and the Observatory for Artificial Intelligence in Work and Society (KIO) of the Denkfabrik Digitale Arbeitsgesellschaft in the project “KI Testing & Auditing”. We would especially like to thank Naveed Akram and Pascal Gerber for providing the dataset we used to illustrate the framework application, and Jan Reich and Sonnhild Namingha for the initial review of the paper.

Author information

Correspondence to Michael Kläs or Lisa Jöckel.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Kläs, M., Jöckel, L. (2020). A Framework for Building Uncertainty Wrappers for AI/ML-Based Data-Driven Components. In: Casimiro, A., Ortmeier, F., Schoitsch, E., Bitsch, F., Ferreira, P. (eds.) Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops. SAFECOMP 2020. Lecture Notes in Computer Science, vol. 12235. Springer, Cham. https://doi.org/10.1007/978-3-030-55583-2_23


  • DOI: https://doi.org/10.1007/978-3-030-55583-2_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-55582-5

  • Online ISBN: 978-3-030-55583-2

  • eBook Packages: Computer Science (R0)
