A Deductive Approach to Safety Assurance: Formalising Safety Contracts with Subjective Logic

  • Conference paper
  • In: Computer Safety, Reliability, and Security. SAFECOMP 2024 Workshops (SAFECOMP 2024)

Abstract

The increasing adoption of autonomous systems in safety-critical applications raises severe concerns regarding safety and reliability. Due to the distinctive characteristics of these systems, conventional approaches to safety assurance are not directly transferable, and novel approaches are required. One of the main challenges is the ability to deal with significant uncertainty resulting from (1) the inherent complexity of autonomous system models, (2) potential insufficiencies of data and/or rules, and (3) the open nature of the operational environment. The validity of assumptions made about these three layers greatly impacts the confidence in the guarantees provided by a safety argument. In this paper, we view the problem of safety assurance as the satisfaction of a safety contract, more specifically as a conditional deduction operation from assumptions to guarantees. We formalise this idea using Subjective Logic and derive from this formalisation an argument structure in GSN that allows for automated reasoning about the uncertainty in the guarantees, given the assumptions and any further available evidence. We illustrate the idea using a simple ML-based traffic sign classification example.
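
As a rough illustration of the contract-as-deduction view described in the abstract, the following minimal Python sketch (not taken from the paper) represents binomial subjective-logic opinions and propagates an opinion on an assumption A through conditional opinions on a guarantee G. It only marginalises projected probabilities and does not implement the full subjective-logic deduction operator defined by Jøsang [12]; the names Opinion and deduce_projected are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Opinion:
        # Binomial subjective-logic opinion: belief, disbelief, uncertainty, base rate.
        # The masses must satisfy b + d + u = 1.
        b: float
        d: float
        u: float
        a: float = 0.5

        def projected(self) -> float:
            # Projected probability P(x) = b + a * u.
            return self.b + self.a * self.u

    def deduce_projected(w_a, w_g_given_a, w_g_given_not_a):
        # Simplified propagation of the contract "assumption A => guarantee G":
        # marginalise the guarantee's projected probability over the assumption.
        p_a = w_a.projected()
        return p_a * w_g_given_a.projected() + (1.0 - p_a) * w_g_given_not_a.projected()

    # Example: strong but not certain belief in the assumption, high conditional
    # belief in the guarantee when the assumption holds, weak belief otherwise.
    w_assumption = Opinion(b=0.8, d=0.1, u=0.1)
    w_guarantee_if_a = Opinion(b=0.9, d=0.05, u=0.05)
    w_guarantee_if_not_a = Opinion(b=0.2, d=0.6, u=0.2)
    print(deduce_projected(w_assumption, w_guarantee_if_a, w_guarantee_if_not_a))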

Notes

  1. https://www.iso.org/standard/77490.html.

  2. https://www.iso.org/standard/83303.html.

  3. https://www.iso.org/standard/73567.html.

  4. https://scsc.uk/GSN.

  5. \(\mathcal {R}(X)\) denotes the reduced powerset of X, i.e., the set of all subsets of X excluding the empty set \(\emptyset \) and the full set X.

  6. The non-informative prior weight W ensures that as evidence accumulates (i.e. as r grows), the uncertainty \(u_X\) decreases accordingly. W is typically set to the cardinality of the domain (2 in our binary case), which amounts to artificially adding one “success” and one “failure” to the evidence counts r and s. Higher values of W require more evidence before the uncertainty decreases (see the sketch after these notes).

  7. A more formal treatment of the operator is provided in the literature [12].

  8. We denote with \(E(\omega )\) the expectation value of the Beta PDF of opinion \(\omega \) (see the sketch after these notes).

  9. By a slight abuse of GSN notation, we use A2 and A3 to ‘override’ A1. An alternative (more GSN-compliant) representation would be to replace A1 with A2 and formulate A3 as the negation of A2.

  10. https://benchmark.ini.rub.de/index.html.
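
To make footnotes 6 and 8 concrete, the following minimal Python sketch (not part of the paper) shows the standard subjective-logic mapping from evidence counts to a binomial opinion and the expectation value of the associated Beta PDF, following Jøsang [12]; the function names opinion_from_evidence and expectation are illustrative choices, not from the paper.

    def opinion_from_evidence(r: float, s: float, W: float = 2.0, a: float = 0.5):
        # Map positive/negative evidence counts (r, s) to a binomial opinion
        # (b, d, u, a). W is the non-informative prior weight from footnote 6.
        total = r + s + W
        b = r / total   # belief mass
        d = s / total   # disbelief mass
        u = W / total   # uncertainty mass; shrinks as r + s grows
        return b, d, u, a

    def expectation(b: float, d: float, u: float, a: float) -> float:
        # Expectation value E(omega) of the Beta PDF associated with the opinion,
        # equal to the projected probability b + a * u (footnote 8).
        return b + a * u

    # Example: 8 positive and 2 negative observations with W = 2 and base rate 1/2.
    b, d, u, a = opinion_from_evidence(r=8, s=2)
    print(u)                        # 2 / 12, i.e. about 0.17
    print(expectation(b, d, u, a))  # (8 + 0.5 * 2) / 12 = 0.75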

References

  1. Ayoub, A., Chang, J., Sokolsky, O., Lee, I.: Assessing the overall sufficiency of safety arguments. In: 21st Safety-Critical Systems Symposium (SSS’13), Bristol, United Kingdom, pp. 127–144 (2013)

  2. Burton, S., Gauerhof, L., Sethy, B.B., Habli, I., Hawkins, R.: Confidence arguments for evidence of performance in machine learning for highly automated driving functions. In: Romanovsky, A., Troubitsyna, E., Gashi, I., Schoitsch, E., Bitsch, F. (eds.) SAFECOMP 2019. LNCS, vol. 11699, pp. 365–377. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-26250-1_30

  3. Burton, S., Herd, B.: Addressing uncertainty in the safety assurance of machine-learning. Front. Comput. Sci. 5 (2023)

  4. Denney, E., Pai, G., Habli, I.: Towards measurement of confidence in safety cases. In: 2011 International Symposium on Empirical Software Engineering and Measurement, pp. 380–383 (2011)

  5. Duan, L., Rayadurgam, S., Heimdahl, M., Sokolsky, O., Lee, I.: Representation of confidence in assurance cases using the beta distribution. In: 2016 IEEE 17th International Symposium on High Assurance Systems Engineering (HASE), pp. 86–93. IEEE (2016)

  6. Goodenough, J.B., Weinstock, C.B., Klein, A.Z.: Eliminative induction: a basis for arguing system confidence. In: 2013 35th International Conference on Software Engineering (ICSE), pp. 1161–1164 (2013)

  7. Graydon, P.J., Holloway, C.M.: An investigation of proposed techniques for quantifying confidence in assurance arguments. Saf. Sci. 92, 53–65 (2017)

  8. Guo, B.: Knowledge representation and uncertainty management: applying Bayesian belief networks to a safety assessment expert system. In: Proceedings of the 2003 International Conference on Natural Language Processing and Knowledge Engineering, pp. 114–119 (2003)

  9. Hawkins, R., Kelly, T., Knight, J., Graydon, P.: A new approach to creating clear safety arguments. In: Dale, C., Anderson, T. (eds.) Advances in Systems Safety, pp. 3–23. Springer, London (2011). https://doi.org/10.1007/978-0-85729-133-2_1

  10. Herd, B., Burton, S.: Can you trust your ML metrics? Using subjective logic to determine the true contribution of ML metrics for safety. In: Proceedings of the 39th ACM/SIGAPP Symposium on Applied Computing (SAC 2024) (2024)

  11. Hobbs, C., Lloyd, M.: The application of Bayesian belief networks to assurance case preparation. In: Dale, C., Anderson, T. (eds.) Achieving Systems Safety, pp. 159–176. Springer, London (2012). https://doi.org/10.1007/978-1-4471-2494-8_12

  12. Jøsang, A.: Subjective Logic. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-42337-1

  13. Varadarajan, S.: CLARISSA: foundations, tools and automation for assurance cases. In: 2023 IEEE/AIAA 42nd Digital Avionics Systems Conference (DASC) (2023)

  14. Wang, R., Guiochet, J., Motet, G., Schön, W.: Safety case confidence propagation based on Dempster-Shafer theory. Int. J. Approximate Reasoning 107, 46–64 (2019)

  15. Yuan, C., Wu, J., Liu, C., Yang, H.: A subjective logic-based approach for assessing confidence in assurance case. Int. J. Performability Eng. 13(6), 807 (2017)

  16. Zadeh, L.A.: Book review: a mathematical theory of evidence. AI Mag. 5(3), 81–83 (1984)

Acknowledgments

This work was performed as part of the ML4Safety project supported by the Fraunhofer Internal Programs under Grant No. PREPARE 40-02702.

Author information

Corresponding author

Correspondence to Benjamin Herd.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Herd, B., Zacchi, JV., Burton, S. (2024). A Deductive Approach to Safety Assurance: Formalising Safety Contracts with Subjective Logic. In: Ceccarelli, A., Trapp, M., Bondavalli, A., Schoitsch, E., Gallina, B., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2024 Workshops. SAFECOMP 2024. Lecture Notes in Computer Science, vol 14989. Springer, Cham. https://doi.org/10.1007/978-3-031-68738-9_16

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-68738-9_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-68737-2

  • Online ISBN: 978-3-031-68738-9

  • eBook Packages: Computer Science, Computer Science (R0)
