
Mechanized Support for Assurance Case Argumentation

  • Conference paper
New Frontiers in Artificial Intelligence (JSAI-isAI 2013)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 8417)

Abstract

An assurance case provides an argument that certain claims (usually concerning safety or other critical properties) are justified, based on given evidence concerning the context, design, and implementation of a system. An assurance case serves two purposes: reasoning and communication. For the first, the argument in the case should approach the standards of mathematical proof (though it may be grounded on premises—i.e., evidence—that are equivocal); for the second it must assist human stakeholders to grasp the essence of the case, to explore its details, and to challenge it. Because of the scale and complexity of assurance cases, both purposes benefit from mechanized assistance. We propose simple ways in which an assurance case, formalized in a mechanized verification system to support the first purpose, can be adapted to serve the second.
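
To make the first (reasoning) purpose concrete, here is a minimal, hypothetical sketch, written in Lean rather than in a verification system actually discussed in the paper (such as PVS [29]), of how a fragment of an assurance case might be rendered for mechanized checking: items of evidence and the individual argument steps are stated as premises, and the top-level claim becomes a theorem. All of the names below (TestEvidence, SubclaimOK, SystemSafe, and so on) are illustrative assumptions, not taken from the paper.

    -- Hypothetical propositions standing for items of evidence and for claims.
    axiom TestEvidence   : Prop  -- e.g., "the unit-test campaign passed"
    axiom ReviewEvidence : Prop  -- e.g., "the design review was completed"
    axiom SubclaimOK     : Prop  -- an intermediate claim about the design
    axiom SystemSafe     : Prop  -- the top-level claim

    -- Argument steps stated as premises; in a real case each would itself need
    -- justification, and each is a potential target for a defeater.
    axiom step1 : TestEvidence → ReviewEvidence → SubclaimOK
    axiom step2 : SubclaimOK → SystemSafe

    -- The assurance argument as a machine-checked derivation of the top claim
    -- from the evidence.
    theorem assuranceCase (hT : TestEvidence) (hR : ReviewEvidence) : SystemSafe :=
      step2 (step1 hT hR)

The point of the sketch is only that, once an argument is in this form, a proof assistant can confirm that the claim follows from the stated evidence and argument steps; challenging the case then amounts to challenging one of the premises.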

Notes

  1. There are persuasive claims that human consciousness evolved to enable communication and cooperative behavior, and that reasoning evolved to evaluate the epistemic claims of others [20]. Thus, argument is a fundamental human capability, constructive reasoning is an epiphenomenon, and confirmation bias is intrinsic.

  2. I prefer not to cite specific works from the vast repertoire of articles and books on these topics; an Internet search will provide many good references.

  3. Some treatments of defeasible reasoning distinguish “undercutting,” “undermining,” and “rebutting” defeaters, but the distinctions are not sharp and are not used here.

  4. The prose description in [27] suggests that the system under consideration has a primary and a secondary protection system; a standard concern in these kinds of system is that both protection systems fail on the same demand [30].

References

  1. Bishop, P., Bloomfield, R.: A methodology for safety case development. In: Safety-Critical Systems Symposium, Birmingham, UK (1998)

  2. Kelly, T.: Arguing safety – a systematic approach to safety case management. Ph.D. thesis, Department of Computer Science, University of York, UK (1998)

  3. Greenwell, W.S., Knight, J.C., Holloway, C.M., Pease, J.J.: A taxonomy of fallacies in system safety arguments. In: Proceedings of the 24th International System Safety Conference, Albuquerque, NM (2006)

  4. Klein, G., Elphinstone, K., Heiser, G., Andronick, J., Cock, D., Derrin, P., Elkaduwe, D., Engelhardt, K., Kolanski, R., Norrish, M., et al.: seL4: formal verification of an OS kernel. In: Proceedings of the ACM SIGOPS 22nd Symposium on Operating Systems Principles, pp. 207–220. ACM (2009)

  5. Miner, P., Geser, A., Pike, L., Maddalon, J.: A unified fault-tolerance protocol. In: Lakhnech, Y., Yovine, S. (eds.) FORMATS 2004 and FTRTFT 2004. LNCS, vol. 3253, pp. 167–182. Springer, Heidelberg (2004)

  6. Narkawicz, A., Muñoz, C.: Formal verification of conflict detection algorithms for arbitrary trajectories. Reliable Comput. 17, 209–237 (2012)

  7. Rushby, J.: Formalism in safety cases. In: Dale, C., Anderson, T. (eds.) Making Systems Safer: Proceedings of the Eighteenth Safety-Critical Systems Symposium, Bristol, UK, pp. 3–17. Springer (2010)

  8. Basir, N., Denney, E., Fischer, B.: Deriving safety cases from automatically constructed proofs. In: 4th IET International Conference on System Safety, London, UK. The Institution of Engineering and Technology (2009)

  9. Takeyama, M., Kido, H., Kinoshita, Y.: Using a proof assistant to construct assurance cases: correctness by construction (fast abstract). In: The International Conference on Dependable Systems and Networks, Boston, MA. IEEE Computer Society (2012)

  10. Hawkins, R., Kelly, T., Knight, J., Graydon, P.: A new approach to creating clear safety arguments. In: Dale, C., Anderson, T. (eds.) Advances in System Safety: Proceedings of the Nineteenth Safety-Critical Systems Symposium, Southampton, UK. Springer (2011)

  11. Rushby, J.: Logic and epistemology in safety cases. In: Bitsch, F., Guiochet, J., Kaâniche, M. (eds.) SAFECOMP 2013. LNCS, vol. 8153, pp. 1–7. Springer, Heidelberg (2013)

  12. Spriggs, J.: GSN – The Goal Structuring Notation. Springer, London (2012)

  13. Denney, E., Pai, G., Pohl, J.: AdvoCATE: an assurance case automation toolset. In: Proceedings of the Workshop on Next Generation of System Assurance Approaches for Safety Critical Systems (SASSUR), Magdeburg, Germany (2012)

  14. ASCE. http://www.adelard.com/web/hnav/ASCE/index.html

  15. OMG Structured Assurance Case Metamodel (SACM). http://www.omg.org/spec/SACM/

  16. OMG Machine-Checkable Assurance Case Language (MACL). http://www.omg.org/cgi-bin/doc?sysa/2012-9-4/

  17. Cruanes, S., Hamon, G., Owre, S., Shankar, N.: Tool integration with the evidential tool bus. In: Giacobazzi, R., Berdine, J., Mastroeni, I. (eds.) VMCAI 2013. LNCS, vol. 7737, pp. 275–294. Springer, Heidelberg (2013)

  18. Miller, S.P., Whalen, M.W., Cofer, D.D.: Software model checking takes off. Commun. ACM 53, 58–64 (2010)

  19. Rushby, J.: Harnessing disruptive innovation in formal verification. In: Hung, D.V., Pandya, P. (eds.) Fourth International Conference on Software Engineering and Formal Methods (SEFM), Pune, India, pp. 21–28. IEEE Computer Society (2006)

  20. Mercier, H., Sperber, D.: Why do humans reason? Arguments for an argumentative theory. Behav. Brain Sci. 34, 57–111 (2011); see also the commentary on page 74 by Baumeister, R.F., Masicampo, E.J., Nathan DeWall, C.: Arguing, Reasoning, and the Interpersonal (Cultural) Functions of Human Consciousness

  21. Chesñevar, C.I., Maguitman, A.G., Loui, R.P.: Logical models of argument. ACM Comput. Surv. 32, 337–383 (2000)

  22. McCarthy, J.: Circumscription – a form of non-monotonic reasoning. Artif. Intell. 13, 27–39 (1980)

  23. Pollock, J.L.: Defeasible reasoning. Cogn. Sci. 11, 481–518 (1987)

  24. Reiter, R.: A theory of diagnosis from first principles. Artif. Intell. 32, 57–95 (1987)

  25. Crow, J., Rushby, J.: Model-based reconfiguration: toward an integration with diagnosis. In: Proceedings of AAAI-91, Anaheim, CA, vol. 2, pp. 836–841 (1991)

  26. Yices. http://yices.csl.sri.com/

  27. Holloway, C.M.: Safety case notations: alternatives for the non-graphically inclined? In: 3rd IET International Conference on System Safety, Birmingham, UK. The Institution of Engineering and Technology (2008)

  28. Owre, S., Rushby, J., Shankar, N., von Henke, F.: Formal verification for fault-tolerant architectures: prolegomena to the design of PVS. IEEE Trans. Softw. Eng. 21, 107–125 (1995)

  29. PVS. http://pvs.csl.sri.com/

  30. Littlewood, B., Rushby, J.: Reasoning about the reliability of diverse two-channel systems in which one channel is “possibly perfect”. IEEE Trans. Softw. Eng. 38, 1178–1194 (2012)

  31. Kinoshita, Y., Takeyama, M.: Assurance case as a proof in a theory: towards formulation of rebuttals. In: Dale, C., Anderson, T. (eds.) Assuring the Safety of Systems: Proceedings of the 21st Safety-Critical Systems Symposium, SCSC, pp. 205–230 (2013)

  32. Pollock, J.L.: Defeasible reasoning with variable degrees of justification. Artif. Intell. 133, 233–282 (2001)

  33. Staples, M.: Critical rationalism and engineering: ontology. Synthese (to appear, 2014)

  34. Caminada, M.W.A.: A formal account of Socratic-style argumentation. J. Appl. Logic 6, 109–132 (2008)

  35. Dung, P.M.: On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artif. Intell. 77, 321–357 (1995)

  36. Steele, P., Knight, J.: Analysis of critical system certification. In: 15th IEEE International Symposium on High Assurance Systems Engineering, Miami, FL (2014)

  37. Goodenough, J.B., Weinstock, C.B., Klein, A.Z., Ernst, N.: Analyzing a multi-legged argument using eliminative argumentation. In: Layered Assurance Workshop, New Orleans, LA (2013)

  38. Weinstock, C.B., Goodenough, J.B., Klein, A.Z.: Measuring assurance case confidence using Baconian probabilities. In: 1st International Workshop on Assurance Cases for Software-Intensive Systems (ASSURE), San Francisco, CA (2013)

Acknowledgements

I am grateful for helpful comments from the reviewers, which caused me to rethink some of the presentation, and for stimulating discussions with Michael Holloway and John Knight.

This work was supported by NASA under contracts NNA13AB02C with Drexel University and NNL13AA00B with the Boeing Company, and by SRI International. The content is solely the responsibility of the author and does not necessarily represent the official views of NASA.

Author information

Correspondence to John Rushby.

Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Rushby, J. (2014). Mechanized Support for Assurance Case Argumentation. In: Nakano, Y., Satoh, K., Bekki, D. (eds) New Frontiers in Artificial Intelligence. JSAI-isAI 2013. Lecture Notes in Computer Science(), vol 8417. Springer, Cham. https://doi.org/10.1007/978-3-319-10061-6_20

  • DOI: https://doi.org/10.1007/978-3-319-10061-6_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-10060-9

  • Online ISBN: 978-3-319-10061-6

