Model-Centered Assurance for Autonomous Systems

Conference paper in: Computer Safety, Reliability, and Security (SAFECOMP 2020)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 12234)

Abstract

The functions of an autonomous system can generally be partitioned into those concerned with perception and those concerned with action. Perception builds and maintains an internal model of the world (i.e., the system’s environment) that is used to plan and execute actions to accomplish a goal established by human supervisors.

Accordingly, assurance decomposes into two parts: a) ensuring that the model is an accurate representation of the world as it changes through time and b) ensuring that the actions are safe (and effective), given the model. Both perception and action may employ AI, including machine learning (ML), and these present challenges to assurance. However, it is usually feasible to guard the actions with traditionally engineered and assured monitors, and thereby ensure safety, given the model. Thus, the model becomes the central focus for assurance.
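
To make the guarded-action idea concrete, here is a minimal Python sketch of a conventionally engineered monitor interposed between a (possibly ML-generated) controller and the actuators. All names, types, and numeric bounds are illustrative assumptions for exposition; the paper does not prescribe an interface.

```python
from dataclasses import dataclass

# Illustrative world-model fragment; the paper does not prescribe types.
@dataclass
class WorldModel:
    ego_speed: float        # current speed (m/s)
    gap_to_obstacle: float  # distance to nearest obstacle ahead (m)

def is_safe(accel: float, model: WorldModel,
            max_decel: float = 6.0, margin: float = 5.0) -> bool:
    """Conventionally engineered check: after applying the proposed
    acceleration for one control step (1 s, simplified), can the
    vehicle still stop within the gap reported by the model?"""
    v = max(0.0, model.ego_speed + accel)
    stopping_distance = v * v / (2.0 * max_decel)
    return stopping_distance + margin <= model.gap_to_obstacle

def guard(proposed_accel: float, model: WorldModel) -> float:
    """Pass the (possibly ML-generated) action through if it is safe
    given the model; otherwise substitute a conservative fallback
    (here, full braking)."""
    return proposed_accel if is_safe(proposed_accel, model) else -6.0
```

Note that the monitor's correctness argument depends only on the accuracy of the model, which is exactly why the model becomes the central focus for assurance.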

We propose an architecture and methods to ensure the accuracy of models derived from sensors whose interpretation uses AI and ML. Rather than derive the model from sensors bottom-up, we reverse the process and use the model to predict sensor interpretation. Small prediction errors indicate that the world is evolving as expected, and the model is updated accordingly. Large prediction errors indicate surprise, which may be due to errors in sensing or interpretation, or to unexpected changes in the world (e.g., a pedestrian steps into the road). The former initiate error masking or recovery, while the latter require revision of the model. Higher-level AI functions assist in diagnosis and execution of these tasks.
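
A minimal Python sketch of this prediction-first loop follows; the function names (predict, interpret, error, update, diagnose) and the threshold are hypothetical placeholders for components the paper discusses only abstractly.

```python
SURPRISE_THRESHOLD = 3.0  # illustrative bound on acceptable prediction error

def perception_step(model, sensor_frame,
                    predict, interpret, error, update, diagnose):
    # Top-down: the current model predicts what the sensors should report.
    expected = predict(model)
    # Bottom-up: the sensor frame is interpreted (possibly by ML components).
    observed = interpret(sensor_frame)
    e = error(expected, observed)
    if e < SURPRISE_THRESHOLD:
        # Small error: the world is evolving as expected;
        # fold the observation into the model.
        return update(model, observed)
    # Large error (surprise): either a sensing/interpretation fault to be
    # masked or recovered, or an unexpected change in the world requiring
    # revision of the model; higher-level AI functions decide which.
    return diagnose(model, expected, observed)
```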

Although this two-level architecture, in which the lower level performs “predictive processing” and the upper level performs more reflective tasks, both focused on maintaining a world model, is derived from engineering considerations, it also matches a widely accepted theory of human cognition.

Notes

  1. The terms “guard,” “shield,” and “safety bag” are also used.

Acknowledgments

We thank the reviewers for their constructive comments, and Bev Littlewood of City, University of London, and Wilfried Steiner of TTTech Vienna for their challenges and discussion. The work was funded by DARPA contract FA8750-19-C-0089. We would also like to acknowledge support from the US Army Research Laboratory Cooperative Research Agreement W911NF-17-2-0196, and National Science Foundation (NSF) grant 1740079. The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.

Author information

Correspondence to John Rushby.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Jha, S., Rushby, J., Shankar, N. (2020). Model-Centered Assurance for Autonomous Systems. In: Casimiro, A., Ortmeier, F., Bitsch, F., Ferreira, P. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2020. Lecture Notes in Computer Science, vol. 12234. Springer, Cham. https://doi.org/10.1007/978-3-030-54549-9_15

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-54549-9_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-54548-2

  • Online ISBN: 978-3-030-54549-9

  • eBook Packages: Computer Science, Computer Science (R0)
