
Assumption Generation for Learning-Enabled Autonomous Systems

  • Conference paper

Runtime Verification (RV 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14245)


Abstract

Providing safety guarantees for autonomous systems is difficult as these systems operate in complex environments that require the use of learning-enabled components, such as deep neural networks (DNNs) for visual perception. DNNs are hard to analyze due to their size (they can have thousands or millions of parameters), lack of formal specifications (DNNs are typically learnt from labeled data, in the absence of any formal requirements), and sensitivity to small changes in the environment. We present an assume-guarantee style compositional approach for the formal verification of system-level safety properties of such autonomous systems. Our insight is that we can analyze the system in the absence of the DNN perception components by automatically synthesizing assumptions on the DNN behaviour that guarantee the satisfaction of the required safety properties. The synthesized assumptions are the weakest in the sense that they characterize the output sequences of all the possible DNNs that, plugged into the autonomous system, guarantee the required safety properties. The assumptions can be leveraged as run-time monitors over a deployed DNN to guarantee the safety of the overall system; they can also be mined to extract local specifications for use during training and testing of DNNs. We illustrate our approach on a case study taken from the autonomous airplanes domain that uses a complex DNN for perception.


Notes

  1. We use “commands” instead of “actions” since we already use actions to refer to the transition labels of LTSs.

  2. We provide the code of the monitor in the appendix.


Author information

Corresponding author

Correspondence to Corina S. Păsăreanu.


Appendix: \(\textsc{PRISM}\) Encoding for TaxiNet with Safety Monitor

We show the \(\textsc{PRISM}\) code for \(M_2\) and the safety monitor in Fig. 6. We use the output of step 4 of procedure BuildAssumption (Algorithm 3.1) as a safety monitor, i.e., the assumption LTS has both err and sink states, with a transition to the err state interpreted as the system aborting. The encoding closely follows the transitions of the assumption computed for \(M_1\) over alphabet \(\varSigma = Est\).

Fig. 6. TaxiNet \(M_2\) and safety monitor in \(\textsc{PRISM}\).

Variable \(\mathtt{pc}\) encodes a program counter. \(M_2\) is encoded as a mapping from the actual system state (represented by variables \(\texttt{cte}\) and \(\texttt{he}\)) to the estimated states (represented by variables \(\mathtt{cte\_est}\) and \(\mathtt{he\_est}\)). The transition probabilities are estimated empirically by profiling the DNN; for simplicity, \(\mathtt{cte\_est}\) and \(\mathtt{he\_est}\) are updated in sequence. The monitor maintains its state in variable \(\mathtt{Q}\) (initially \(\mathtt{0}\)) and transitions to its next state only after both \(\mathtt{cte\_est}\) and \(\mathtt{he\_est}\) have been updated. The abort state (\(\mathtt{Q=-1}\)) traps behaviours that are not allowed by the assumption; it has no outgoing transitions. A sketch of this structure is shown below.
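Since the figure itself is not reproduced here, the following is a minimal \(\textsc{PRISM}\) sketch of the structure just described. It is an illustration under stated assumptions, not the paper's actual encoding: the variable names (pc, cte, he, cte_est, he_est, Q) follow the text, but the binarized value ranges, the transition probabilities, and the monitor's transition relation are hypothetical placeholders; the real monitor transitions are those of the assumption synthesized by BuildAssumption, and the dynamics of \(M_1\) are omitted.

    dtmc

    module M2
      pc      : [0..2] init 0;  // program counter for the update sequence
      cte     : [0..1] init 0;  // actual cross-track error (dynamics omitted)
      he      : [0..1] init 0;  // actual heading error (dynamics omitted)
      cte_est : [0..1] init 0;  // DNN estimate of cte
      he_est  : [0..1] init 0;  // DNN estimate of he

      // step 1: update cte_est (placeholder probabilities standing in
      // for the empirically profiled DNN error rates)
      [] pc=0 -> 0.9 : (cte_est'=cte) & (pc'=1)
               + 0.1 : (cte_est'=1-cte) & (pc'=1);
      // step 2: update he_est
      [] pc=1 -> 0.85 : (he_est'=he) & (pc'=2)
               + 0.15 : (he_est'=1-he) & (pc'=2);
      // step 3: let the monitor observe both estimates, then restart
      [step] pc=2 -> (pc'=0);
    endmodule

    module Monitor
      Q : [-1..1] init 0;  // -1 encodes the abort (err) state

      // illustrative assumption transitions over the estimated state
      [step] Q=0 & cte_est=0 -> (Q'=1);
      [step] Q=0 & cte_est=1 -> (Q'=-1);  // estimate not allowed: abort
      [step] Q=1 & he_est=0 -> (Q'=0);
      [step] Q=1 & he_est=1 -> (Q'=-1);
      // no command is enabled when Q=-1, so the abort state has no
      // outgoing transitions (it shows up as a deadlock in PRISM,
      // which the -fixdl option turns into a self-loop)
    endmodule

Synchronizing both modules on the step action, which fires only after both estimates have been updated, mirrors the requirement that the monitor transitions once per perception cycle, and leaving Q=-1 with no enabled commands makes the abort state absorbing, as described above.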


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Păsăreanu, C.S., Mangal, R., Gopinath, D., Yu, H. (2023). Assumption Generation for Learning-Enabled Autonomous Systems. In: Katsaros, P., Nenzi, L. (eds) Runtime Verification. RV 2023. Lecture Notes in Computer Science, vol 14245. Springer, Cham. https://doi.org/10.1007/978-3-031-44267-4_1


  • DOI: https://doi.org/10.1007/978-3-031-44267-4_1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44266-7

  • Online ISBN: 978-3-031-44267-4

  • eBook Packages: Computer Science (R0)
