
Safer Than Perception: Assuring Confidence in Safety-Critical Decisions of Automated Vehicles

Applicable Formal Methods for Safe Industrial Products

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14165)


Abstract

We address one of the key challenges in assuring the safety of autonomous cyber-physical systems that rely on learning-enabled classification within their environmental perception: How can we achieve confidence in the perception chain, especially for percepts that safeguard critical manoeuvres? We present a methodology which makes it possible to prove mathematically that the risk of misevaluating a safety-critical guard condition referring to environmental artefacts can be bounded to a considerably lower frequency than the risk of individual misclassifications, and can thereby be adjusted to a value below a given level of societally accepted risk.

M. Fränzle—Supported by the State of Lower Saxony within the Zukunftslabor Mobilität as well as by Deutsche Forschungsgemeinschaft under grant no. DFG FR 2715/5-1.

M. Swaminathan—Contribution while employed at the University of Oldenburg.



Acknowledgements

The research reported herein has been supported by the State of Lower Saxony within the Zukunftslabor Mobilität as well as by Deutsche Forschungsgemeinschaft under grant no. DFG FR 2715/5-1 “Konfliktresolution und kausale Inferenz mittels integrierter sozio-technischer Modellbildung”. It furthermore benefited from technical discussions with Jan Peleska, and we dedicate it to him on the occasion of his 65th birthday.


A Proofs

Proof of

Lemma 1. We use induction over n to show that the inequality

$$\begin{aligned}\begin{gathered} \min _{i=1,\dots ,n}\left\{ \frac{a_i}{b_i}\right\} \le \frac{\sum _{i=1}^n a_i}{\sum _{i=1}^n b_i} \le \max _{i=1,\dots ,n}\left\{ \frac{a_i}{b_i}\right\} \end{gathered}\end{aligned}$$

holds for any positive integer n and fractions \(\frac{a_1}{b_1}, \dots , \frac{a_n}{b_n}\) with real numerators \(a_1,\dots ,a_n\) and positive real denominators \(b_1,\dots ,b_n\). The base case \(n=1\) is trivial, and the case \(n=2\) follows immediately from the mediant inequality \(\frac{a}{b} \le \frac{c}{d} \implies \frac{a}{b} \le \frac{a + c}{b + d} \le \frac{c}{d}\) for real numbers a, c and positive real numbers b, d. Assume that the induction hypothesis holds for \(n-1\). For any fraction \(\frac{a_n}{b_n}\) with real \(a_n\) and positive real \(b_n\), at least one of the inequalities (i) \(\frac{\sum _{i=1}^{n-1} a_i}{\sum _{i=1}^{n-1} b_i} \le \frac{a_n}{b_n}\) or (ii) \(\frac{a_n}{b_n} \le \frac{\sum _{i=1}^{n-1} a_i}{\sum _{i=1}^{n-1} b_i}\) holds. From case (i) it follows

$$\begin{aligned} \min _{i=1,\dots ,n}\left\{ \frac{a_i}{b_i}\right\} = \min _{i=1,\dots ,n-1}\left\{ \frac{a_i}{b_i}\right\} \le \frac{\sum _{i=1}^{n-1}a_i}{\sum _{i=1}^{n-1}b_i} \overset{(*)}{\le }\frac{\sum _{i=1}^{n}a_i}{\sum _{i=1}^{n}b_i} \overset{(*)}{\le }\frac{a_n}{b_n} \le \max _{i=1,\dots ,n}\left\{ \frac{a_i}{b_i}\right\} \end{aligned}$$

and from case (ii) it follows

$$\begin{aligned} \min _{i=1,\dots ,n}\left\{ \frac{a_i}{b_i}\right\} \le \frac{a_n}{b_n} \overset{(*)}{\le }\frac{\sum _{i=1}^{n}a_i}{\sum _{i=1}^{n}b_i} \overset{(*)}{\le }\frac{\sum _{i=1}^{n-1}a_i}{\sum _{i=1}^{n-1}b_i} \le \max _{i=1,\dots ,n-1}\left\{ \frac{a_i}{b_i}\right\} = \max _{i=1,\dots ,n}\left\{ \frac{a_i}{b_i}\right\} \end{aligned}$$

where \((*)\) denotes the application of the mediant inequality.    \(\square \)
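For concreteness, here is a small numerical instance (the numbers are illustrative and not taken from the chapter): for the fractions \(\frac{1}{2}\), \(\frac{2}{3}\), \(\frac{3}{4}\) the inequality reads

$$\begin{aligned}\begin{gathered} \min \left\{ \tfrac{1}{2},\tfrac{2}{3},\tfrac{3}{4}\right\} = \tfrac{1}{2} \le \frac{1+2+3}{2+3+4} = \tfrac{2}{3} \le \tfrac{3}{4} = \max \left\{ \tfrac{1}{2},\tfrac{2}{3},\tfrac{3}{4}\right\} . \end{gathered}\end{aligned}$$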

Proof of

Lemma 2. We have to show that the inequalities

$$\begin{aligned}\begin{gathered} \min _{i=1,\dots ,n}\left\{ P(A \mid B_i)\right\} \le P\left( A \,\Big |\, \textstyle \bigcup _{i=1}^n B_i\right) \le \max _{i=1,\dots ,n}\left\{ P(A \mid B_i)\right\} \end{gathered}\end{aligned}$$

hold for all disjoint events \(B_1,\dots ,B_n\) of positive probability. Since the \(B_i\) are pairwise disjoint, \(P(A \mid \bigcup _{i=1}^n B_i) = \frac{\sum _{i=1}^n P(A,B_i)}{\sum _{i=1}^n P(B_i)}\); using this identity, an application of Lemma 1 yields

$$\begin{aligned}\begin{gathered} \min _{i=1,\dots ,n}\left\{ \frac{P(A,B_i)}{P(B_i)} \right\} \le \frac{ \sum _{i=1}^n P(A,B_i)}{ \sum _{i=1}^n P(B_i)} \le \max _{i=1,\dots ,n}\left\{ \frac{P(A,B_i)}{P(B_i)} \right\} , \end{gathered}\end{aligned}$$

which finally rewrites to the asserted inequalities.    \(\square \)
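As an illustration (the numbers are ours, not taken from the chapter), consider two disjoint events \(B_1, B_2\) with \(P(B_1)=0.6\), \(P(B_2)=0.4\), \(P(A,B_1)=0.3\), and \(P(A,B_2)=0.1\). Then

$$\begin{aligned}\begin{gathered} P(A\mid B_1)=0.5,\qquad P(A\mid B_2)=0.25,\qquad P(A\mid B_1\cup B_2)=\frac{0.3+0.1}{0.6+0.4}=0.4, \end{gathered}\end{aligned}$$

and indeed \(0.25 \le 0.4 \le 0.5\).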

Proof of

Lemma 3. We have to show that the identity

$$\begin{aligned} P( A_i^{\lambda _i} \mid A_1^{\pi _1},\dots ,A_n^{\pi _n}, A_j^{\lambda _j}) = P( A_i^{\lambda _i} \mid A_1^{\pi _1},\dots ,A_n^{\pi _n}) \end{aligned}$$
(3)

is equivalent to the identity

$$\begin{aligned} P( A_i^{\lambda _i}, A_j^{\lambda _j} \mid A_1^{\pi _1},\dots ,A_n^{\pi _n}) = P( A_i^{\lambda _i} \mid A_1^{\pi _1},\dots ,A_n^{\pi _n}) P( A_j^{\lambda _j} \mid A_1^{\pi _1},\dots ,A_n^{\pi _n}) \end{aligned}$$
(4)

for all positive integers n, atoms \(A_1,\dots ,A_n\), and indices \(i\ne j\) with \(i\le n\), \(j\le n\). Note that we implicitly assume the well-definedness of Eq. (3) and Eq. (4): both identities stipulate \(P(A_1^{\pi _1},\dots ,A_n^{\pi _n})>0\), and Eq. (3) additionally stipulates \(P(A_j^{\lambda _j} \mid A_1^{\pi _1},\dots ,A_n^{\pi _n}) > 0\). The transformation from Eq. (3) to Eq. (4) is obtained by multiplying both sides of Eq. (3) with \(P(A_j^{\lambda _j} \mid A_1^{\pi _1},\dots ,A_n^{\pi _n})\). The reverse transformation from Eq. (4) to Eq. (3) by division is valid as long as the stronger stipulation \(P(A_j^{\lambda _j} \mid A_1^{\pi _1},\dots ,A_n^{\pi _n}) > 0\) imposed by Eq. (3) holds. Finally, note that for \(P(A_j^{\lambda _j} \mid A_1^{\pi _1},\dots ,A_n^{\pi _n}) = 0\) the identity Eq. (4) carries no further information, as it degenerates to the trivial identity \(0=0\) in this case.    \(\square \)
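To make the multiplication step fully explicit (our rephrasing): whenever \(P(A_j^{\lambda _j} \mid A_1^{\pi _1},\dots ,A_n^{\pi _n}) > 0\), the definition of conditional probability gives

$$\begin{aligned} P( A_i^{\lambda _i} \mid A_1^{\pi _1},\dots ,A_n^{\pi _n}, A_j^{\lambda _j}) = \frac{P( A_i^{\lambda _i}, A_j^{\lambda _j} \mid A_1^{\pi _1},\dots ,A_n^{\pi _n})}{P( A_j^{\lambda _j} \mid A_1^{\pi _1},\dots ,A_n^{\pi _n})}, \end{aligned}$$

so multiplying Eq. (3) by the denominator yields Eq. (4), and dividing Eq. (4) by it recovers Eq. (3).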

Proof of

Lemma 4. We have to show that the identities

$$\begin{aligned}\begin{gathered} \underline{P}( A_i^{\lambda _i}, A_j^{\lambda _j} \mid A_i^{\pi _i}, A_j^{\pi _j} )= \underline{P}( A_i^{\lambda _i}\mid A_i^{\pi _i} )\underline{P}( A_j^{\lambda _j} \mid A_j^{\pi _j} ),\\ \overline{P}( A_i^{\lambda _i}, A_j^{\lambda _j} \mid A_i^{\pi _i}, A_j^{\pi _j} )= \overline{P}( A_i^{\lambda _i}\mid A_i^{\pi _i} )\overline{P}( A_j^{\lambda _j} \mid A_j^{\pi _j} ) \end{gathered}\end{aligned}$$

hold for the limit probabilities \(\underline{P}\) and \(\overline{P}\) for all \(i\ne j\). To see this, consider the following chain of rewritings.

$$\begin{aligned} \underline{P}( A_i^{\lambda _i} \mid A_i^{\pi _i})\underline{P}(A_j^{\lambda _j} \mid A_j^{\pi _j})&= \underline{P}( A_i^{\lambda _i} \mid A_i^{\pi _i}, A_j^{\pi _j} ) \underline{P}( A_j^{\lambda _j} \mid A_i^{\pi _i}, A_j^{\pi _j} )\\&= \underline{P}( A_i^{\lambda _i},A_j^{\lambda _j} \mid A_i^{\pi _i}, A_j^{\pi _j} ). \end{aligned}$$

The corresponding identity for \(\overline{P}\) follows analogously.    \(\square \)

Proof of

Thm. 1. We have to show that

$$\begin{aligned}\begin{gathered} \underline{P}( \phi ^{\lambda _\phi } \mid \phi ^{\pi _\phi } ) \le P( \phi ^{\lambda _\phi } \mid \phi ^{\pi _\phi } ) \le \overline{P}( \phi ^{\lambda _\phi } \mid \phi ^{\pi _\phi } ) \end{gathered}\end{aligned}$$

holds for any formula \(\phi \) with label \(\phi ^{\lambda _\phi }\) and truth value \(\phi ^{\pi _\phi }\). In Eq. (1) we already showed

$$\begin{aligned}\begin{gathered} \min _{m \models \phi ^{\pi _\phi }}\sum _{l \models \phi ^{\lambda _\phi }}P(l \mid m) \le P(\phi ^{\lambda _\phi } \mid \phi ^{\pi _\phi }) \le \max _{m \models \phi ^{\pi _\phi }}\sum _{l \models \phi ^{\lambda _\phi }}P(l \mid m). \end{gathered}\end{aligned}$$

As \(\underline{P}(l \mid m) \le P(l \mid m) \le \overline{P}(l \mid m)\) holds termwise and all involved probabilities are nonnegative, taking sums and then minima resp. maxima preserves the ordering, yielding

$$\begin{aligned} \min _{m \models \phi ^{\pi _\phi }}\sum _{l \models \phi ^{\lambda _\phi }}\underline{P}(l \mid m) &\le \min _{m \models \phi ^{\pi _\phi }}\sum _{l \models \phi ^{\lambda _\phi }}P(l \mid m),\\ \max _{m \models \phi ^{\pi _\phi }}\sum _{l \models \phi ^{\lambda _\phi }}P(l \mid m) &\le \max _{m \models \phi ^{\pi _\phi }}\sum _{l \models \phi ^{\lambda _\phi }}\overline{P}(l \mid m). \end{aligned}$$

Combining these with the previous inequalities yields the asserted bounds.    \(\square \)

Proof of

Thm. 2. Let \(\phi \) be an arbitrary formula with given label \(\phi ^{\lambda _\phi }\) and truth value \(\phi ^{\pi _\phi }\). Further let \(m=(A_1^{\pi _1},\dots ,A_n^{\pi _n})\) be a truth assignment and \(l=(A_1^{\lambda _1},\dots ,A_n^{\lambda _n})\) a label assignment for all atoms \(A_i\). In order to establish the theorem we show

$$\begin{aligned} \min _{m \models \phi ^{\pi _\phi }}\!\sum _{l \models \phi ^{\lambda _\phi }}\prod _{i=1}^n \underline{P}({A_i}^{\lambda _i} \mid {A_i}^{\pi _i})&\le P({\phi }^{\lambda _\phi } \mid {\phi }^{\pi _\phi }) \le \max _{m \models \phi ^{\pi _\phi }}\!\sum _{l \models \phi ^{\lambda _\phi }}\prod _{i=1}^n \overline{P}({A_i}^{\lambda _i} \mid {A_i}^{\pi _i}). \end{aligned}$$

The bounds

$$\begin{aligned}\begin{gathered} \min _{m \models \phi ^{\pi _\phi }}\sum _{l \models \phi ^{\lambda _\phi }}P(l \mid m) \le P({\phi }^{\lambda _\phi } \mid {\phi }^{\pi _\phi }) \le \max _{m \models \phi ^{\pi _\phi }}\sum _{l \models \phi ^{\lambda _\phi }}P(l \mid m) \end{gathered}\end{aligned}$$

can be obtained without any further assumption and have already been established in Eq. (1). In Eq. (2) we already argued that the bounds

$$\begin{aligned}\begin{gathered} \min _{m \models \phi ^{\pi _\phi }}\sum _{l \models \phi ^{\lambda _\phi }}\prod _{i=1}^n P(A_i^{\lambda _i} \mid m) \le P({\phi }^{\lambda _\phi } \mid {\phi }^{\pi _\phi }) \le \max _{m \models \phi ^{\pi _\phi }}\sum _{l \models \phi ^{\lambda _\phi }}\prod _{i=1}^n P(A_i^{\lambda _i} \mid m) \end{gathered}\end{aligned}$$

can be derived under the Independent Labelling Assumption 1. Finally, Assumption 2 allows us to bound each term of the form \(P(A_i^{\lambda _i} \mid m) = P(A_i^{\lambda _i} \mid A_1^{\pi _1},\dots ,A_n^{\pi _n})\) by its respective lower and upper bound \(\underline{P}({A_i}^{\lambda _i} \mid {A_i}^{\pi _i})\) and \(\overline{P}({A_i}^{\lambda _i} \mid {A_i}^{\pi _i})\). As all involved terms are nonnegative, we obtain

$$\begin{aligned} \min _{m \models \phi ^{\pi _\phi }}\!\sum _{l \models \phi ^{\lambda _\phi }}\prod _{i=1}^n \underline{P}({A_i}^{\lambda _i} \mid {A_i}^{\pi _i})&\le P({\phi }^{\lambda _\phi } \mid {\phi }^{\pi _\phi }) \le \max _{m \models \phi ^{\pi _\phi }}\!\sum _{l \models \phi ^{\lambda _\phi }}\prod _{i=1}^n \overline{P}({A_i}^{\lambda _i} \mid {A_i}^{\pi _i}). \end{aligned}$$

   \(\square \)
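To illustrate how the bound is evaluated in practice (the example formula and the chosen truth assignment are ours), take \(\phi = A_1 \vee A_2\) with \(\pi _\phi = \top \) and \(\lambda _\phi = +\). The truth assignments \(m \models \phi ^\top \) are \((A_1^\top ,A_2^\top )\), \((A_1^\top ,A_2^\bot )\), and \((A_1^\bot ,A_2^\top )\); for, e.g., \(m = (A_1^\top ,A_2^\bot )\) the label assignments \(l \models \phi ^+\) are \((A_1^+,A_2^+)\), \((A_1^+,A_2^-)\), and \((A_1^-,A_2^+)\), so the inner sum becomes

$$\begin{aligned} \sum _{l \models \phi ^{+}}\prod _{i=1}^2 \underline{P}({A_i}^{\lambda _i} \mid {A_i}^{\pi _i}) = \underline{P}(A_1^+ \mid A_1^\top )\underline{P}(A_2^+ \mid A_2^\bot ) + \underline{P}(A_1^+ \mid A_1^\top )\underline{P}(A_2^- \mid A_2^\bot ) + \underline{P}(A_1^- \mid A_1^\top )\underline{P}(A_2^+ \mid A_2^\bot ). \end{aligned}$$

The lower bound of Thm. 2 is then the minimum of this expression over the three truth assignments listed above.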

Proof of

Thm. 4. We show the identities for the lower limiting rates \(\underline{\textrm{TPR}}_{\phi \wedge \psi }=\underline{\textrm{TPR}}_\phi \underline{\textrm{TPR}}_\psi \) and \(\underline{\textrm{FNR}}_{\phi \wedge \psi }=\underline{\textrm{FNR}}_\phi + \underline{\textrm{FNR}}_\psi - \underline{\textrm{FNR}}_\phi \underline{\textrm{FNR}}_\psi \). The identities for the upper limiting rates follow analogously. The remaining estimates for \(\underline{\textrm{TNR}}_{\phi \vee \psi }\), \(\underline{\textrm{FPR}}_{\phi \vee \psi }\), \(\overline{\textrm{TNR}}_{\phi \vee \psi }\), and \(\overline{\textrm{FPR}}_{\phi \vee \psi }\) can be obtained from the identity \(\phi \vee \psi \equiv \lnot (\lnot \phi \wedge \lnot \psi )\) and Thm. 3.

We decompose the conditional probabilities into proper assignments using Lemma 3 and Thm. 1. Note that \(\phi \wedge \psi \) is labelled negative iff at least one of \(\phi \) and \(\psi \) is labelled negative, so the derivation of \(\underline{\textrm{FNR}}_{\phi \wedge \psi }\) additionally uses inclusion-exclusion.

$$\begin{aligned} \underline{\textrm{TPR}}_{\phi \wedge \psi }&= \underline{P}((\phi \wedge \psi )^{+} \mid (\phi \wedge \psi )^\top ) = \underline{P}(\phi ^+, \psi ^+\mid \phi ^\top , \psi ^\top )\\&= \underline{P}(\phi ^+\mid \phi ^\top )\underline{P}(\psi ^+\mid \psi ^\top ) = \underline{\textrm{TPR}}_{\phi }\underline{\textrm{TPR}}_{\psi },\\ \underline{\textrm{FNR}}_{\phi \wedge \psi }&= \underline{P}((\phi \wedge \psi )^-\mid (\phi \wedge \psi )^\top )\\&= \underline{P}(\phi ^-\mid \phi ^\top , \psi ^\top ) + \underline{P}(\psi ^-\mid \phi ^\top , \psi ^\top ) - \underline{P}(\phi ^-\mid \phi ^\top , \psi ^\top ) \underline{P}(\psi ^-\mid \phi ^\top , \psi ^\top )\\&= \underline{P}(\phi ^-\mid \phi ^\top ) + \underline{P}(\psi ^-\mid \psi ^\top ) - \underline{P}(\phi ^-\mid \phi ^\top )\underline{P}(\psi ^-\mid \psi ^\top )\\&= \underline{\textrm{FNR}}_\phi + \underline{\textrm{FNR}}_\psi - \underline{\textrm{FNR}}_\phi \underline{\textrm{FNR}}_\psi . \end{aligned}$$

   \(\square \)
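For a numerical instance of these identities (the rates are illustrative values, not taken from the chapter), suppose \(\underline{\textrm{TPR}}_\phi = 0.99\), \(\underline{\textrm{TPR}}_\psi = 0.98\), \(\underline{\textrm{FNR}}_\phi = 0.01\), and \(\underline{\textrm{FNR}}_\psi = 0.02\). Then

$$\begin{aligned} \underline{\textrm{TPR}}_{\phi \wedge \psi } = 0.99 \cdot 0.98 = 0.9702, \qquad \underline{\textrm{FNR}}_{\phi \wedge \psi } = 0.01 + 0.02 - 0.01 \cdot 0.02 = 0.0298. \end{aligned}$$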

Proof of

Theorem 5. We show the inequalities for the lower limiting rates only. The corresponding inequalities for the upper limiting rates follow analogously. Note that for the conditional probabilities \(\underline{\textrm{TPR}}_{\phi \vee \psi }=\underline{P}((\phi \vee \psi )^+\mid (\phi \vee \psi )^\top )\) and \(\underline{\textrm{FNR}}_{\phi \vee \psi }=\underline{P}((\phi \vee \psi )^-\mid (\phi \vee \psi )^\top )\) the conditioning event \((\phi \vee \psi )^\top \) can be decomposed into disjoint models yielding

$$\begin{aligned}\begin{gathered} (\phi \vee \psi )^\top = (\phi ^\top ,\psi ^\top ) \cup (\phi ^\top ,\psi ^\bot ) \cup (\phi ^\bot ,\psi ^\top ), \end{gathered}\end{aligned}$$

where the three events on the right-hand side are pairwise disjoint.

Lemma 2 allows us to infer a lower estimate of the rates:

$$\begin{aligned} \min \left\{ \begin{array}{l} \underline{P}((\phi \vee \psi )^+\mid \phi ^\top ,\psi ^\top ),\\ \underline{P}((\phi \vee \psi )^+\mid \phi ^\top ,\psi ^\bot ),\\ \underline{P}((\phi \vee \psi )^+\mid \phi ^\bot ,\psi ^\top ) \end{array} \right\}&\le \underline{\textrm{TPR}}_{\phi \vee \psi }, \\ \min \left\{ \begin{array}{l} \underline{P}((\phi \vee \psi )^-\mid \phi ^\top ,\psi ^\top ),\\ \underline{P}((\phi \vee \psi )^-\mid \phi ^\top ,\psi ^\bot ),\\ \underline{P}((\phi \vee \psi )^-\mid \phi ^\bot ,\psi ^\top ) \end{array} \right\}&\le \underline{\textrm{FNR}}_{\phi \vee \psi }. \end{aligned}$$

We decompose the labelling of \((\phi \vee \psi )^+\) and \((\phi \vee \psi )^-\) of each term in the minimum and maximum expression individually, where conjunctive events can further be decomposed into products using Lemma 3. E.g., the first term of the estimate for \(\underline{\textrm{TPR}}_{\phi \vee \psi }\) is rewritten as follows:

$$\begin{aligned}&\ \ \ \underline{P}((\phi \vee \psi )^+\mid \phi ^\top ,\psi ^\top )\\&= \underline{P}(\phi ^+\mid \phi ^\top ,\psi ^\top ) + \underline{P}(\psi ^+\mid \phi ^\top ,\psi ^\top ) - \underline{P}(\phi ^+, \psi ^+\mid \phi ^\top ,\psi ^\top )\\&= \underline{P}(\phi ^+\mid \phi ^\top ) + \underline{P}(\psi ^+\mid \psi ^\top ) - \underline{P}(\phi ^+\mid \phi ^\top )\underline{P}(\psi ^+\mid \psi ^\top )\\&= \underline{\textrm{TPR}}_\phi + \underline{\textrm{TPR}}_\psi - \underline{\textrm{TPR}}_\phi \underline{\textrm{TPR}}_\psi , \end{aligned}$$

and the second term as follows:

$$\begin{aligned}&\ \ \ \underline{P}((\phi \vee \psi )^+\mid \phi ^\top ,\psi ^\bot )\\&= \underline{P}(\phi ^+\mid \phi ^\top ,\psi ^\bot ) + \underline{P}(\psi ^+\mid \phi ^\top ,\psi ^\bot ) - \underline{P}(\phi ^+, \psi ^+\mid \phi ^\top ,\psi ^\bot )\\&= \underline{P}(\phi ^+\mid \phi ^\top ) + \underline{P}(\psi ^+\mid \psi ^\bot ) - \underline{P}(\phi ^+\mid \phi ^\top )\underline{P}(\psi ^+\mid \psi ^\bot )\\&= \underline{\textrm{TPR}}_\phi + \underline{\textrm{FPR}}_\psi - \underline{\textrm{TPR}}_\phi \underline{\textrm{FPR}}_\psi . \end{aligned}$$

After rewriting all terms accordingly, the lower bounds for \(\underline{\textrm{TPR}}_{\phi \vee \psi }\) and \(\underline{\textrm{FNR}}_{\phi \vee \psi }\) are established. The upper bounds for \(\overline{\textrm{TPR}}_{\phi \vee \psi }\) and \(\overline{\textrm{FNR}}_{\phi \vee \psi }\) follow analogously, and the remaining bounds for \(\underline{\textrm{TNR}}_{\phi \wedge \psi }\), \(\overline{\textrm{TNR}}_{\phi \wedge \psi }\), \(\underline{\textrm{FPR}}_{\phi \wedge \psi }\), and \(\overline{\textrm{FPR}}_{\phi \wedge \psi }\) are obtained using De Morgan’s law.    \(\square \)
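Instantiating the rewritten terms with illustrative rates (the numbers are ours, not from the chapter), say \(\underline{\textrm{TPR}}_\phi = 0.99\), \(\underline{\textrm{TPR}}_\psi = 0.98\), and \(\underline{\textrm{FPR}}_\phi = \underline{\textrm{FPR}}_\psi = 0.05\), gives

$$\begin{aligned} \underline{\textrm{TPR}}_\phi + \underline{\textrm{TPR}}_\psi - \underline{\textrm{TPR}}_\phi \underline{\textrm{TPR}}_\psi&= 0.9998,\\ \underline{\textrm{TPR}}_\phi + \underline{\textrm{FPR}}_\psi - \underline{\textrm{TPR}}_\phi \underline{\textrm{FPR}}_\psi&= 0.9905,\\ \underline{\textrm{FPR}}_\phi + \underline{\textrm{TPR}}_\psi - \underline{\textrm{FPR}}_\phi \underline{\textrm{TPR}}_\psi&= 0.981, \end{aligned}$$

so under these illustrative numbers the lower estimate for \(\underline{\textrm{TPR}}_{\phi \vee \psi }\) evaluates to \(\min \{0.9998, 0.9905, 0.981\} = 0.981\).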

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Fränzle, M., Hagemann, W., Damm, W., Rakow, A., Swaminathan, M. (2023). Safer Than Perception: Assuring Confidence in Safety-Critical Decisions of Automated Vehicles. In: Haxthausen, A.E., Huang, Wl., Roggenbach, M. (eds) Applicable Formal Methods for Safe Industrial Products. Lecture Notes in Computer Science, vol 14165. Springer, Cham. https://doi.org/10.1007/978-3-031-40132-9_12


  • DOI: https://doi.org/10.1007/978-3-031-40132-9_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-40131-2

  • Online ISBN: 978-3-031-40132-9