A new algorithm for low-deterministic security

Abstract

We present a new algorithm for checking probabilistic noninterference in concurrent programs. The algorithm, named RLSOD, is based on the Low-Security Observational Determinism criterion. It utilizes program dependence graphs for concurrent programs and is flow-sensitive, context-sensitive, object-sensitive, and optionally time-sensitive. Due to a new definition of low-equivalence for infinite traces, the algorithm avoids the restrictions and soundness leaks of previous approaches. A soundness proof is provided. Flow sensitivity turns out to be the key to precision and avoids prohibiting useful nondeterminism. The algorithm has been implemented for full Java bytecode with unlimited threads. Precision and scalability have been experimentally validated.
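
To give a flavor of the kind of program such an analysis must reject, consider the following minimal Java sketch (our own illustration, not one of the paper's figures). Two threads race on a low-observable variable; which write happens last, and hence the probability of the low-observable output, depends on a high value:

    // Minimal sketch of a probabilistic leak (illustrative only).
    // The low-observable output depends on the interleaving of the two threads,
    // and the interleaving probability is skewed by the high value h: the longer
    // the high-dependent loop runs, the more likely the final value is "A".
    public class ProbabilisticLeak {
        static volatile String low = "";

        public static void main(String[] args) throws InterruptedException {
            final int h = Integer.parseInt(args[0]);    // assumed high (secret) input

            Thread t1 = new Thread(() -> {
                for (int i = 0; i < h; i++) { /* high-dependent delay */ }
                low = "A";                              // low-observable write
            });
            Thread t2 = new Thread(() -> {
                low = "B";                              // low-observable write, order conflict with t1
            });

            t1.start(); t2.start();
            t1.join();  t2.join();
            System.out.println(low);                    // low-observable output
        }
    }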

Notes

  1. joana.ipd.kit.edu provides downloads, a webstart application, and further information.

  2. The implementation can handle arbitrary security lattices.

  3. Reads and writes on unopened streams are assumed to throw an exception, and PDGs can handle exceptions precisely [14, 16].

  4. If read(stream,x) is classified low, but stream is classified high, the resulting explicit illegal flow is trivially discovered in the PDG [16] (see the illustrative sketch after these notes). Similarly if print(stream,x) is classified high, but stream is classified low.

  5. The bottom left program was proposed by an anonymous reviewer of a previous version of this work.

  6. If the scheduler always executes thread 1 completely before thread 2, Definition 1 is not violated.

  7. That is, \(b\) is a branching point with immediate post-dominator \(\textit{PD}(b)\) and \(b<o<\textit{PD}(b)\) [52].

  8. Note that exceptions and handlers generate additional control dependencies in PDGs and traces [16]. Thus, if o1 may throw an exception, the dependency situation is more complex than in a “regular” if(b){o1;o2}. Still, the subsequent argument for traces holds.

  9. We thank one reviewer for observing this.

  10. PDGs and the backward slice \(BS\) are explained in detail in Sect. 4; here we rely on some preliminary understanding.

  11. HRB use so-called system dependence graphs, which in this article are subsumed under the PDG notion.

  12. “Interference dependencies” have nothing to do with “noninterference” in the IFC sense; the naming is for historical reasons.

  13. Where the light grey node is pruned by time-sensitive analysis; see below.

  14. HRB slicing has two phases, hence the name I2P.

  15. “Time sensitivity” has nothing to do with “timing leaks” in the IFC sense; the naming is for historical reasons.

  16. The latter notation was already used in Sect. 5.2.

  17. An X10 extension for JOANA is in preparation.
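
The following sketch illustrates note 4. It is our own minimal example; the security labels are indicated by comments only, and no particular annotation API is implied. The data dependence from the high stream to the low result is an explicit flow and is therefore found directly in the PDG:

    import java.io.IOException;
    import java.io.InputStream;

    // Minimal sketch for note 4 (our own example; labels are comments, not a real annotation API).
    class StreamLabelExample {

        // read(stream, x): the returned value is classified low.
        static int read(InputStream stream) throws IOException {
            return stream.read();             // data dependence: the result depends on 'stream'
        }

        static void copySecret(InputStream secrets /* classified high */) throws IOException {
            int x = read(secrets);            // explicit high-to-low flow, visible as a data
                                              // dependence in the PDG [16]
            System.out.println(x);            // low-observable output of high-derived data
        }
    }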

References

  1. Abadi, M., Banerjee, A., Heintze, N., Riecke, J.G.: A core calculus of dependency. In: POPL ’99: Proceedings of the 26th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 147–160. ACM, New York (1999)

  2. Askarov, A., Hunt, S., Sabelfeld, A., Sands, D.: Termination-insensitive noninterference leaks more than just a bit. In Proceedings of ESORICS, volume 5283 of LNCS, pp. 333–348 (2008)

  3. Binkley, D., Harman, M.: A survey of empirical results on program slicing. Adv. Comput. 62, 105–178 (2004)

  4. Binkley, D., Harman, M., Krinke, J.: Empirical study of optimization techniques for massive slicing. ACM Trans. Program. Lang. Syst. 30(1), 3 (2007)

  5. Bouajjani, A., Müller-Olm, M., Touili, T.: Regular symbolic analysis of dynamic networks of pushdown systems. In: Concurrency Theory (CONCUR 2005), pp. 473–487. Springer, LNCS 3653 (2005)

  6. Gawlitza, T.M., Lammich, P., Müller-Olm, M., Seidl, H., Wenner, A.: Join-lock-sensitive forward reachability analysis for concurrent programs with dynamic process creation. In: VMCAI, pp. 199–213 (2011)

  7. Giffhorn, D.: Advanced chopping of sequential and concurrent programs. Softw. Qual. J. 19(2), 239–294 (2011)

  8. Giffhorn, D.: Slicing of concurrent programs and its application to information flow control. PhD thesis, Karlsruher Institut für Technologie, Fakultät für Informatik, May 2012. http://pp.info.uni-karlsruhe.de/uploads/publikationen/giffhorn12thesis.pdf

  9. Giffhorn, D., Hammer, C.: Precise slicing of concurrent programs—an evaluation of precise slicing algorithms for concurrent programs. J. Autom. Softw. Eng. 16(2), 197–234 (2009)

  10. Giffhorn, D., Snelting, G.: Probabilistic noninterference based on program dependence graphs. Karlsruhe Reports in Informatics, 6, April 2012. http://pp.info.uni-karlsruhe.de/uploads/publikationen/giffhorn12kri.pdf

  11. Graf, J.: Speeding up context-, object- and field-sensitive SDG generation. In: Proceedings of 9th SCAM, pp. 105–114, September (2010)

  12. Graf, J., Hecker, M., Mohr, M.: Using JOANA for information flow control in Java programs—a practical guide. In: Proceedings of 6th Working Conference on Programming Languages (ATPS’13), Lecture Notes in Informatics (LNI) 215. Springer, Berlin (2013)

  13. Graf, J., Hecker, M., Mohr, M., Nordhoff, B.: Lock-sensitive interference analysis for Java: combining program dependence graphs with dynamic pushdown networks. In: Proceedings of 1st International Workshop on Interference and Dependence, January (2013)

  14. Hammer, C.: Information Flow Control for Java. PhD thesis, Universität Karlsruhe (TH) (2009)

  15. Hammer, C.: Experiences with PDG-based IFC. In: Massacci, F., Wallach, D., Zannone, N. (eds.) Proceedings of ESSoS’10, volume 5965 of LNCS, pp 44–60. Springer, Berlin (2010)

  16. Hammer, C., Snelting, G.: Flow-sensitive, context-sensitive, and object-sensitive information flow control based on program dependence graphs. Int. J. Inform. Secur. 8(6), December (2009)

  17. Horwitz, S., Prins, J., Reps, T.: On the adequacy of program dependence graphs for representing programs. In: Proceedings of POPL ’88, pp. 146–157, ACM, New York (1988)

  18. Horwitz, S., Reps, T., Binkley, D.: Interprocedural slicing using dependence graphs. ACM Trans. Program. Lang. Syst. 12(1), 26–60 (1990)

  19. Huisman, M., Ngo, T.M.: Scheduler-specific confidentiality for multi-threaded programs and its logic-based verification. In: Proceedings of Formal Verification of Object-Oriented Systems (2011)

  20. Huisman, M., Worah, P., Sunesen, K.: A temporal logic characterisation of observational determinism. In: Proceedings of 19th CSFW, p. 3. IEEE (2006)

  21. Hunt, S., Sands, D.: On flow-sensitive security types. In: POPL ’06, pp. 79–90. ACM (2006)

  22. Krinke, J.: Context-sensitive slicing of concurrent programs. In: Proceedings ESEC/FSE-11, pp. 178–187, ACM, New York (2003)

  23. Krinke, J.: Program slicing. In: Handbook of Software Engineering and Knowledge Engineering, vol. 3: Recent Advances. World Scientific Publishing (2005)

  24. Küsters, R., Truderung, T., Graf, J.: A framework for the cryptographic verification of Java-like programs. In: Proceedings of the 25th IEEE Computer Security Foundations Symposium (CSF 2012). IEEE Computer Society, June (2012)

  25. Li, L., Verbrugge, C.: A practical MHP information analysis for concurrent Java programs. In: Proceedings LCPC’04, volume 3602 of LNCS, pp. 194–208. Springer, Berlin (2004)

  26. Lochbihler, A.: Java and the Java memory model—a unified, machine-checked formalisation. In: Helmut, S., (ed.) Proceedings of ESOP ’12, volume 7211 of LNCS, pp. 497–517, March (2012)

  27. Manson, J., Pugh, W., Adve, S.V.: The Java memory model. In: POPL, pp. 378–391 (2005)

  28. Mantel, H., Sands, D., Sudbrock, H.: Assumptions and guarantees for compositional noninterference. In: CSF, pp. 218–232 (2011)

  29. Mantel, H., Sudbrock, H.: Flexible scheduler-independent security. In: Proceedings ESORICS, volume 6345 of LNCS, pp. 116–133 (2010)

  30. Mantel, H., Sudbrock, H.: Types vs. PDGs in information flow analysis. In: LOPSTR, pp. 106–121 (2012)

  31. Mantel, H., Sudbrock, H., Kraußer, T.: Combining different proof techniques for verifying information flow security. In: Proceedings of LOPSTR, volume 4407 of LNCS, pp. 94–110 (2006)

  32. Muller, S., Chong, S.: Towards a practical secure concurrent language. In: OOPSLA, pp. 57–74 (2012)

  33. Nanda, M.G., Ramesh, S.: Interprocedural slicing of multithreaded programs with applications to Java. ACM Trans. Program. Lang. Syst. 28(6), 1088–1144 (2006)

  34. Naumovich, G., Avrunin, G.S., Clarke, L.A.: An efficient algorithm for computing MHP information for concurrent Java programs. In: Proceedings ESEC/FSE-7, volume 1687 of LNCS, pp. 338–354, London, UK (1999)

  35. Ngo, T.M., Stoelinga, M., Huisman, M.: Confidentiality for probabilistic multi-threaded programs and its verification. In: ESSoS, pp. 107–122 (2013)

  36. Ranganath, V.P., Amtoft, T., Banerjee, A., Hatcliff, J., Dwyer, M.B.: A new foundation for control dependence and slicing for modern program structures. ACM Trans. Program. Lang. Syst. 29(5), 27 (2007)

  37. Reps, T., Horwitz, S., Sagiv, M., Rosay, G.: Speeding up slicing. In: Proceedings of FSE ’94, pp. 11–20, ACM, New York (1994)

  38. Reps, T., Yang, W.: The semantics of program slicing. Technical Report 777, Computer Sciences Department, University of Wisconsin-Madison (1988)

  39. Roscoe, A.W., Woodcock, J., Wulf, L.: Non-interference through determinism. In: ESORICS, volume 875 of LNCS, pp. 33–53 (1994)

  40. Sabelfeld, A., Myers, A.: Language-based information-flow security. IEEE J. Select. Areas Commun. 21(1), 5–19 (January 2003)

  41. Sabelfeld, A.: Confidentiality for multithreaded programs via bisimulation. In: Proceedings of the 5th International Andrei Ershov Memorial Conference, volume 2890 of LNCS, Akademgorodok, Novosibirsk, Russia, July (2003)

  42. Sabelfeld, A., Sands, D.: Probabilistic noninterference for multi-threaded programs. In Proceedings of CSFW ’00, p. 200, Washington, DC, USA. IEEE Computer Society (2000)

  43. Smith, G.: Improved typings for probabilistic noninterference in a multi-threaded language. J. Comput. Secur. 14(6), 591–623 (2006)

  44. Smith, G., Volpano, D.: Secure information flow in a multi-threaded imperative language. In: Proceedings of POPL ’98, pp. 355–364. ACM, January (1998)

  45. Snelting, G.: Combining slicing and constraint solving for validation of measurement software. In SAS ’96: Proceedings of the Third International Symposium on Static Analysis, pp. 332–348. Springer, London (1996)

  46. Snelting, G., Robschink, T., Krinke, J.: Efficient path conditions in dependence graphs for software safety analysis. ACM Trans. Softw. Eng. Methodol. 15(4), 410–457 (2006)

  47. Terauchi, T.: A type system for observational determinism. In: CSF, pp. 287–300 (2008)

  48. Volpano, D.M., Smith, G.: Probabilistic noninterference in a concurrent language. J. Comput. Secur. 7(1) (1999)

  49. Wasserrab, D.: From Formal Semantics to Verified Slicing—A Modular Framework with Applications in Language Based Security. PhD thesis, Karlsruher Institut für Technologie, Fakultät für Informatik, October (2010)

  50. Wasserrab, D.: Information flow noninterference via slicing. Archive of Formal Proofs (2010)

  51. Wasserrab, D., Lohner, D., Snelting, G.: On PDG-based noninterference and its modular proof. In: Proceedings PLAS ’09. ACM, June (2009)

  52. Xin, B., Zhang, X.: Efficient online detection of dynamic control dependence. In: Proceedings of ISSTA, pp. 185–195. ACM (2007)

  53. Zdancewic, S., Myers, A.C.: Observational determinism for concurrent program security. In: Proceedings of CSFW, pp. 29–43. IEEE (2003)

Acknowledgments

We thank the reviewers for their very insightful observations and suggestions. Joachim Breitner, Jürgen Graf, Martin Hecker, and Martin Mohr provided valuable comments.

Author information

Corresponding author

Correspondence to Gregor Snelting.

Additional information

This work was partially supported by DFG grants Sn11/9-2 and Sn11/12-1 in the scope of the priority program “Reliably Secure Software Systems”. It is based on [8] with additional contributions by the second author. A preliminary version was published as an unreviewed technical report [10].

Appendix A: Proof Sketch for Theorems 1 and 2

In the following, we describe the central steps in the soundness proof. All details can be found in [8].

Theorem 1

A program is low-security observational deterministic if

  1. no low-observable operation \(o\) is potentially influenced by an operation reading high input,

  2. no low-observable operation \(o\) is potentially influenced by a data conflict, and

  3. there is no order conflict between any two low-observable operations.

Proof

Let two low-equivalent inputs be given. We have to demonstrate that, under conditions 1.–3., all possible traces resulting from these inputs are low equivalent. The proof proceeds in a sequence of steps.

  1. Definition. For a trace \(T\) and operation \(o\), the trace slice \(S(o,T)\) consists of all operations and dependences in \(T\) which form a path from \( start\) to \(o\) (see Fig. 4). \(S(o,T)\) is thus similar to a dynamic backward slice for \(o\). Similarly, the data slice \(D(o,T)\) is the dynamic backward slice which considers only dynamic data dependences, but not control dependences. Trace and data slices are cycle free. Note that every operation in \(S(o,T)\) has exactly one predecessor on which it is control dependent, the \( start\) operation being the only exception. Note also that \(S(o,T)\) can be soundly approximated by a static slice on \(stmt(o)\), the source code statement containing \(o\).

  2. Lemma. Let \(q\) and \(r\) be two different operations of the same thread, and let \(T\) and \(U\) be two traces which both execute \(q\) and \(r\). Further, let \(T\) execute \(q\) before \(r\). Then \(U\) also executes \(q\) before \(r\). This is a consequence of the fact that any dynamic branching point \(b\) imposes a total execution order on all operations \(\in {DCD(b)}\), because, according to step 1, every operation has at most one control predecessor.

  3. Lemma. Let \(q\), \(r\) be operations which cannot happen in parallel, and let \(T\) and \(U\) be traces which both execute \(q\) and \(r\). Further, let \(T\) execute \(q\) before \(r\). Then \(U\) also executes \(q\) before \(r\). Indeed, if \(q,r\) are in the same thread, this is just the previous lemma. Otherwise, MHP guarantees that \(q\) executes before \(r\)’s thread is forked, or that \(r\) executes after \(q\)’s thread has joined. Hence, \(U\) executes \(q\) before \(r\).

  4. Lemma. Let \((\overline{m}, o, m)\) be a configuration in trace \(T\). \(\overline{T_o} = \overline{m}|_{{use}(o)}\) denotes the part of memory \(\overline{m}\) that contains the variables used by \(o\), and \(T_o = m|_{{def}(o)}\) denotes the part of memory \(m\) that contains the variables defined by \(o\). Now let \(T\) and \(U\) be two traces with low-equivalent inputs. Let \(o\) be an operation. If \(D(o, T) = D(o, U)\) and no operation in these data slices reads high input, then \(\overline{T_o} = \overline{U_o}\) and \(T_o = U_o\). This lemma is proved by induction on the structure of \(D(o,T)\) (remember \(D(o,T)\) is acyclic).

  5. Corollary. Let \(T, U\) be two traces with low-equivalent inputs. Let \(o\) be an operation. If \(S(o, T) = S(o, U)\) and no operation in these trace slices reads high values, then \(\overline{T_o} = \overline{U_o}\) and \(T_o = U_o\). That is, for low-equivalent inputs the low memory parts in both traces are identical, provided no operation depends on high values.

  6. Lemma. Let \(T\) and \(U\) be two finite traces of \(p\) with low-equivalent inputs. \(T\) and \(U\) are low equivalent if for every low-observable operation \(o\), \(S(o, T) = S(o, U)\) holds, no operation in the trace slices depends on high values, and \(T\) and \(U\) execute the same low-observable operations in the same relative order. This lemma, which seems quite natural, gives us an instrument to prove, for finite traces, the low equivalence of traces resulting from low-equivalent input, as required for Theorem 1. The infinite cases are treated in the next two lemmata.

  7. Lemma. Let \(T\) and \(U\) be two infinite traces of \(p\) with low-equivalent inputs such that \({obs}_{{low}}(T)\) is of equal length or longer than \({obs}_{{low}}(U)\) (switch the names if necessary). \(T\) and \(U\) are low equivalent if

    • they execute the shared low-observable operations in the same relative order,

    • for every low-observable operation \(o \in U\), \(S(o, T) = S(o, U)\) holds and no operation in the trace slices reads high input, and

    • for every low-observable operation \(o \in T\) with \(o \notin U\), \(U\) infinitely delays an operation \(b \in {\textit{DCD}}(o)\).

  8. Lemma. Let \(T\) and \(U\) be two traces of \(p\) with low-equivalent inputs, such that \(T\) is finite and \(U\) is infinite. \(T\) and \(U\) are low equivalent if

    • \({obs}_{{low}}(T)\) is of equal length or longer than \({obs}_{{low}}(U)\),

    • \(T\) and \(U\) execute the shared low-observable operations in the same relative order,

    • for every low-observable operation \(o \in U\), \(S(o, T) = S(o, U)\) holds and no operation in the trace slices reads high input, and

    • for every low-observable operation \(o \in T\) with \(o \notin U\), \(U\) infinitely delays an operation \(b \in {\textit{DCD}}(o)\).

  9. Corollary. Traces \(T,U\) are low equivalent if one of the last three lemmata can be applied. What remains to be shown is that the preconditions of these lemmata are a consequence of conditions 1–3 in Theorem 1.

  10. Lemma. If operation \(o\) is not potentially influenced by a data conflict, then \(S(o, T) = S(o, U)\) holds for all traces \(T\) and \(U\) which execute \(o\). Note that only at this point are data or order conflicts exploited. This lemma needs an induction over the length of \(T\). The base case is trivial, because both \(T,U\) consist only of the \( start\) operation, and trivially \(S({start},T)=S({start},U)\). For the induction step, let \(q\) be the next operation in \(T\). If \(o\not \in {DFS}(q)\), then \(q\not \in S(o,T)\), and the induction step trivially holds. Otherwise, one can show that every dynamic data or control dependence \(r \mathop {\dashrightarrow }\limits ^{v} q\) and \(r \mathop {\dashrightarrow }\limits ^{dcd} q\) in \(S(q, T)\) is also in \(S(q, U)\). Furthermore, \(q\) does not depend on additional operations in \(U\). Thus, \(q\) has the same incoming dependences in \(T\) and \(U\). By the induction hypothesis, \(S(r,T)=S(r,U)\) for every \(r\) on which \(q\) is dependent in \(T\) and \(U\). Hence, \(S(q,T)=S(q,U)\).

  11. Lemma (see Sect. 4.3, Lemma 1). Let \(o\) be an operation that is not potentially influenced by a data conflict or an operation reading high input. Let \(T\) be a trace and \(\varTheta \) be the set of possible traces whose inputs are low equivalent to the one of \(T\). If \(o \in T\), then every \(U \in \varTheta \) either executes \(o\) or infinitely delays an operation in \({\textit{DCD}}(o)\).

  12. Lemma. Let \(T\) and \(U\) be two traces with low-equivalent inputs. If there are no order conflicts between any two low-observable operations, then all low-observable operations executed by both \(T\) and \(U\) are executed in the same relative order.

  13. Theorem 1 holds. Lemma 12 guarantees that \(T\) and \(U\) execute the shared low-observable operations in the same relative order. Lemma 11 can be applied to all low-observable operations \(o\) executed by both \(T\) and \(U\); hence, \(S(o, T) = S(o, U)\). Since the potential influence of a low-observable operation \(o\) does not contain operations reading high input, this also holds for the operations in \(S(o, T)\) and \(S(o, U)\). To prove low equivalence of \(T\) and \(U\), we apply one of Lemmata 6, 7, or 8, depending on whether \(T\) and \(U\) are finite or infinite. \(\square \)

Remember that the three conditions of Theorem 1 can naturally be checked using PDGs and slicing. This fact justifies our definition of low-equivalent traces and our PDG-based approach.
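
To make this concrete, the following sketch shows how the three conditions of Theorem 1 could be evaluated on a PDG. It is our own pseudocode-style Java with a hypothetical Pdg interface, not JOANA's actual API; "potentially influenced" is approximated by membership in the static backward slice, and an order conflict is approximated by two low-observable operations that may happen in parallel.

    import java.util.Set;

    // Hypothetical PDG interface: it only names the ingredients used by the criterion
    // (backward slices BS, MHP information, security classification). A sketch, not JOANA's API.
    interface Node { }

    interface Pdg {
        Set<Node> nodes();
        Set<Node> backwardSlice(Node n);                 // BS(n), context-sensitive backward slice
        boolean mayHappenInParallel(Node a, Node b);     // MHP information
        boolean isHighSource(Node n);                    // reads high input
        boolean isLowObservable(Node n);                 // low-observable operation
        boolean isDataConflict(Node a, Node b);          // conflicting accesses to a shared variable
    }

    final class LsodCheck {
        // Returns true iff the three sufficient conditions of Theorem 1 hold (approximated as above).
        static boolean lsodHolds(Pdg pdg) {
            for (Node o : pdg.nodes()) {
                if (!pdg.isLowObservable(o)) continue;
                Set<Node> bs = pdg.backwardSlice(o);     // everything that may influence o
                for (Node n : bs) {
                    // Condition 1: no operation reading high input may influence o.
                    if (pdg.isHighSource(n)) return false;
                    // Condition 2: no data conflict may influence o.
                    for (Node m : pdg.nodes())
                        if (pdg.mayHappenInParallel(n, m) && pdg.isDataConflict(n, m)) return false;
                }
                // Condition 3: no order conflict between any two low-observable operations.
                for (Node p : pdg.nodes())
                    if (!p.equals(o) && pdg.isLowObservable(p) && pdg.mayHappenInParallel(o, p)) return false;
            }
            return true;
        }
    }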

Theorem 2

If a program is LSOD according to definition 6, it is probabilistically noninterferent.

Proof

We write \(\varTheta _i\) for the set of possible traces for input \(i\); for a trace \(r\in \varTheta _i\), \(P_i(r)\) is its execution probability. Now let two low-equivalent inputs \(t,u\) be given. Let \(\varTheta =\varTheta _t\cup \varTheta _u\). Let \(T\in \varTheta \). Let \(\mathfrak {T}=\{r\in \varTheta _t \mid r \sim _{{low}} T\}\), \(\mathfrak {U}=\{r\in \varTheta _u\mid r \sim _{{low}} T\}\). We have to show that \(\sum _{r\in \mathfrak {T}}P_t(r)=\sum _{r\in \mathfrak {U}}P_u(r)\) if LSOD holds.

Due to LSOD, all traces in \(\varTheta \) and thus in \(\mathfrak {T}\cup \mathfrak {U}\) are low equivalent, and \(\forall r\in \varTheta : T\sim _{{low}} r\). Therefore, under LSOD, \(\mathfrak {T}=\varTheta _t\), because the condition \(T\sim _{{low}} r\) in the definition of \(\mathfrak {T}\) always holds. That is, \(\mathfrak {T}\) contains all possible traces for input \(t\); therefore, \(\sum _{r\in \mathfrak {T}}P_t(r)=1\). Similarly, \(\mathfrak {U}=\varTheta _u\) and \(\sum _{r\in \mathfrak {U}}P_u(r)=1\), and the required equality holds. \(\square \)

Cite this article

Giffhorn, D., Snelting, G. A new algorithm for low-deterministic security. Int. J. Inf. Secur. 14, 263–287 (2015). https://doi.org/10.1007/s10207-014-0257-6
