
Revising the Membrane Computing Model for Byzantine Agreement

Conference paper. Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10105).

Abstract

We refine our earlier version of P systems with complex symbols. The new version, called cP systems, enables the creation and manipulation of high-level data structures typical of high-level languages, such as relations (graphs), associative arrays, lists and trees. We assess these capabilities by attempting a revised version of our previously best solution to the Byzantine agreement problem – a famous problem in distributed algorithms, with non-trivial data structures and algorithms. In contrast to our previous solutions, which use a greater-than-exponential number of symbols and rules, the new solution uses a fixed-size alphabet and ruleset, independent of the problem size. The new ruleset closely follows the conceptual description of the algorithm. This revised framework opens the way to further extensions, which may bring P systems closer to the conceptual Actor model.


References

  1. Abd-El-Malek, M., Ganger, G.R., Goodson, G.R., Reiter, M.K., Wylie, J.J.: Fault-scalable Byzantine fault-tolerant services. In: Herbert, A., Birman, K.P. (eds.) SOSP, pp. 59–74. ACM (2005)

  2. Ben-Or, M., Hassidim, A.: Fast quantum Byzantine agreement. In: Gabow, H.N., Fagin, R. (eds.) STOC, pp. 481–485. ACM (2005)

  3. Cachin, C., Kursawe, K., Shoup, V.: Random oracles in Constantinople: practical asynchronous Byzantine agreement using cryptography. J. Cryptol. 18(3), 219–246 (2005)

  4. Castro, M., Liskov, B.: Practical Byzantine fault tolerance and proactive recovery. ACM Trans. Comput. Syst. 20(4), 398–461 (2002)

  5. Ciobanu, G.: Distributed algorithms over communicating membrane systems. Biosystems 70(2), 123–133 (2003)

  6. Ciobanu, G., Desai, R., Kumar, A.: Membrane systems and distributed computing. In: Păun, G., Rozenberg, G., Salomaa, A., Zandron, C. (eds.) WMC 2002. LNCS, vol. 2597, pp. 187–202. Springer, Heidelberg (2003). doi:10.1007/3-540-36490-0_12

  7. Dinneen, M.J., Kim, Y.-B., Nicolescu, R.: A faster P solution for the Byzantine agreement problem. In: Gheorghe, M., Hinze, T., Păun, G., Rozenberg, G., Salomaa, A. (eds.) CMC 2010. LNCS, vol. 6501, pp. 175–197. Springer, Heidelberg (2010). doi:10.1007/978-3-642-18123-8_15

  8. Dinneen, M.J., Kim, Y.B., Nicolescu, R.: A faster P solution for the Byzantine agreement problem. Report CDMTCS-388, Centre for Discrete Mathematics and Theoretical Computer Science, The University of Auckland, Auckland, New Zealand, July 2010. http://www.cs.auckland.ac.nz/CDMTCS/researchreports/388-DKN.pdf

  9. Dinneen, M.J., Kim, Y.B., Nicolescu, R.: P systems and the Byzantine agreement. J. Logic Algebraic Program. 79, 334–349 (2010)

  10. Dinneen, M.J., Kim, Y.B., Nicolescu, R.: P systems and the Byzantine agreement. Report CDMTCS-375, Centre for Discrete Mathematics and Theoretical Computer Science, The University of Auckland, Auckland, New Zealand, January 2010. http://www.cs.auckland.ac.nz/CDMTCS//researchreports/375Byzantine.pdf

  11. Dinneen, M.J., Kim, Y.B., Nicolescu, R.: A faster P solution for the Byzantine agreement problem. In: Gheorghe, M., Păun, G., Hinze, T. (eds.) Eleventh International Conference on Membrane Computing (CMC11), 24–27 August 2010, Friedrich Schiller University, Jena, Germany, pp. 167–192. Pro Business GmbH, Berlin (2015)

  12. Lamport, L., Shostak, R.E., Pease, M.C.: The Byzantine generals problem. ACM Trans. Program. Lang. Syst. 4(3), 382–401 (1982)

  13. Lynch, N.A.: Distributed Algorithms. Morgan Kaufmann Publishers Inc., San Francisco (1996)

  14. Martin, J.P., Alvisi, L.: Fast Byzantine consensus. IEEE Trans. Dependable Secure Comput. 3(3), 202–215 (2006)

  15. Nicolescu, R.: Parallel and distributed algorithms in P systems. In: Gheorghe, M., Păun, G., Rozenberg, G., Salomaa, A., Verlan, S. (eds.) CMC 2011. LNCS, vol. 7184, pp. 35–50. Springer, Heidelberg (2012). doi:10.1007/978-3-642-28024-5_4

  16. Nicolescu, R.: Parallel thinning with complex objects and actors. In: Gheorghe, M., Rozenberg, G., Salomaa, A., Sosík, P., Zandron, C. (eds.) CMC 2014. LNCS, vol. 8961, pp. 330–354. Springer, Heidelberg (2014). doi:10.1007/978-3-319-14370-5_21

  17. Nicolescu, R.: Structured grid algorithms modelled with complex objects. In: Rozenberg, G., Salomaa, A., Sempere, J.M., Zandron, C. (eds.) CMC 2015. LNCS, vol. 9504, pp. 321–337. Springer, Heidelberg (2015). doi:10.1007/978-3-319-28475-0_22

  18. Nicolescu, R., Dinneen, M.J., Kim, Y.B.: Towards structured modelling with hyperdag P systems. Int. J. Comput. Commun. Control 2, 209–222 (2010)

  19. Nicolescu, R., Ipate, F., Wu, H.: Programming P systems with complex objects. In: Alhazov, A., Cojocaru, S., Gheorghe, M., Rogozhin, Y., Rozenberg, G., Salomaa, A. (eds.) CMC 2013. LNCS, vol. 8340, pp. 280–300. Springer, Heidelberg (2014). doi:10.1007/978-3-642-54239-8_20

  20. Nicolescu, R., Ipate, F., Wu, H.: Towards high-level P systems programming using complex objects. In: Alhazov, A., Cojocaru, S., Gheorghe, M., Rogozhin, Y. (eds.) Proceedings of the 14th International Conference on Membrane Computing, CMC14, Chişinău, Moldova, 20–23 August 2013, pp. 255–276. Institute of Mathematics and Computer Science, Academy of Sciences of Moldova, Chişinău (2013)

  21. Nicolescu, R., Wu, H.: Complex objects for complex applications. Rom. J. Inf. Sci. Technol. 17(1), 46–62 (2014)

  22. Pease, M.C., Shostak, R.E., Lamport, L.: Reaching agreement in the presence of faults. J. ACM 27(2), 228–234 (1980)

  23. Păun, G., Rozenberg, G., Salomaa, A. (eds.): The Oxford Handbook of Membrane Computing. Oxford University Press, New York (2010)


Acknowledgments

We are deeply indebted to the co-authors of our former studies on the Byzantine agreement and to the anonymous reviewers, for their most valuable comments and suggestions.

Author information

Corresponding author: Radu Nicolescu.

A Appendix. cP Systems: P Systems with Complex Symbols


We present the details of our complex-symbols framework, slightly revised from our earlier papers [16, 17].

1.1 A.1 Complex Symbols as Subcells

Complex symbols play the roles of cellular micro-compartments or substructures, such as organelles, vesicles or cytoophidium assemblies (“snakes”), which are embedded in cells or travel between cells, but without having the full processing power of a complete cell. In our proposal, complex symbols represent nested data compartments which have no processing power of their own: they are acted upon by the rules of their enclosing cells.

Technically, our complex symbols, also called subcells, are similar to Prolog-like first-order terms, recursively built from multisets of atoms and variables. Atoms are typically denoted by lower case letters (or, occasionally, digits), such as a, b, c, \(\textit{1}\). Variables are typically denoted by uppercase letters, such as X, Y, Z. For improved readability, we also consider anonymous variables, which are denoted by underscores (“_”). Each underscore occurrence represents a new unnamed variable and indicates that something, in which we are not interested, must fill that slot.

Terms are either (i) simple atoms, or (ii) atoms (called functors), followed by one or more parenthesized multisets (called arguments) of other symbols (terms or variables), e.g. \(a(b^2 X), a(X^2 c(Y)), a(b^2)(c(Z))\). Functors that are followed by more than one parenthesized argument are called curried (by analogy to functional programming) and, as we see later, are useful to precisely describe deep “micro-surgical” changes which only affect inner nested symbols, without directly touching their enclosing outer symbols. Terms that do not contain variables are called ground, e.g.:

  • Ground terms: a, \(a(\lambda )\), a(b), a(bc), \(a(b^2 c)\), a(b(c)), \(a(bc(\lambda ))\), a(b(c)d(e)), \(a(b(c)d(e(\lambda )))\), \(a(bc^2 d)\); or, a curried form: \(a(b^2)(c(d)e^3)\).

  • Terms which are not ground: a(X), a(bX), a(b(X)), a(XY), \(a(X^2)\), a(XdY), a(Xc()), a(b(X)d(e)), a(b(c)d(Y)), a(b(X)d(e(Y))), \(a(b(X^2)d(e(Xf^2)))\); or, a curried form: \(a(b(X))(d(Y)e^3)\); also, using anonymous variables: \(a(b\_)\), \(a(X\_)\), \(a(b(X)d(e(\_)))\).

Note that we may abbreviate the expression of complex symbols by removing inner \(\lambda \)’s as explicit references to the empty multiset, e.g. \(a(\lambda ) = a()\).

Complex symbols (subcells, terms) can be formally defined by the following grammar:

[Figure: the formal grammar of complex symbols; not preserved in this extraction.]

Natural Numbers. Natural numbers can be represented via bags containing repeated occurrences of the same atom. For example, considering that \(\textit{1}\) represents an ad-hoc unary digit, then the following complex symbols can be used to describe the contents of a virtual integer variable a: \(a () = a(\lambda )\) — the value of a is 0; \(a(\textit{1}^3)\) — the value of a is 3. For concise expressions, we may alias these number representations by their corresponding numbers, e.g. \(a() \equiv a(0), b(\textit{1}^3) \equiv b(3)\). Nicolescu et al. [19,20,21] show how arithmetic operations can be efficiently modelled by P systems with complex symbols.
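To illustrate this encoding, the unary-number representation can be sketched in Python (a hypothetical host-language model, not part of cP systems themselves), with `Counter` playing the role of the bag and multiset union realising addition:

```python
from collections import Counter

# Hypothetical model: the contents of a(1^n) as a bag of unary digits '1'.
def make_nat(n):
    return Counter({'1': n})     # a(1^n); n = 0 gives a() = a(lambda)

def add(a, b):
    return a + b                 # multiset union realises addition

def value(a):
    return a['1']                # count the unary digits

# a(1^3) combined with a(1^4) yields a(1^7)
print(value(add(make_nat(3), make_nat(4))))
```

Multiplication and comparison can be modelled similarly, as repeated unions and count comparisons over the same bags.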

Lists. Using complex symbols, the list [uvw] can be represented as

\(\gamma (u~\gamma (v~\gamma (w~\gamma ())))\), where the ad-hoc atom \(\gamma \) represents the list constructor cons and \(\gamma ()\) the empty list. Hiding the less relevant representation choices, we may alias this list by the more expressive notation \(\gamma [u, v, w]\).
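The nesting of the \(\gamma \) constructor can be sketched in Python (a hypothetical encoding, chosen only for illustration), with `(head, tail)` pairs standing in for \(\gamma (u~\gamma (\ldots ))\) and `None` for the empty list \(\gamma ()\):

```python
# Hypothetical encoding: γ(u γ(v γ(w γ()))) as nested (head, tail) pairs,
# with None standing in for the empty list γ().
def cons(head, tail):
    return (head, tail)

def from_list(xs):
    cell = None
    for x in reversed(xs):       # build γ(u γ(v γ(w γ()))) inside-out
        cell = cons(x, cell)
    return cell

def to_list(cell):
    out = []
    while cell is not None:
        out.append(cell[0])
        cell = cell[1]
    return out

print(to_list(from_list(['u', 'v', 'w'])))
```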

Trees. Consider the binary tree \(z = (a, (b), (c, (d), (e)))\), i.e. z points to a root node which has: (i) the value a; (ii) a left node with value b; and (iii) a right node with value c, left leaf d, and right leaf e. Using complex symbols, tree y can be represented as \(z(a ~ \phi (b) ~ \psi (c ~ \phi (d) ~ \psi (e)))\), where ad-hoc atoms \(\phi , \psi \) introduce left subtrees, right subtrees (respectively).

Associative Arrays. Consider the associative array \(\{ \textit{1}\mapsto a, \textit{1}^3 \mapsto c, \textit{1}^7 \mapsto g \}\), where the “mapsto” operator, \(\mapsto \), indicates key-value mappings. Using complex symbols, this array can be represented as a multiset with three items, \(\{ \, \mu (\kappa (\textit{1})\,\upsilon (a)), \; \mu (\kappa (\textit{1}^3)\,\upsilon (c)), \; \mu (\kappa (\textit{1}^7)\,\upsilon (g)) \, \}\), where ad-hoc atoms \(\mu , \kappa , \upsilon \) introduce mappings, keys, values (respectively). Hiding the less relevant representation choices, we may alias the items of this multiset by the more expressive notation \(\{ \, (\textit{1}\mathop {\mapsto }\limits ^{\mu }a), \; (\textit{1}^3 \mathop {\mapsto }\limits ^{\mu }c), \; (\textit{1}^7 \mathop {\mapsto }\limits ^{\mu }g) \, \}\) \(\equiv \) \(\{ \, \textit{1}\mathop {\mapsto }\limits ^{\mu }a, \; \textit{1}^3 \mathop {\mapsto }\limits ^{\mu }c, \; \textit{1}^7 \mathop {\mapsto }\limits ^{\mu }g \, \}\). In this context, the “mapsto” operator, \(\mapsto \), is considered to have a high associative priority, so the enclosing parentheses are mostly required for increasing the readability (e.g. in text). If we are not interested in the actual mapping value, instead of \((a \mathop {\mapsto }\limits ^{\mu }\_)\), we refer to this term by the succinct abbreviation x[a].
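A lookup over this multiset-of-mappings representation can be sketched in Python (hypothetical encoding: each \(\mu (\kappa (\textit{1}^k)\,\upsilon (v))\) term becomes a `(key_count, value)` pair, and lookup scans the bag):

```python
# Hypothetical encoding: {1 -> a, 1^3 -> c, 1^7 -> g} as a multiset of
# mu(kappa(1^k) upsilon(v)) terms, here a list of (key-count, value) pairs.
array = [(1, 'a'), (3, 'c'), (7, 'g')]

def lookup(array, k):
    for key, val in array:       # scan the bag of mu(...) terms
        if key == k:
            return val
    return None                  # no mapping with key 1^k

print(lookup(array, 3))
```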

Unification. All terms (ground or not) can be (asymmetrically) matched against ground terms, using an ad-hoc version of pattern matching, more precisely, a one-way first-order syntactic unification, where an atom can only match another copy of itself, and a variable can match any bag of ground terms (including the empty bag, \(\lambda \)). This may create a combinatorial non-determinism, when a combination of two or more variables is matched against the same bag, in which case an arbitrary matching is chosen. For example:

  • Matching \(a(b(X)fY) = a(b(cd(e))f^2g)\) deterministically creates a single set of unifiers: \(X, Y = cd(e), fg\).

  • Matching \(a(XY^2) = a(de^2f)\) deterministically creates a single set of unifiers: \(X, Y = df, e\).

  • Matching \(a(XY) = a(df)\) non-deterministically creates one of the following four sets of unifiers: \(X, Y = \lambda , df\); \(X, Y = df, \lambda \); \(X, Y = d, f\); \(X, Y = f, d\).
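A heavily simplified version of this one-way matching can be sketched in Python (assumptions: flat bags only, lowercase strings are atoms, uppercase strings are distinct variables, nested terms are omitted, and only one of the possibly many unifiers is produced):

```python
from collections import Counter

# One-way matching of a flat pattern bag against a ground bag (simplified
# sketch): atoms consume one copy of themselves; any leftover atoms are
# absorbed by the variables (here, greedily by the first one).
def match(pattern, ground):
    bag = Counter(ground)
    variables = []
    for item in pattern:
        if item[0].isupper():
            variables.append(item)
        elif bag[item] > 0:
            bag[item] -= 1               # an atom matches one copy of itself
        else:
            return None                  # atom with no copy left to match
    if not variables:
        return {} if sum(bag.values()) == 0 else None
    subst = {v: Counter() for v in variables}
    subst[variables[0]] = +bag           # one arbitrary choice of unifier
    return subst

# a(b X) matched against a(b c d): X = {c, d}
print(match(['b', 'X'], ['b', 'c', 'd']))
```

Enumerating all unifiers, as needed for the non-deterministic case above, would instead require backtracking over every split of the leftover bag among the variables.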

Performance Note. If the rules avoid any matching non-determinism, then this proposal should not affect the performance of P simulators running on existing machines. Assuming that bags are already taken care of, e.g. via hash-tables, our proposed unification probably adds an almost linear factor. Let us recall that, in similar contexts (no occurs check needed), Prolog unification algorithms can run in O(ng(n)) steps, where g is the inverse Ackermann function. This conjecture must still be proven, though, as the novel presence of multisets may affect the performance.

1.2 A.2 Generic Rules

Rules use states and are applied top-down, in the so-called weak priority order. Rules may contain any kind of terms, ground and non-ground. In concrete models, cells can only contain ground terms. Cells which contain non-ground terms can only be used to define abstract models, i.e. high-level patterns which characterise families of similar concrete models.

Pattern Matching. Rules are matched against cell contents using the above discussed pattern matching, which involves the rule’s left-hand side, promoters and inhibitors. Moreover, the matching is valid only if, after substituting variables by their values, the rule’s right-hand side contains ground terms only (so no free variables are injected in the cell or sent to its neighbours), as illustrated by the following sample scenario:

  • The cell’s current content includes the ground term:

    \(n(a \, \phi (b \, \phi (c) \, \psi (d)) \, \psi (e))\)

  • The following rewriting rule is considered:

    \(n(X \, \phi (Y \, \phi (Y_1) \, \psi (Y_2)) \, \psi (Z)) ~ \rightarrow ~ v(X) \, n(Y \, \phi (Y_2) \, \psi (Y_1)) \, v(Z)\)

  • Our pattern matching determines the following unifiers:

    \(X = a\), \(Y = b\), \(Y_1 = c\), \( Y_2 = d\), \(Z = e\).

  • This is a valid matching and, after substitutions, the rule’s right-hand side gives the new content:

    \(v(a) ~ n(b \, \phi (d) \, \psi (c)) ~ v(e)\).

Generic Rules Format. We consider rules of the following generic format (we call this format generic, because it actually defines templates involving variables):

[Figure: the generic rule format, with states, left-hand side, right-hand side, promoters, inhibitors and mode subscript \(\alpha \); not preserved in this extraction.]

Where:

  • All symbols, including states, promoters and inhibitors, are multisets of terms, possibly containing variables (which can be matched as previously described).

  • Parentheses can be used to clarify the association of symbols, but otherwise have no meaning of their own.

  • Subscript \(\alpha \) \(\in \) \(\{\scriptstyle \mathtt {min}\displaystyle \), \(\scriptstyle \mathtt {max}\displaystyle \}\) \(\times \) \(\{\scriptstyle \mathtt {min}\displaystyle \), \(\scriptstyle \mathtt {max}\displaystyle \}\) indicates a combined instantiation and rewriting mode, as further discussed in the example below.

  • Out-symbols are sent, at the end of the step, to the cell’s structural neighbours. These symbols are enclosed in round parentheses which further indicate their destinations, above abbreviated as \(\delta \). The most usual scenarios include:

  • \((a)\downarrow _i\) indicates that a is sent to child i (unicast);

  • \((a)\uparrow _i\) indicates that a is sent to parent i (unicast);

  • \((a)\downarrow _\forall \) indicates that a is sent to all children (broadcast);

  • \((a)\uparrow _\forall \) indicates that a is sent to all parents (broadcast);

  • \((a)\updownarrow _\forall \) indicates that a is sent to all neighbours (broadcast).

All symbols sent via one generic rule to the same destination form one single message and they travel together as one single block (even if the generic rule has multiple instantiations).

  • Both immediate-symbols and in-symbols remain in the current cell, but there is a subtle difference:

  • in-symbols become available after the end of the current step only, as in traditional P systems (we can imagine that these are sent via an ad-hoc loopback channel);

  • immediate-symbols become immediately available (i) to the current rule, if it uses the \(\mathtt {max}\) instantiation mode, and (ii) always, to the succeeding rules (in weak priority order).

Immediate symbols can substantially improve runtime performance, which may be required for two main reasons: (i) to achieve parity with the best traditional algorithms, and (ii) to ensure correctness when proper timing is logically critical. However, they are seldom required and are not used in the systems presented in this paper.

Example. To explain our combined instantiation and rewriting mode, let us consider a cell, \(\sigma \), containing three counter-like complex symbols, \(c(\textit{1}^2)\), \(c(\textit{1}^2)\), \(c(\textit{1}^3)\), and the four possible instantiation\(\otimes \)rewriting modes of the following “decrementing” rule:

[Figure: a generic decrementing rule of the form \(S_1 ~c(\textit{1}X) \rightarrow _\alpha S_2 ~c(X)\), as reconstructed from the instances below.]
  1.

If \(\alpha = \, \scriptstyle \mathtt {{\mathtt {min}\otimes \mathtt {min}}}\displaystyle \), rule \(\rho _{\mathtt {min}\otimes \mathtt {min}}\) nondeterministically generates and applies (in the \(\scriptstyle \mathtt {min}\displaystyle \) mode) one of the following two rule instances:

    $$\begin{aligned}&(\rho '_1) \quad ~S_1 ~c(\textit{1}^2) \rightarrow _{\mathtt {min}} S_2 ~c(\textit{1}) \quad \mathrm { or}\\&(\rho ''_1) \quad ~S_1 ~c(\textit{1}^3) \rightarrow _{\mathtt {min}} S_2 ~c(\textit{1}^2). \end{aligned}$$

    Using \((\rho '_1)\), cell \(\sigma \) ends with counters \(c(\textit{1})\), \(c(\textit{1}^2)\), \(c(\textit{1}^3)\). Using \((\rho ''_1)\), cell \(\sigma \) ends with counters \(c(\textit{1}^2)\), \(c(\textit{1}^2)\), \(c(\textit{1}^2)\).

  2.

    If \(\alpha = \, \scriptstyle \mathtt {{\mathtt {max}\otimes \mathtt {min}}}\displaystyle \), rule \(\rho _{\mathtt {max}\otimes \mathtt {min}}\) first generates and then applies (in the \(\scriptstyle \mathtt {min}\displaystyle \) mode) the following two rule instances:

    $$\begin{aligned}&(\rho '_2) \quad ~S_1 ~c(\textit{1}^2) \rightarrow _{\mathtt {min}} S_2 ~c(\textit{1}) \quad \mathrm { and}\\&(\rho ''_2) \quad ~S_1 ~c(\textit{1}^3) \rightarrow _{\mathtt {min}} S_2 ~c(\textit{1}^2). \end{aligned}$$

    Using \((\rho '_2)\) and \((\rho ''_2)\), cell \(\sigma \) ends with counters \(c(\textit{1})\), \(c(\textit{1}^2)\), \(c(\textit{1}^2)\).

  3.

    If \(\alpha = \, \scriptstyle \mathtt {{\mathtt {min}\otimes \mathtt {max}}}\displaystyle \), rule \(\rho _{\mathtt {min}\otimes \mathtt {max}}\) nondeterministically generates and applies (in the \(\scriptstyle \mathtt {max}\displaystyle \) mode) one of the following rule instances:

    $$\begin{aligned}&(\rho '_3) \quad ~S_1 ~c(\textit{1}^2) \rightarrow _{\mathtt {max}} S_2 ~c(\textit{1}) \quad \mathrm { or}\\&(\rho ''_3) \quad ~S_1 ~c(\textit{1}^3) \rightarrow _{\mathtt {max}} S_2 ~c(\textit{1}^2). \end{aligned}$$

    Using \((\rho '_3)\), cell \(\sigma \) ends with counters \(c(\textit{1})\), \(c(\textit{1})\), \(c(\textit{1}^3)\). Using \((\rho ''_3)\), cell \(\sigma \) ends with counters \(c(\textit{1}^2)\), \(c(\textit{1}^2)\), \(c(\textit{1}^2)\).

  4.

If \(\alpha = \, \scriptstyle \mathtt {{\mathtt {max}\otimes \mathtt {max}}}\displaystyle \), rule \(\rho _{\mathtt {max}\otimes \mathtt {max}}\) first generates and then applies (in the \(\scriptstyle \mathtt {max}\displaystyle \) mode) the following two rule instances:

    $$\begin{aligned}&(\rho '_4) \quad ~S_1 ~c(\textit{1}^2) \rightarrow _{\mathtt {max}} S_2 ~c(\textit{1}) \quad \mathrm {and}\\&(\rho ''_4) \quad ~S_1 ~c(\textit{1}^3) \rightarrow _{\mathtt {max}} S_2 ~c(\textit{1}^2). \end{aligned}$$

    Using \((\rho '_4)\) and \((\rho ''_4)\), cell \(\sigma \) ends with counters \(c(\textit{1})\), \(c(\textit{1})\), \(c(\textit{1}^2)\).

The interpretation of \(\scriptstyle \mathtt {{\mathtt {min}\otimes \mathtt {min}}}\displaystyle \), \(\scriptstyle \mathtt {{\mathtt {min}\otimes \mathtt {max}}}\displaystyle \) and \(\scriptstyle \mathtt {{\mathtt {max}\otimes \mathtt {max}}}\displaystyle \) modes is straightforward. While other interpretations could be considered, the mode \(\scriptstyle \mathtt {{\mathtt {max}\otimes \mathtt {min}}}\displaystyle \) indicates that the generic rule is instantiated as many times as possible, without superfluous instances (i.e. without duplicates or instances which are not applicable) and each one of the instantiated rules is applied once, if possible.

If a rule does not contain any non-ground term, then it has only one possible instantiation: itself. Thus, in this case, the instantiation is an idempotent transformation, and the modes \(\scriptstyle \mathtt {{\mathtt {min}\otimes \mathtt {min}}}\displaystyle \), \(\scriptstyle \mathtt {{\mathtt {min}\otimes \mathtt {max}}}\displaystyle \), \(\scriptstyle \mathtt {{\mathtt {max}\otimes \mathtt {min}}}\displaystyle \), \(\scriptstyle \mathtt {{\mathtt {max}\otimes \mathtt {max}}}\displaystyle \) fall back onto traditional modes \(\scriptstyle \mathtt {min}\displaystyle \), \(\scriptstyle \mathtt {max}\displaystyle \), \(\scriptstyle \mathtt {min}\displaystyle \), \(\scriptstyle \mathtt {max}\displaystyle \), respectively.
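The four outcomes of the example above can be checked with a small Python simulation (a sketch under the assumption that counters \(c(\textit{1}^k)\) are modelled by their integer values; the `pick` parameter stands in for the nondeterministic choice in the min instantiation modes):

```python
# Cell contents c(1^2), c(1^2), c(1^3) as the multiset [2, 2, 3]; the generic
# decrementing rule yields one ground instance per distinct counter value.
def instances(contents):
    return sorted(set(contents))          # the possible ground instances

def apply_min(contents, k):
    out = list(contents)
    out.remove(k)                         # rewrite one matching counter
    out.append(k - 1)
    return out

def apply_max(contents, k):
    return [v - 1 if v == k else v for v in contents]  # rewrite all matches

def run(mode, contents, pick=0):
    gen, rew = mode.split('*')            # e.g. 'max*min' for max (x) min
    chosen = instances(contents)
    if gen == 'min':
        chosen = [chosen[pick]]           # pick models the nondeterministic choice
    for k in chosen:
        contents = (apply_max if rew == 'max' else apply_min)(contents, k)
    return sorted(contents)

print(run('max*min', [2, 2, 3]))          # c(1), c(1^2), c(1^2)
print(run('max*max', [2, 2, 3]))          # c(1), c(1), c(1^2)
```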

Special Cases. Simple scenarios involving generic rules are sometimes semantically equivalent to loop-based sets of non-generic rules. For example, consider the rule

$$ S_1 ~ a(x(I) \; y(J)) ~ \rightarrow _{\mathtt {max}\otimes \mathtt {min}}~ S_2 ~ b(I) ~ c(J), $$

where the cell’s contents guarantee that I and J only match integers in ranges [1, n] and [1, m], respectively. Under these assumptions, this rule is equivalent to the following set of non-generic rules:

$$ S_1 ~ a_{i,j} ~ \rightarrow _\mathtt {min}S_2 ~ b_i ~ c_j, ~ \forall i \in [1,n], j \in [1,m]. $$

However, unification is a much more powerful concept, which cannot be generally reduced to simple loops.

Micro-Surgery: Operations that Only Affect Inner Nested Symbols. Such operations improve both the crispness and the efficiency of the rules. Consider a cell that contains the symbols o(abpq) and r, and a naive rule which attempts to change the inner b to a d, if an inner p and a top-level r are also present:

$$ S_1 ~ o(b R) ~ \rightarrow _{\mathtt {min}\otimes \mathtt {min}}~ S_2 ~ o(d R) ~ \mid ~ o(p \_) ~ r. $$

Unless we change the “standard” application rules, this rule fails, because symbol p is locked as a promoter and cannot be changed at the same time (not even by copy/paste from the left-hand side R to the right-hand side R). We solve this problem without changing the standard application rules, by adding an access path to the inner symbols needed. The access path is a slash delimited list of outer symbols, in nesting order, which opens an inner bag for usual rewriting operations; these outer symbols on the path are not themselves touched. For example, this modified rule solves the problem by opening the contents of o for processing:

$$ S_1 ~ o/b ~ \rightarrow _{\mathtt {min}\otimes \mathtt {min}}~ S_2 ~ o/d ~ \mid ~ o/p ~ r. $$

This extension helps even more when we want to localise the changes to inner symbols of a specific outer symbol. For example, consider a similar operation that needs to be applied to the innermost contents of the symbol \(o(x(i) \, y(j))(abpq)\), identified by its coordinates i, j:

$$ S_1 ~ o(x(i) \, y(j))/b ~ \rightarrow _{\mathtt {min}\otimes \mathtt {min}}~ S_2 ~ o(x(i) \, y(j))/d ~ \mid ~ o(x(i) \, y(j))/p ~ r. $$

If all or most symbols involved share the same path, then the path could qualify the whole rule; existing top-level symbols could be qualified by the usual path conventions, e.g., in our case, r could be explicitly qualified by either the / or the ../ prefix:

$$ o(x(i) \, y(j))~{:}{:}~S_1 ~ b ~ \rightarrow _{\mathtt {min}\otimes \mathtt {min}}~ S_2 ~ d ~ \mid ~ p ~ ../r. $$

Note that the usual rulesets are just a special case of this extension, where all rules are by default qualified with the root path /.
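The intent of the path-qualified rule above can be sketched in Python (a hypothetical encoding: the cell is a dictionary of top-level symbols, each with a bag of inner atoms; the one-level path opens the bag of o, while the promoters p and r are only tested, never rewritten):

```python
from collections import Counter

# Hypothetical encoding of the example: the cell holds o(a b p q) and r; a
# path-qualified rule rewrites b -> d inside o, promoted by the inner p and
# the top-level r, without touching o itself.
cell = {'o': Counter('abpq'), 'r': Counter()}

def rewrite_inner(cell, path, old, new, inner_promoter, top_promoter):
    if top_promoter not in cell:
        return False                      # top-level promoter r is absent
    bag = cell[path]                      # open the bag at path (one level here)
    if bag[inner_promoter] == 0 or bag[old] == 0:
        return False                      # inner promoter or target missing
    bag[old] -= 1                         # the promoters themselves stay untouched
    if bag[old] == 0:
        del bag[old]
    bag[new] += 1
    return True

rewrite_inner(cell, 'o', 'b', 'd', 'p', 'r')
print(sorted(cell['o'].elements()))       # o now contains a d p q
```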

Note. For all modes, the instantiations are conceptually created when rules are tested for applicability and are also ephemeral, i.e. they disappear at the end of the step. P system implementations are encouraged to directly apply high-level generic rules, if this is more efficient (it usually is); they may, but need not, start by transforming high-level rules into low-level rules, by way of instantiations.

Benefits. This type of generic rule allows (i) reasonably fast parsing and processing of subcomponents, and (ii) algorithm descriptions with fixed-size alphabets and fixed-size rulesets, independent of the size of the problem and the number of cells in the system (often impossible with only atomic symbols).

Synchronous vs Asynchronous. In our models, we do not make any syntactic difference between the synchronous and asynchronous scenarios; this is strictly a runtime assumption [15]. Any model is able to run on both the synchronous and asynchronous runtime “engines”, albeit the results may differ.

In the synchronous scenario of traditional P systems, all rules in a step take together exactly one time unit and then all message exchanges (including loopback messages for in-symbols) are performed at the end of the step, in zero time (i.e. instantaneously). Alternatively, but logically equivalent, we here consider that rules in a step are performed in zero time (i.e. instantaneously) and then all message exchanges are performed in exactly one time unit. We prefer the second interpretation, because it allows us to interpret synchronous runs as special cases of asynchronous runs.

In the asynchronous scenario, we still consider that rules in a step are performed in zero time (i.e. instantaneously), but then, to arrive at its destination, each message may take any finite real time in the (0, 1] interval (i.e. travelling times are typically scaled to the travel time of the slowest message). Additionally, unless otherwise specified, we also assume that messages travelling on the same directed arc follow a FIFO rule, i.e. no fast message can overtake a slower one. This definition closely emulates the standard definition used for asynchronous distributed algorithms [13]. Clearly, the asynchronous model is highly non-deterministic, but most useful algorithms manage to remain confluent.

In both scenarios, we need to cater for a particularity of P systems, where a cell may remain active after completing its current step and then automatically start a new step, without necessarily receiving any new message. In contrast, in classical distributed models, nodes may only become active after receiving a new message, so there is no self-activation without messaging. We can solve this issue by (i) assuming a hidden self-activation message that cells can post to themselves at the end of the step, and (ii) postulating that such self-addressed messages arrive no later than any other messages coming from other cells.

Obviously, any algorithm that works correctly in the asynchronous mode will also work correctly in the synchronous mode, but the converse is not generally true: extra care may be needed to transform a correct synchronous algorithm into a correct asynchronous one; there are also general control layers, such as synchronisers, that can attempt to run a synchronous algorithm on an existing asynchronous runtime, but this does not always work [13].

Complexity Measures. We consider a set of basic complexity measures similar to the ones used in the traditional distributed algorithms field.

  • Time complexity: the supremum over all possible running times (which, although not perfect, is the most usual definition for the asynchronous time complexity).

  • Message complexity: the number of exchanged messages.

  • Atomic complexity: the number of atoms summed over all exchanged messages (analogous to the traditional bit complexity).


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Nicolescu, R. (2017). Revising the Membrane Computing Model for Byzantine Agreement. In: Leporati, A., Rozenberg, G., Salomaa, A., Zandron, C. (eds) Membrane Computing. CMC 2016. Lecture Notes in Computer Science(), vol 10105. Springer, Cham. https://doi.org/10.1007/978-3-319-54072-6_20

  • Print ISBN: 978-3-319-54071-9

  • Online ISBN: 978-3-319-54072-6