
Correct-by-construction synthesis of model transformations using transformation patterns

Regular Paper, published in Software & Systems Modeling

Abstract

Model transformations are an essential part of model-based development approaches such as Model-driven Architecture (MDA) and Model-driven Development (MDD). Model transformations are used to refine and abstract models, to re-express models in a new modelling language, and to analyse, refactor, compare and improve models. The correctness of model transformations is therefore critically important for the successful application of model-based development: software developers should be able to rely upon the correct processing of their models by transformations in the same way that they rely upon compilers to produce correct executable versions of their programs. In this paper, we address this problem by defining standard structures for model transformation specifications and implementations, which serve as patterns and strategies for constructing a wide range of model transformations. These are incorporated into a tool-supported process which automatically synthesises implementations of model transformations from their specifications; these implementations are correct by construction with respect to their specifications.



References

  1. Agrawal, A., Vizhanyo, A., Kalmar, Z., Shi, F., Narayanan, A., Karsai, G.: Reusable idioms and patterns in graph transformation languages. In: Electronic notes in Theoretical Computer Science, pp. 181–192 (2005)

  2. Akehurst, D., Howells, W., McDonald-Maier, K.: Kent model transformation language. In: Model Transformations in Practice (2005)

  3. van Amstel, M., Bosems, S., Kurtev, I., Pires, L.F.: Performance in model transformations: experiments with ATL and QVT, ICMT 2011. In: LNCS, vol. 6707, pp. 198–212. Springer, Berlin (2011)

  4. Anastasakis, K., Bordbar, B., Georg, G., Ray, I.: On challenges of model transformation from UML to Alloy. SoSyM 9(1) (2010)

  5. Bezivin, J., Jouault, F., Palies, J.: Towards Model Transformation Design Patterns. University of Nantes, ATLAS group (2003)

  6. Cabot, J., Clariso, R., Guerra, E., De Lara, J.: Verification and validation of declarative model-to-model transformations through invariants. J. Syst. Softw. 83(2), 283–302 (2010)

  7. Cleaveland, C.: Program Generators with XML and Java. Prentice Hall, Englewood Cliffs (2001)

  8. Cuadrado, J.S., Jouault, F., Molina, J.G., Bezivin, J.: Optimization patterns for OCL-based model transformations, MODELS 2008. In: LNCS, vol. 5421. Springer, Berlin (2008)

  9. Cuadrado, J., Molina, J.: Modularisation of model transformations through a phasing mechanism. Softw. Syst. Modell. 8(3), 325–345 (2009)

  10. Czarnecki, K., Helsen, S.: Classification of Model Transformation Approaches, OOPSLA 03 workshop on Generative Techniques in the context of Model-Driven Architecture, OOPSLA (2003)

  11. Czarnecki, K., Helsen, S.: Feature-based survey of model transformation approaches. IBM Syst. J. 45(3), 621–645 (2006)

  12. Duddy, K., Gerber, A., Lawley, M., Raymond, K., Steel, J.: Model transformation: a declarative, reusable pattern approach. In: 7th International Enterprise Distributed Object Computing Conference (EDOC ’03) (2003)

  13. Eclipse organisation, EMF Ecore specification (2011). http://www.eclipse.org/emf

  14. ESA, Hood Reference Manual R4 (2011). http://www.esa.int

  15. France, R., Ghosh, S., Song, E., Kim, D.: A metamodelling approach to pattern-based model refactoring. IEEE Softw. 20(5), 52–58 (2003)

  16. France, R., Rumpe, B.: Model-driven development of complex software: a research roadmap, FOSE ’07. IEEE (2007)

  17. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Menlo Park (1994)

  18. Goldschmidt, T., Wachsmuth, G.: Refinement Transformation Support for QVT Relational Transformations. FZI, Karlsruhe (2011)

  19. Van Gorp, P., Mazanek, S., Rensink, A.: Live challenge problem, TTC 2010. Malaga (2010)

  20. Guerra, E., de Lara, J., Kolovos, D., Paige, R., Marchi dos Santos, O.: transML: a family of languages to model model transformations, MODELS 2010. In: LNCS, vol. 6394. Springer, Berlin (2010)

  21. Iacob, M.E., Steen, M.W.A., Heerink, L.: Reusable model transformation patterns. In: Enterprise Distributed Object Computing Conference (2008)

  22. Johannes, J., Zschaler, S., Fernandez, M., Castillo, A., Kolovos, D., Paige, R.: Abstracting complex languages through transformation and composition, MODELS 2009. In: LNCS, vol. 5795, pp. 546–550. Springer, Berlin (2009)

  23. Kermeta (2010). http://www.kermeta.org

  24. Kolahdouz-Rahimi, L., Lano, K., Pillay, S., Troya, J., Van Gorp, P.: Goal-oriented measurement of model transformation methods. Sci. Comput. Program. (2012) (submitted)

  25. Kolovos, D., Paige, R., Polack, F.: The Epsilon Transformation Language. In: ICMT 2008, LNCS, vol. 5063, pp. 46–60, Springer, Berlin (2008)

  26. Kurtev, I., Van den Berg, K., Jouault, F.: Rule-based modularisation in model transformation languages illustrated with ATL. In: Proceedings 2006 ACM Symposium on Applied Computing (SAC 06), pp. 1202–1209. ACM Press, New York (2006)

  27. Lano, K.: The B Language and Method. Springer, Berlin (1996)

  28. Lano, K.: A catalogue of UML model transformations (2006). http://www.dcs.kcl.ac.uk/staff/kcl/tcat.pdf

  29. Lano, K. (ed.): UML 2 Semantics and Applications. Wiley, New York (2009)

  30. Lano, K., Kolahdouz-Rahimi, S.: Slicing of UML models using Model Transformations, MODELS 2010. In: LNCS, vol. 6395, pp. 228–242. Springer, Berlin (2010)

  31. Lano, K., Kolahdouz-Rahimi, S.: Migration case study using UML-RSDS, TTC 2010. Malaga, Spain (2010)

  32. Lano, K., Kolahdouz-Rahimi, S.: Model-driven development of model transformations, ICMT (2011)

  33. Lano, K., Kolahdouz-Rahimi, S.: Specification of the “Hello World” case study, TTC (2011)

  34. Lano, K., Kolahdouz-Rahimi, S.: Specification of the GMF migration case study, TTC (2011)

  35. Lano, K., Kolahdouz-Rahimi, S.: Slicing techniques for UML models, JOT (2011)

  36. Lano, K., Kolahdouz-Rahimi, S.: Composition of model transformations in UML-RSDS. In: Lano, K., Zschaler, S., Tratt, L. (eds.) Composition and Evolution of Model Transformations. Bentham Science Press, United Arab Emirates (2012)

  37. Lano, K., Kolahdouz-Rahimi, S.: Transformation invertibility and interpretations in UML-RSDS. Dept. of Informatics, King's College London (2012)

  38. Lano, K., Kolahdouz-Rahimi, S., Clark, T.: Comparing Verification Techniques for Model Transformations. Modevva workshop, MODELS (2012)

  39. Mens, T., Czarnecki, K., Van Gorp, P.: A Taxonomy of Model Transformations, Dagstuhl Seminar Proceedings 04101 (2005)

  40. Markovic, S., Baar, T.: Semantics of OCL Specified with QVT. Softw. Syst. Modell. 7(4), 399–422 (2008)

  41. OMG: Query/View/Transformation Specification, ptc/05-11-01 (2005)

  42. OMG: Query/View/Transformation Specification, annex A (2010)

  43. OMG: Model-Driven Architecture (2004). http://www.omg.org/mda/

  44. OMG: Meta Object Facility (MOF) Core Specification, OMG document formal/06-01-01 (2006)

  45. OptXware: The Viatra-I Model Transformation Framework User's Guide (2010)

  46. Orejas, F., Guerra, E., Ehrig, H., de Lara, J.: Correctness, completeness and termination of pattern-based model-to-model transformation. In: CALCO 2009, pp. 383–397 (2009)

  47. Poernomo, I.: Proofs as model transformations, ICMT (2008)

  48. Poernomo, I., Terrell, J.: Correct-by-construction Model Transformations from Spanning tree specifications in Coq. ICFEM (2010)

  49. Pons, C., Giandini, R., Perez, G., Baum, G.: A two-level calculus for composing hybrid QVT transformations. SCCC, pp. 105–114. IEEE Press, New York (2009)

  50. Rensink, A., Kuperus, J-H.: Repotting the Geraniums: on nested graph transformation rules, proceedings of GT-VMT 2009. Electronic communications of the EASST, vol. 18 (2009)

  51. Romeikat, R., Roser, S., Mullender, P., Bauer, B.: Translation of QVT Relations into QVT Operational Mappings. ICMT (2008)

  52. Rose, L., Herrmannsdoerfer, M., Mazanek, S., et al.: Graph and Model Transformation Tools for Model Migration. SoSym (2012) (to appear)

  53. Schurr, A.: Specification of graph translators with triple graph grammars, WG ’94. In: LNCS, vol. 903, pp. 151–163. Springer, Berlin (1994)

  54. Sen, S., Moha, N., Mahe, V., Barais, O., Baudry, B., Jezequel, J.-M.: Reusable model transformations. Softw. Syst. Modell. 11(1), 111–125 (2012)

  55. Syriani, E., Vangheluwe, H.: De-/re-constructing model transformation languages. Proceedings of 9th international workshop GT-VMT, Electronic Communications of EASST (2010)

  56. Taentzer, G., Ehrig, K., Guerra, E., de Lara, J., Lengyel, L., Levendovsky, T., Prange, U., Varro, D., Varro-Gyapay, S.: Model transformation by graph transformation: a comparative study. MODELS (2005)

  57. Tisi, M., Cabot, J., Jouault, F.: Improving higher-order transformations support in ATL, ICMT 2010. In: LNCS, vol. 6142, pp. 215–229. Springer, Berlin (2010)

  58. Varro, D., Asztalos, M., Bisztray, D., Boronat, A., Dang, D.-H., Geis, R., Greenyer, J., Van Gorp, P., Kniemeyer, O., Narayanan, A., Rencis, E., Weinell, E.: Transformation of UML models to CSP: a case study for graph transformation tools, AGTIVE 2007. In: LNCS, vol. 5088, pp. 540–565. Springer, Berlin (2007)

Download references

Acknowledgments

The work presented here was carried out in the EPSRC HoRTMoDA project at King’s College London.

Author information

Corresponding author

Correspondence to K. Lano.

Additional information

Communicated by Dr. Jeff Gray.

Appendices

Appendix A: Formal statement of patterns

In this section, we define the transformation patterns using the standard GoF pattern documentation format [17].

1.1 A.1 Conjunctive implicative form

Synopsis To specify the effect of a transformation in a declarative manner, as a global pre/post predicate, consisting of a conjunction of constraints with a \(\forall \, \implies \exists \) structure.

Forces Useful whenever a platform-independent specification of a transformation is suitable. The conjunctive-implicative form improves the modularity of a specification by separating the creation of a composite target entity instance and the creation of its components into successive constraints. This structure of specifications can be used to analyse the semantics of a transformation, and also to construct an implementation. It assists in the scheduling/control of rules, by specifying possible orders for rule execution, based upon the data dependencies of the constraints.

Solution The \(Cons\) predicate should be split into separate conjuncts each relating one (or a group) of source model elements to one (or a group) of target model elements:

$$\begin{aligned} \forall \, s : S_i \cdot SCond_{i,j} ~implies~ \exists \,t : T_{i,j} \cdot TCond_{i,j} ~and~ Post_{i,j} \end{aligned}$$

where the \(S_i\) are source entities, the \(T_{i,j}\) are target model entities, \(SCond_{i,j}\) is a predicate on \(s\) (identifying which elements the constraint should apply to), and \(Post_{i,j}\) defines \(t\) in terms of \(s\). Figure 4 shows the structure of this pattern.

There should not be further alternations of quantifiers in \(Post\), because such complexity hinders comprehension. Instead, object lookup by means of object indexing or traces can be used in \(Post\) to find (previously created) target elements to assign to association role features of \(t\).

For \(Cons\) specifications including type 2 or type 3 constraints, suitable \(Q\) measures are needed for these constraints to establish termination, confluence and correctness of the derived transformation implementation.

Consequences The transformation form assists in the definition of an inverse transformation (which will also have the same form), in the sequential decomposition of the transformation into phases, and in verification of the transformation.

Code examples A large example of this approach for a migration transformation is in [31]. The UML to relational mapping is also specified in this style in [32]. In this paper we have described the UML to EIS transformation in detail.
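As an illustration of how one conjunctive-implicative constraint maps onto an imperative phase, the following Java sketch iterates once over the source domain and applies the succedent to each element satisfying the antecedent. The classes \(S\) and \(T\), their fields, and the example constraint \(\forall \, s : S \cdot s.x > 0 ~implies~ \exists \,t : T \cdot t.id = s.id ~and~ t.y = s.x\) are hypothetical; this is not the code generated by the UML-RSDS tools.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical source and target entities for the example constraint
//   forall s : S . s.x > 0  implies  exists t : T . t.id = s.id and t.y = s.x
class S { String id; int x; S(String id, int x) { this.id = id; this.x = x; } }
class T { String id; int y; T(String id, int y) { this.id = id; this.y = y; } }

public class ConjunctiveImplicativePhase {
    // One phase: a single pass over the source domain S, applying the succedent
    // to every element that satisfies the antecedent condition SCond.
    static List<T> applyConstraint(List<S> sources) {
        List<T> targets = new ArrayList<>();
        for (S s : sources) {                  // forall s : S
            if (s.x > 0) {                     // SCond
                targets.add(new T(s.id, s.x)); // exists t : T . TCond and Post
            }
        }
        return targets;
    }

    public static void main(String[] args) {
        List<S> model = List.of(new S("a", 3), new S("b", -1), new S("c", 7));
        applyConstraint(model).forEach(t -> System.out.println(t.id + " -> " + t.y));
    }
}
```

A single pass of this kind is adequate when the constraint does not write data that it also reads (cf. Appendix C); otherwise a fixpoint-style iteration, as for type 2 and type 3 constraints, is generally required.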

1.2 A.2 Structure preservation

Synopsis A transformation that maintains a 1–1 relation between the source and target models. Separate constraints or rules define the relation between each pair of source and target entities.

Forces These transformations are useful when a model needs to be copied to a closely-related language. They can be used as simple sub-components within more complex transformations, such as migration transformations where only a small part of a model changes significantly in structure from source to target.

Solution Define the transformation by conjunctive-implicative constraints

$$\begin{aligned} \forall \, s : S_i \cdot \exists \,t : T_{i} \cdot Post_{i} \end{aligned}$$

which are ordered starting from the base entities of \(S\) to successively composed entities. If identities of \(S_i\) objects are copied to those of \(T_j\) objects, this must be in an injective (inequality-preserving) manner.

Code examples Transformations which copy models (used particularly in ATL) are examples of this pattern. The pattern is referred to as the mapping pattern in [21]. Copy-by-retyping is a strategy for implementing this pattern in some model migration languages [52].

1.3 A.3 Entity splitting

Synopsis A transformation that creates instances of two or more target entities from one instance of a source entity. Separate constraints or rules are used to define the creation of instances of different entities.

Forces The pattern is used to reduce the complexity of such transformations. If a transformation combines the creation of multiple target instances in a single constraint, the specification may become excessively complex and difficult to maintain.

Solution Separate the parts of the constraint that refer to separate target entities into separate constraints. If one entity type \(T2\) depends on another, \(T1\), then the constraint creating \(T1\) should precede the constraint creating \(T2\), otherwise they can occur in either order. Identity attributes may be used to relate the different target instances derived from a single source instance. These attributes are also necessary for correct reversal of the transformation.

Figure 5 shows the typical structure of this pattern.

Consequences This pattern assists in improving the modularity of a specification, separating into distinct constraints the construction of distinct target entities.

Code examples An example is given in [26] of the application of this pattern to the creation of MVC components from source data. This pattern is related to the local source to global target type of transformation identified in [9], and to the refinement pattern of [21]. The element mapping and element mapping with variability patterns of [22] are special cases of the pattern. The combination of two constraints using the fork operator of [49] is an inverse to this pattern.

1.4 A.4 Entity merging

Synopsis A target model entity is updated using data from two or more different source model entities. Separate constraints or rules are used to define the updates from separate sources.

Forces Used when a target entity amalgamates data from several different source entities.

Solution Define the separate updates as distinct constraints, creating the target instance in the first constraint, and using some means of object lookup to locate this instance for update by subsequent constraints.

This pattern can be used to improve the decomposition of the specification, simplifying individual constraints.

Figure 6 shows a typical structure of this pattern.

Consequences It should be checked that constraints \(C_i\) and \(C_j\) which update the same entity or entity instance do not interfere with each other:

$$\begin{aligned} C_i ~\implies ~ [stat(C_j)]C_i \end{aligned}$$

where \(i < j\) and \(stat(C_j)\) is the phase that implements \(C_j\). The mapping of identity attributes from source objects to target objects can be non-injective, but in such a case successive updates to the same target object from different source objects must also be non-interfering.

Code examples This pattern is related to the global source to local target type of transformation identified in [9], and to the abstraction pattern of [21].

1.5 A.5 Parallel composition

Synopsis To decompose the specification of a transformation into components which define separate aspects of the transformation task, and which can be performed in parallel.

Forces This pattern is useful whenever a transformation consists of distinguishable groups of rules or constraints which can be separated to reduce the complexity and to increase the modularity of the transformation.

Solution Separate the specification constraints into distinct groups based on the aspect of the target model which they affect. Typically the groups are defined to update disjoint sets of target domains and features. The groups are then factored into separate transformations.

This pattern improves the specification modularity by separating out related constraints into sub-transformations, which may be modified relatively independently.

Consequences The groups of constraints selected should be cohesive, and with few dependencies upon other groups. Cycles of dependencies between groups are not permitted.

Code examples In the UML to EIS example, there are the groups: \(\{ C4, C5 \}\) (foundational), \(\{ C1, C2, C6 \}\) (business tier), \(\{ C3, C8 \}\) (resource tier), \(\{ C7 \}\) (presentation tier). These have the constraint dependencies:

$$\begin{aligned} C3 < C8, C4 < C5, C5 < C6, C6 < C7 \end{aligned}$$

Due to the clientship relationship between tiers, the sub-transformation for a tier will depend upon the sub-transformation for the immediately lower tier (e.g., the presentation tier depends upon the business tier). So there are the dependency orderings

$$\begin{aligned} foundational < business, business < presentation \end{aligned}$$

of transformations.

The composition of the conjoined components is automatically carried out by the UML-RSDS tools, producing a composed transformation in which the constraints are ordered (i) in the same relative order as in their components, and (ii) in an order respecting any dependencies between components.

In this case, a possible ordering is therefore

$$\begin{aligned} C4, C5, C3, C8, C1, C2, C6, C7 \end{aligned}$$

The external composition of transformations by the fork operator of [49] is similar to parallel composition of transformations.

1.6 A.6 Recursive form

Synopsis To specify the effect of a transformation in a declarative manner, as a global pre/post predicate, using a recursive definition of the transformation relation.

Forces Useful whenever a platform-independent specification of a transformation is required, and the conjunctive-implicative form is not applicable, because the post-state of the transformation cannot be directly characterised or simply related to the pre-state.

Solution The \(Cons\) predicate is defined by an equation such as

$$\begin{aligned} tmodel = \tau (smodel{\text{@}pre}) \end{aligned}$$

or more generally as

$$\begin{aligned} (smodel,tmodel) = \tau (smodel{\text{@}pre}, tmodel{\text{@}pre}) \end{aligned}$$

where \(smodel\) represents the source model and \(tmodel\) the target model, and \(\tau \) is a recursive function defined by the \(Cons\) constraints, usually by a disjunction of cases, including a default case to terminate the recursion.

There should exist a measure \(Q: \mathbb{N}\) on the state of the source and target models, such that \(Q\) is decreased on each step of the recursion, with \(Q = 0\) in the termination case of the recursion. \(Q\) is an abstract measure of the time complexity of the transformation: the maximum number of steps needed to complete the transformation on a particular model. For quality-improvement transformations it can also be regarded as a measure of the (lack of) quality of a model.

This pattern improves the potential for analysis of a specification, by making explicit the function that may be computed by a set of rules.

Consequences The proof of \(Ens\) and other properties from \(Cons\) is more indirect for this style of specification, typically requiring induction using the recursive definitions.

Implementation The constraints can be used to define a recursive operation that satisfies the specification, or an equivalent iterative form. The constraints can also be used to define pattern-matching rules in transformation languages such as ATL or QVT-R. The source patterns for such rules are derived from the conditions \(SCond\), and the definitions of the rule effects are derived from the conditions \(\exists \,t : T_{i,j} \cdot TCond_{i,j} ~and~ Post_{i,j}\), which define incremental updates to the source and target models.

Code examples Many computer science problems can be expressed in this form, such as sorting, searching and scheduling. We use the implementation approach 3, as for type 3 constraints, to implement recursive specifications in Java.

1.7 A.7 Auxiliary metamodel

Synopsis The introduction of a metamodel for auxiliary data, which is part of neither the source nor the target language, used in a model transformation.

Forces Useful whenever auxiliary data needs to be used in a transformation: such data may simplify the transformation definition, and may permit a more convenient use of the transformation, e.g., by supporting decomposition into sub-transformations. A typical case is a query transformation which evaluates some complex expression over the source model, such as computing the maximum depth of inheritance in a class diagram, or counting the number of instances of a complex structure in the source model: explicitly representing computed data as auxiliary entities or features may simplify the transformation.

Tracing can also be carried out by using auxiliary data to record the history of transformation steps within a transformation. The concept of a correspondence graph or model also uses an auxiliary metamodel to retain a record of which target instances are derived from which source instances: this helps to prevent duplicate creation of target elements and to detect termination of the transformation process [58].

Solution Define the auxiliary metamodel as a set of (meta) attributes, associations, entities and generalisations extending the source and/or target metamodels. These elements may be used in the succedents of \(Cons\) constraints (to define how the auxiliary data is derived from source model data) or in antecedents (to define how target model data is derived from the auxiliary data).

Figure 7 shows a typical structure of this pattern. This pattern helps to simplify the complexity of model navigations and constructions in a transformation, and to decompose the transformation into subparts/phases.

Consequences It may be necessary to remove auxiliary data from a target model, if this model must conform to a specific target language at termination of the transformation. A final phase in the transformation could be defined to delete the data (cf. the construction and cleanup pattern).

Code examples Auxiliary metamodel can be used to add artificial structure to a model, such as a root element, to assist in navigation. It can also be used to precompute expression values prior to a transformation execution, to avoid duplicated evaluations.

An example of the pattern is a transformation which returns the number of cycles of three distinct nodes in a graph. This problem can be elegantly solved by extending the basic graph metamodel by defining an auxiliary entity \(ThreeCycle\) which records the 3-cycles in the graph (Fig. 11).

Fig. 11 Extended graph metamodel

The auxiliary language elements are shown with dashed lines.

The specification \(Cons\) of this transformation then defines how unique elements of \(ThreeCycle\) are derived from the graph, and returns the cardinality of this type at the end state of the transformation:

$$\begin{aligned} (C1):~ & \forall\, g : Graph \cdot \forall\, e1 : g.edges;\ e2 : g.edges;\ e3 : g.edges ~\cdot \\ & \quad e1.trg = e2.src ~and~ e2.trg = e3.src ~and~ e3.trg = e1.src ~and \\ & \quad (e1.src \cup e2.src \cup e3.src){\rightarrow}size() = 3 ~~implies \\ & \qquad \exists_1\, tc : ThreeCycle \cdot tc.elements = (e1.src \cup e2.src \cup e3.src) ~and~ tc : g.cycles \\ (C2):~ & \forall\, g : Graph \cdot \exists\, r : IntResult \cdot r.num = g.cycles{\rightarrow}size() \end{aligned}$$

There are therefore two phases: (i) constructing the auxiliary data, and (ii) using it to compute the required result.

The alternative to introducing the intermediate entity would be a more complex definition of the constraints, involving the construction of sets of sets using OCL \(collect\).
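A minimal Java sketch of the two phases is given below, assuming a simple \(Edge\) class with source and target node names and representing each \(ThreeCycle\) as a set of three node names; these representations are illustrative assumptions, not the UML-RSDS metamodel or generated code.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical representation: directed edges between named nodes; each derived
// ThreeCycle is recorded as a set of three node names.
class Edge { String src, trg; Edge(String s, String t) { src = s; trg = t; } }

public class ThreeCycleCount {
    // Phase 1 (constraint C1): derive the ThreeCycle instances. Storing them in a
    // Set plays the role of the exists_1 (unique instantiation) check.
    static Set<Set<String>> threeCycles(List<Edge> edges) {
        Set<Set<String>> cycles = new HashSet<>();
        for (Edge e1 : edges)
            for (Edge e2 : edges)
                for (Edge e3 : edges)
                    if (e1.trg.equals(e2.src) && e2.trg.equals(e3.src) && e3.trg.equals(e1.src)) {
                        Set<String> nodes = new HashSet<>(List.of(e1.src, e2.src, e3.src));
                        if (nodes.size() == 3) cycles.add(nodes); // only cycles of 3 distinct nodes
                    }
        return cycles;
    }

    public static void main(String[] args) {
        List<Edge> g = List.of(new Edge("a", "b"), new Edge("b", "c"),
                               new Edge("c", "a"), new Edge("c", "d"));
        // Phase 2 (constraint C2): the result is the cardinality of the auxiliary entity.
        System.out.println(threeCycles(g).size()); // prints 1
    }
}
```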

Related patterns This pattern extends the conjunctive-implicative and recursive form patterns, by allowing constraints to refer to data which is part of neither the source nor the target languages. Some previously-published patterns can be considered to be special cases of this pattern: the map-using-link pattern of [1], and the determining an opposite relationship and finding constant expressions patterns of [8].

1.8 A.8 Construction and cleanup

Synopsis To simplify a transformation by separating rules which construct model elements from those which delete elements.

Forces Useful when a transformation both creates and deletes elements of entities, resulting in complex specifications. In the conjunctive-implicative form, it is necessary to place all constraints which delete elements of source entity \(S_i\) after all constraints which read or write these elements.

Solution Separate the creation phase and the deletion phase into distinct constraints; usually the creation (construction) phase will precede the deletion (cleanup) phase. These can be implemented as separate transformations, each with a simpler specification and coding than the single transformation.

This pattern assists in modularising a transformation.

Consequences The pattern leads to the production of intermediate models (between construction and deletion) which may be invalid as models of either the source or target languages. It may be necessary to form an enlarged language for such models.

Code examples Examples are migration transformations where there are common entities between the source and target languages [34]. A first phase copies/adapts any necessary data from the old version (source) entities which are absent in the new version (target) language, and creates data for new entities, then a second phase removes all elements of the model which are not in the target language. The intermediate model is a model of a union language of the source and target languages.

Other examples are complex update-in-place transformations, such as the removal of duplicated attributes [24] or multiple inheritance. The ATL solution to the case study of [24] uses this pattern.

One implementation strategy for this pattern is to explicitly mark the unwanted elements for deletion in a first phase, and then to carry out the deletion of marked elements in a second phase. This approach can be applied to the transformation to remove multiple inheritance: a first phase could mark the generalisations to be removed, a second phase could introduce the replacement associations, and a third deletes the marked generalisations. This is an application of the auxiliary metamodel pattern.

1.9 A.9 Unique instantiation

Synopsis To avoid duplicate creation of objects in the target model, a check is made that an object satisfying specified properties does not already exist, before such an object is created.

Forces Required when duplicated copies of objects in the target model are forbidden, either explicitly by use of the \(\exists _1\,t : T_j \cdot Post\) quantifier, or implicitly by the fact that \(T_j\) possesses an identifier (primary key) attribute.

Solution To implement a specification \(\exists _1\,t : T_j \cdot Post\) for a concrete class \(T_j\), test if \(\exists \,t : T_j \cdot Post\) is already true. If so, take no action; otherwise, create a new instance \(t\) of \(T_j\) and establish \(Post\) for this \(t\).

In the case of a specification \(\exists \,t : T_j \cdot t.id = x ~and~ Post\) where \(id\) is a primary key attribute, check whether a \(T_j\) object with this \(id\) value already exists (\(x \in T_j.id\)); if so, use that object (\(T_j[x]\)) to establish \(Post\). Otherwise (if \(T_j\) is concrete and not of the form \(E\)@\(pre\)), create a new instance \(t\) of \(T_j\) and apply \(t.id = x ~and~ Post\) to this instance.

This pattern assists in the decomposition of a transformation, enabling a target object to be created in one phase and referred to and modified by a subsequent phase.

Consequences The pattern ensures the correct implementation of the constraint. It can be used when we wish to share one subordinate object between several referring objects: the subordinate object is created only once, and is subsequently shared by the referrers. Cf., also the entity merging pattern.

Implementation To implement \(\exists \,x : T \cdot x.id = v ~and~ P\) where \(id\) is the identity attribute of \(T\), we use the following design activity:

(design activity shown as figure a1 in the original article)

Likewise for \(\exists _1\,x : T \cdot x.id = v ~and~ P\).

To implement \(\exists _1\,x : T \cdot P\) where no equation \(x.id = v\) occurs in \(P\), we perform the activity

(design activity shown as figure a2 in the original article)
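The following Java sketch illustrates the check-before-create logic of both activities, assuming a hypothetical target class \(TEntity\) with a String-valued identity attribute and a key-based index map (cf. the object indexing pattern); it is not the design produced by the UML-RSDS tools.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Hypothetical target entity with an identity (primary key) attribute.
class TEntity { String id; TEntity(String id) { this.id = id; } }

public class UniqueInstantiation {
    // Index of existing TEntity instances by primary key (object indexing pattern).
    static final Map<String, TEntity> index = new HashMap<>();

    // Variant 1: exists t : T . t.id = v and P
    // Reuse the instance with key v if it exists, otherwise create and register one;
    // the caller then establishes the remaining predicate P on the returned object.
    static TEntity lookupOrCreate(String v) {
        TEntity t = index.get(v);
        if (t == null) { t = new TEntity(v); index.put(v, t); }
        return t;
    }

    // Variant 2: exists_1 t : T . P, where no key equation occurs in P.
    // Only create a new instance if no existing instance already satisfies P.
    static TEntity findOrCreate(Predicate<TEntity> p, Consumer<TEntity> establishP) {
        for (TEntity t : index.values()) if (p.test(t)) return t; // already true: no action
        TEntity t = new TEntity(UUID.randomUUID().toString());
        index.put(t.id, t);
        establishP.accept(t); // establish P for the new instance
        return t;
    }

    public static void main(String[] args) {
        TEntity a = lookupOrCreate("k1");
        TEntity b = lookupOrCreate("k1");
        System.out.println(a == b); // true: the same instance is reused, not duplicated
    }
}
```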

Code examples Here is sample code, created to check the uniqueness of a \(ThreeCycle\) in a graph, corresponding to the succedent of constraint \(C1\) above:

(generated checking code shown as figure a3 in the original article)

where the \(exists\_0\) query operation tests if there is already a three cycle with the given elements in \(this.cycles\).

The ‘check before enforce’ matching semantics of QVT-R rules is another example of the pattern. The unique lazy rules of ATL are similar in purpose, as are the use of \(equivalent\)/\(equivalents\) in ETL and of \(resolveIn\)/\(resolveoneIn\) in QVT-O [25, 51].

Related patterns Object indexing can be used to efficiently obtain an object with a given primary key value in the second variant of the pattern. The pattern facilitates the use of the entity merging pattern.

1.10 A.10 Object indexing

Synopsis All objects of a class are indexed by a unique key value, to permit efficient lookup of objects by their key.

Forces Required when frequent access is needed to objects or sets of objects based upon some unique identifier attribute (a primary key).

Solution Maintain an index map data structure \(cmap\) of type \(IndType \rightarrow C\), where \(C\) is the class to be indexed, and \(IndType\) the type of its primary key. Access to a \(C\) object with key value \(v\) is then obtained by applying \(cmap\) to \(v\): \(cmap.get(v)\).

This pattern also assists in the separation of a transformation into phases.

Figure 12 shows the structure of the pattern. The map \(cmap\) is a qualified association, and is an auxiliary metamodel element used to facilitate separation of the specification into loosely coupled rules.

Fig. 12 Object indexing structure

Consequences The key value of an object should not be changed after its creation: any such change will require an update of \(cmap\), including a check that the new key value is not already used in another object.

The pattern substantially improves the efficiency of object lookup, making this a constant-time operation independent of the size of the domain \(C\).

Implementation When a new \(C\) object \(c\) is created, add \(c.ind \mapsto c\) to \(cmap\). When \(c\) is deleted, remove this pair from \(cmap\). To look up \(C\) objects by their id, apply \(cmap\).
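The following Java sketch illustrates this bookkeeping for a hypothetical indexed class \(C\) with a String-valued key \(ind\); it is an illustration of the pattern only.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical class C indexed by its primary key attribute 'ind'.
class C { final String ind; C(String ind) { this.ind = ind; } }

public class ObjectIndexing {
    // cmap : IndType -> C, kept in step with creation and deletion of C objects.
    static final Map<String, C> cmap = new HashMap<>();

    static C createC(String ind) {
        C c = new C(ind);
        cmap.put(c.ind, c);  // register on creation
        return c;
    }

    static void deleteC(C c) {
        cmap.remove(c.ind);  // unregister on deletion
    }

    public static void main(String[] args) {
        createC("k1");
        System.out.println(cmap.get("k1") != null); // constant-time lookup by key: true
    }
}
```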

An alternative strategy to look up target model elements is to use an explicit transformation trace facility, as in Kermeta [23] and Viatra [45], or implicit traces as in ETL [25].

The concept of key attribute in QVT-R is an example of the use of this pattern.

Related patterns The pattern can be regarded as an implementation-level application of the auxiliary metamodel pattern. It is used by phased creation to look up the target elements corresponding to previously-processed source model elements. It is used by unique instantiation to look up elements to avoid duplicating them.

1.11 A.11 Omit negative application conditions

Synopsis Omit redundant negative application condition tests for a constraint, in order to optimise its implementation.

Forces If it can be deduced that the succedent of a constraint is inconsistent with its antecedent, the checks in the constraint implementation for the truth of the succedent can be omitted. Such checks can constitute a substantial part of the execution time of the transformation, for complex succedents.

Solution When applying a type 2 or type 3 constraint

$$\begin{aligned} \forall \, s : S_i \cdot SCond ~~implies~~ Succ \end{aligned}$$

to source model elements, a test is performed (when \(SCond\) holds) to check whether the succedent \(Succ\) is already true: if this test succeeds, then the constraint is not applied. However, if it is known that

$$\begin{aligned} Succ \implies \lnot (SCond) \end{aligned}$$

then this test is unnecessary and can be omitted.

The UML-RSDS tools carry out checks to see if this optimisation can be performed, based on the syntactic form of the antecedent and succeedent.

Consequences The executable code of the constraint is simplified, and its execution cost is reduced.

Code examples This pattern particularly applies to recursive form constraints. For example, the removal of unbalanced nodes from a binary tree:

$$\begin{aligned}&\forall \, b : BST \cdot b.left = \{\} ~and~ b.right \ne \{\} ~~implies\\&~~ \exists \,b1 : BST \cdot b1 : b.left \\&\forall \, b : BST \cdot b.left \ne \{\} ~and~ b.right = \{\} ~~implies\\&~~ \exists \,b1 : BST \cdot b1 : b.right \end{aligned}$$

Here the succedent formula \(b1 : b.f\) contradicts the antecedent conjunct \(b.f = \{\}\), for feature \(f\) being \(left\) in the first constraint, and \(right\) in the second. Therefore, in both cases, the succedent test can be omitted from the constraint implementation.

The resulting optimisation of the code can be significant. A test of this example on a model of 2,000 \(BST\) elements, arranged so that the second constraint is applicable 1,000 times, gives an execution time of 2,053 ms with the pattern applied, in contrast to 3,025 ms without it.

Related patterns The ‘replace recursion by iteration’ optimisation is an alternative optimisation of type 3 constraints.

1.12 A.12 Replace recursion by iteration

Synopsis Use a single iteration over a source domain \(S_i\), instead of a fixpoint iteration, in cases of constraints based on \(S_i\) which also create or delete \(S_i\) elements, if each application strictly reduces the set of \(S_i\) elements that satisfy the application conditions of the constraint.

Forces A type 3 constraint

$$\begin{aligned} \forall \, s : S_i \cdot SCond ~implies~ Succ \end{aligned}$$

can be implemented by a single iteration over \(S_i\) provided that each application of \(Succ\) reduces the set of elements that match the constraint conditions.

Solution For such a constraint, if

$$\begin{aligned} & \forall\, matches \cdot matches = \{ s : S_i \mid SCond \} ~\wedge~ matches \ne \{\} ~\implies \\ & \quad [stat(Succ)](\, \{ s : S_i \mid SCond \} \subset matches \,) \end{aligned}$$

can be proved, then \(Q\) is the size of  \(\{ s : S_i | SCond \}\)  and an iteration

(iteration activity shown as figure a4 in the original article)

is sufficient to ensure the constraint is established.
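A minimal Java sketch of this scheme is shown below, for a hypothetical source entity whose succedent application clears a \(pending\) flag and so strictly reduces the match set; the names are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical source entity for a type 3 constraint
//   forall s : S_i . s.pending  implies  Succ
// where applying Succ clears s.pending, so each application removes a match.
class SElem { boolean pending = true; }

public class SingleIterationPhase {
    static void applyConstraint(List<SElem> si) {
        // Iterate once over a snapshot of S_i taken before the phase starts,
        // instead of repeatedly re-searching for matches until a fixpoint is reached.
        List<SElem> snapshot = new ArrayList<>(si);
        for (SElem s : snapshot) {
            if (s.pending) {       // SCond
                s.pending = false; // stat(Succ): strictly reduces the match set
            }
        }
    }

    public static void main(String[] args) {
        List<SElem> model = new ArrayList<>(List.of(new SElem(), new SElem()));
        applyConstraint(model);
        System.out.println(model.stream().noneMatch(s -> s.pending)); // true
    }
}
```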

Consequences The execution time of the constraint implementation depends only upon the original number of elements of \(S_i\), not upon the maximum number that could exist. There is no need for a variant function \(Q\) to be defined explicitly for the constraint.

Code examples An example is the ‘repotting geraniums’ example of [50] (Fig. 13):

$$\begin{aligned}&\forall \, p : Pot; v ~\cdot \\&\quad p.broken = true ~and~ \\&\quad v = p.plants{\rightarrow }select( flowering = true ) ~and\\&\quad v \ne \{ \} ~~implies \\&\quad \quad \exists \,p1 : Pot \cdot p1.plants = v \end{aligned}$$

For every broken pot that contains some flowering plant, a new (unbroken) pot is created and the flowering plants of the broken pot are transferred to the new pot. Even though new pots are created by applications of the constraint, the number of pots satisfying the application condition (antecedent) is reduced by each application.

Fig. 13 Repotting geraniums metamodel

In one test case, with 1,000 broken pots, each containing 10 flowering plants, the unoptimised implementation takes 5,948 ms to execute, while the optimised version takes 5,638 ms.

1.13 A.13 Decompose complex navigations

Synopsis Simplify long navigation expressions in antecedent tests of a constraint, in order to optimise its implementation.

Forces A test \(a : f1.f2\) which selects an element \(a\) of the composition of two or more collection-valued association ends is potentially inefficient to compute, because the composition may involve the construction of large collections of elements.

Solution Decompose the test into two or more simpler matching tests:

$$\begin{aligned} s : f1 ~and~ a : s.f2 \end{aligned}$$

where \(s\) is a new variable identifier, not occurring elsewhere in the constraint.

Consequences If the size of \(f1\) is typically \(M\) and that of \(f2\) is \(N\), then only \(M + N\) elements are searched for an initial match for \(a\) in the optimised version, in contrast to \(M*N\) elements in the unoptimised version.

Therefore, the optimisation is beneficial particularly for type 3 or recursive form constraints for which there is a high likelihood of finding a matching \(a\) in any particular \(s.f2\) set.

Code examples In the class diagram rationalisation case study, two constraints include an antecedent test

$$\begin{aligned} a : specialization.specific.ownedAttribute \end{aligned}$$

to select some attribute of a subclass of the current class. This can be optimised by writing

$$\begin{aligned} s : specialization.specific ~and~ a : s.ownedAttribute \end{aligned}$$

In a test case with 500 subclasses of a class \(c\), each subclass having ten attributes which are clones of the attributes of the other subclasses, the non-optimised version computes a set of attributes of size 5,000 in order to select one candidate element \(a\). In contrast, the optimised version only computes the set \(specialization.specific\) of size 500, and the set of 10 attributes of the first element of this set. The execution time of the non-optimised version on this test case is 472 s, while the optimised version takes 273 s [24].

1.14 A.14 Remove duplicated expression evaluations

Synopsis Avoid duplicated evaluation of expressions by using mechanisms to retain or cache their values.

Forces An expression evaluation occurs in multiple places within a transformation rule or specification, and will have the same value at these different locations.

Solution Remove duplicated evaluations of complex expressions from transformation specifications: duplicated expressions within a single rule/constraint can be factored out by using a \(let\, v = e\) construct, which evaluates the expression \(e\) once; the value can subsequently be referenced via the identifier \(v\). Duplicates in different rules can be factored out by defining auxiliary data features and storing pre-computed expression values in these features. Alternatively, they can be factored without caching by defining query operations to compute them.

Consequences Modularity of the specification is increased, because of the higher factorisation of the specification after the pattern is applied. Efficiency may not be increased in all cases: if cached values are never used, the overhead of caching may increase execution time.

Code examples This optimisation is performed in ATL by moving duplicated expression evaluations to helper operations, whose results are cached. Depending on the structure of the input model, this may reduce execution times [3]. Languages such as Kermeta and Viatra also have mechanisms for factoring out repeated evaluations.

An example of duplicated expressions within a constraint occurs in the class diagram rationalisation case study [24], in the constraint:

$$\begin{aligned} & \forall\, c : Entity;\ a : c.specialization.specific.ownedAttribute ~\cdot \\ & \quad v = c.specialization{\rightarrow}select(\, specific.ownedAttribute{\rightarrow}exists( b | b.name = a.name ~and~ b.type = a.type ) \,) ~and \\ & \quad v{\rightarrow}size() > 1 ~~implies \\ & \qquad \exists\, e : Entity \cdot e.name = c.name + a.name ~and~ a : e.ownedAttribute ~and~ e.specialization = v ~and \\ & \qquad \exists\, g : Generalization \cdot g : c.specialization ~and~ g.specific = e ~and \\ & \qquad v.specific.ownedAttribute{\rightarrow}select( name = a.name ){\rightarrow}isDeleted() \end{aligned}$$

Here, a let variable \(v\) is used to avoid re-computation of the complex expression

$$\begin{aligned} & c.specialization{\rightarrow}select(\, specific.ownedAttribute{\rightarrow}exists( b | b.name = a.name ~and~ b.type = a.type ) \,) \end{aligned}$$

which is used in three places in the constraint.

An example of duplicated evaluations between constraint applications is the constraint

$$\begin{aligned}&\forall \, c : Entity; a : c.ownedAttribute~ \cdot \\&\quad \quad \exists \,cl : Column \cdot cl.rdbname = a.name ~and~ \\&\qquad \quad cl : Table[c.rootClass().name].column \end{aligned}$$

in a UML to relational database transformation. Here, the computation of the \(rootClass()\) of each entity can involve duplicated evaluations, and it may be more efficient to precompute these and store them in an auxiliary association \(rootClass : Entity\) of \(Entity\), computed using the generic transitive closure transformation (Sect. 8).
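As an illustration of caching such a value between constraint applications, the following Java sketch memoises a root-class computation in a map; the \(Entity\) class, its \(superclass\) link and the definition of \(rootClass()\) are assumptions made for the example, not the metamodel of [24].

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical Entity with a single 'superclass' link; rootClass() walks to the top of the hierarchy.
class Entity {
    String name;
    Entity superclass; // null for a root class
    Entity(String name) { this.name = name; }
}

public class RootClassCache {
    // Auxiliary store of precomputed root classes, so repeated constraint
    // applications do not re-traverse the inheritance hierarchy.
    static final Map<Entity, Entity> rootCache = new HashMap<>();

    static Entity rootClass(Entity e) {
        Entity cached = rootCache.get(e);
        if (cached != null) return cached;
        Entity root = (e.superclass == null) ? e : rootClass(e.superclass);
        rootCache.put(e, root);
        return root;
    }

    public static void main(String[] args) {
        Entity a = new Entity("A"), b = new Entity("B");
        b.superclass = a;
        System.out.println(rootClass(b).name); // A; a second call is answered from the cache
    }
}
```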

Related patterns This is a special case of a general modularisation and optimisation strategy for factoring out repeated sub-expressions from programs or specifications [28]. In [8], the related pattern finding constant expressions is described. Auxiliary metamodel can be used to cache precomputed expression values in auxiliary metamodel elements.

The same concept of factoring can be applied to common update functionality that is repeated in different rules. However, such factoring only improves the modularity of the specification, and does not improve efficiency. Implementation of update factoring using explicit rule invocation is described in [26].

1.15 A.15 Recursive descent

Synopsis Construct target model elements recursively from the top down, by following the corresponding hierarchy of the source model elements from composite elements down to their components, recursively.

Forces Appropriate for relatively simple transformations in which there is a close structural correspondence between source and target models, and a strict hierarchy of entities (no cycles in the entity dependency relation).

Solution Decompose the transformation into operations, based upon the \(Cons\) constraints.

A constraint

$$\begin{aligned} \forall \, s : S_i \cdot SCond ~implies~ \exists \,t : T_j \cdot TCond ~and~ Post \end{aligned}$$

where \(S_i\) and \(T_j\) are maximal in the entity hierarchies of their languages will be implemented by an operation \(mapSiToTj() : T_j\) of \(S_i\), which, if the \(S_i\) object satisfies \(SCond\), creates a new instance \(t\) of \(T_j\), sets its local attribute data according to \(TCond\), and recursively calls \(subs.mapSubSToSubT(t)\) on subordinate elements \(subs\) of the \(S_i\) object which are involved in the transformation. Rules for the subordinate entities are likewise implemented by operations \(mapSubSToSubT(t : TSup)\) with the same structure, but which also set the links in \(Post\) between \(t\) and its subordinate components.

This provides a local decomposition of a transformation (within a single phase), and an explicit scheduling of rule applications.

Consequences The transformation implementation is tightly coupled to the source and target entity hierarchies, and cannot be decomposed into separate phases.

Code examples The implementations of the UML to relational database transformation in QVT-R, ATL and KMTL are examples of this pattern [2, 42].

The constraints \(C4\) and \(C5\) can be implemented by a recursive descent design, with operations

(operation definition shown as figure a5 in the original article)

in \(Actor\) and

(operation definition shown as figure a6 in the original article)

in \(UseCase\).

The \(\exists _1\) operator is used in the definition of \(mapToUser\) because no more than one \(User\) object with a given name should exist.

The execution time (130 ms) of this implementation on a model with 1,000 use cases and 10,000 actors is similar to that of the phased creation implementation of the constraints.
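The general shape of a recursive descent implementation is sketched below in Java for a hypothetical two-level source hierarchy; the class and operation names are illustrative and do not reproduce the operations shown in the omitted figures.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical source hierarchy: a composite Si owning subordinate SubS elements,
// mapped to a target composite Tj owning SubT elements.
class SubS { String name; SubS(String n) { name = n; } }
class Si { String name; List<SubS> subs = new ArrayList<>(); Si(String n) { name = n; } }
class SubT { String name; SubT(String n) { name = n; } }
class Tj { String name; List<SubT> parts = new ArrayList<>(); Tj(String n) { name = n; } }

public class RecursiveDescent {
    // Top-level rule: create the target composite, set its attribute data,
    // then recursively invoke the subordinate rule on each component.
    static Tj mapSiToTj(Si s) {
        Tj t = new Tj(s.name);     // create target instance and establish TCond
        for (SubS sub : s.subs) {
            mapSubSToSubT(sub, t); // recursive descent into subordinate elements
        }
        return t;
    }

    // Subordinate rule: create the component and link it to its owner (Post).
    static SubT mapSubSToSubT(SubS s, Tj parent) {
        SubT t = new SubT(s.name);
        parent.parts.add(t);       // set the link between t and its composite
        return t;
    }

    public static void main(String[] args) {
        Si s = new Si("root");
        s.subs.add(new SubS("child"));
        Tj t = mapSiToTj(s);
        System.out.println(t.name + " has " + t.parts.size() + " part(s)");
    }
}
```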

1.16 A.16 Phased creation

Synopsis Construct target model elements in phases, ‘bottom-up’ from individual objects to composite structures, based upon a structural dependency ordering of the target language entities.

Forces Used whenever the target model is too complex to construct in a single step. In particular, if an entity depends upon itself via an association, or two or more entities are mutually dependent via associations. In such a case, the entity instances are created first in one phase, then the links between the instances are established in a subsequent phase.

Solution Decompose the transformation into phases, based upon the \(Cons\) constraints. These constraints should be ordered so that data read in one constraint is not written by a subsequent constraint, in particular, phase \(p1\) must precede phase \(p2\) if it creates instances of an entity \(T1\) which is read in \(p2\).

This pattern assists in the scheduling/control of rules, and in the decomposition of the transformation into logical phases. It also assists in verification, by decomposing verification on the basis of the phases.

Consequences The stepwise construction of the target model leads to a transformation implementation as a sequence of phases: earlier phases construct elements that are used in later phases. Some mechanism is required to look up target elements from earlier phases, such as by key-based search or by trace lookup.

Implementation The constraints are analysed to determine the dependency ordering between the target language data and entities. \(T1 < T2\) means that a \(T1\) instance is used in the construction of a \(T2\) instance. Usually this is because there is an association directed from \(T2\) to \(T1\), or because some feature of \(T2\) is derived from an expression using \(T1\) elements.

If the order \(<\) is a partial order then the corresponding ordering of phases follows directly from \(<\): a phase that creates \(T2\) instances must follow all phases that create \(T1\) instances, where \(T1 < T2\). However, if there are self-loops \(T3 < T3\), or longer cycles of dependencies, then the phases creating the entities do not set the links between them; instead there must be a phase, following all of these phases, which specifically sets the links. This case of the pattern is related to the separation of generate and refinement transformation phases in [9].

Code examples The \(ThreeCycle\) example illustrates the simple case. Here \(ThreeCycle < IntResult\), so the phase implementing \(C2\) must follow that for \(C1\). Likewise in the UML to EIS example, where \(SessionBean < Operation\).

A simple example of the second case of the pattern is shown in Fig. 14.

Fig. 14 Mutually dependent target entities example

Here, there are four constraints in \(Cons\):

$$\begin{aligned}&\forall \, s : S_1 \cdot s.x > 0 ~implies~ \exists \,t : T_1 \cdot t.id2 = s.id1 \\&\forall \, s : S_1 \cdot s.x \le 0 ~implies~ \exists \,t : T_2 \cdot t.id3 = s.id1 \\&\forall \, s : S_1 \cdot s.x > 0 ~implies~ T_1[s.id1].r2 = T_2[s.r.id1] \\&\forall \, s : S_1 \cdot s.x \le 0 ~implies~ T_2[s.id1].r1 = T_1[s.r.id1] \end{aligned}$$

This is a re-expression transformation, which loses some source model information (\(S_1\) objects with \(x > 0\) will only have a copy of \(r{\rightarrow }select(x \le 0)\) in the target model, rather than the entire \(r\) set). \(T_1\) and \(T_2\) are mutually dependent. The first and second constraints can be implemented by a single phase; this is then followed by a phase implementing the third and fourth constraints.
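A Java sketch of this two-phase implementation is given below, using maps as the object indexes for \(T_1\) and \(T_2\); the class and field names follow Fig. 14, but the code itself is an illustrative assumption rather than generated UML-RSDS code.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.stream.Collectors;

// Classes following Fig. 14: S1 has an integer x, a key id1 and a self-association r;
// T1 and T2 are the mutually dependent target entities.
class S1 { String id1; int x; List<S1> r = new ArrayList<>(); S1(String id, int x) { id1 = id; this.x = x; } }
class T1 { String id2; List<T2> r2 = new ArrayList<>(); T1(String id) { id2 = id; } }
class T2 { String id3; List<T1> r1 = new ArrayList<>(); T2(String id) { id3 = id; } }

public class PhasedCreation {
    public static void main(String[] args) {
        S1 a = new S1("a", 1), b = new S1("b", -1);
        a.r.add(b); b.r.add(a);
        List<S1> source = List.of(a, b);

        // Phase 1 (constraints 1 and 2): create the T1/T2 instances, indexed by key.
        Map<String, T1> t1 = new HashMap<>();
        Map<String, T2> t2 = new HashMap<>();
        for (S1 s : source) {
            if (s.x > 0) t1.put(s.id1, new T1(s.id1)); else t2.put(s.id1, new T2(s.id1));
        }

        // Phase 2 (constraints 3 and 4): set the links, looking up targets from phase 1.
        // Only counterparts that exist in the target are linked (cf. the information loss noted above).
        for (S1 s : source) {
            if (s.x > 0)
                t1.get(s.id1).r2 = s.r.stream().map(o -> t2.get(o.id1))
                                       .filter(Objects::nonNull).collect(Collectors.toList());
            else
                t2.get(s.id1).r1 = s.r.stream().map(o -> t1.get(o.id1))
                                       .filter(Objects::nonNull).collect(Collectors.toList());
        }
        System.out.println(t1.get("a").r2.size() + " " + t2.get("b").r1.size()); // 1 1
    }
}
```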

Related patterns Object indexing can be used to find the target elements constructed for particular source elements in earlier phases.

The pattern is closely related to the concept of phases in [9] and the concept of layers in AGG [56]. It is the most natural way of implementing conjunctive-implicative form constraints.

Appendix B: Generic transformations

Some transformations occur quite often as particular components within a transformation process, and can therefore be conveniently defined as reusable generic transformations. In contrast to general design patterns, the degree of variability in the application of generic transformations is restricted to instantiation of a fixed template specification. We consider that it is preferable to define such transformations in a highly generic manner and to reuse them by parameter instantiation, rather than adapting them to different metamodels by means of external adaption code [54]. A generic transformation should also include a proof of its properties, such as termination and confluence, by means of a suitable \(Q\) measure. These proofs can then be reused for the specific instantiations.

1.1 B.1 Transitive closure of an association

This generic transformation computes the transitive closure of a many–many self association \(parent\) on an entity \(E\), as a derived many–many association \(ancestor\) on \(E\) (Fig. 15).

Fig. 15 Transitive closure metamodel

There are two constraints:

$$\begin{aligned}&\forall \, s : E \cdot s.parent \subseteq s.ancestor \\&\forall \, s : E \cdot s.parent.ancestor \subseteq s.ancestor \end{aligned}$$

The first is of type 1, the second of type 2.

A \(Q\) measure for the second constraint is:

$$\begin{aligned} \Sigma _{s : E} card(s.parent^+ - s.ancestor) \end{aligned}$$

where \(parent^+\) is the non-reflexive transitive closure of \(parent\). The transformation is confluent and terminating. The transformation has been verified using the Atelier B toolkit [38].

This transformation can be reused by instantiating \(E\), \(parent\) and \(ancestor\) as required: \(parent\) can be instantiated by a one-many association (and by compositions and other combinations of associations), but \(ancestor\) should be instantiated only by single many–many associations.
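A Java sketch of a fixpoint implementation of the two constraints is given below, using plain sets for the \(parent\) and \(ancestor\) association ends; it illustrates the specification, and is not the verified UML-RSDS implementation.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical entity E with a many-many 'parent' self-association; 'ancestor' is derived.
class E {
    Set<E> parent = new HashSet<>();
    Set<E> ancestor = new HashSet<>();
}

public class TransitiveClosure {
    static void computeAncestors(List<E> es) {
        // Constraint 1 (type 1): s.parent is included in s.ancestor.
        for (E s : es) s.ancestor.addAll(s.parent);
        // Constraint 2 (type 2): iterate to a fixpoint; each pass adds s.parent.ancestor
        // to s.ancestor, and stops when no ancestor set changes.
        boolean changed = true;
        while (changed) {
            changed = false;
            for (E s : es)
                for (E p : s.parent)
                    if (s.ancestor.addAll(p.ancestor)) changed = true;
        }
    }

    public static void main(String[] args) {
        E a = new E(), b = new E(), c = new E();
        b.parent.add(a); c.parent.add(b);
        computeAncestors(List.of(a, b, c));
        System.out.println(c.ancestor.size()); // 2: both b and a are ancestors of c
    }
}
```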

Other descriptions of this transformation are given in [1, 56].

1.2 B.2 Flattening of composite structures

This transformation concerns object hierarchies defined by whole-part relations (Fig. 16).

Fig. 16 Flattening metamodel

This can be specified using the transitive closure transformation:

$$\begin{aligned}&\forall \, c : Composite \cdot c.elements = c.partstc\\&\quad {\rightarrow }select( self \in Basic ) \end{aligned}$$

where \(partstc\) is an auxiliary association that stores the transitive closure of \(parts\), considered as a many–many self association on \(E\).

It can also be directly specified:

$$\begin{aligned}&\forall \, c : Composite \cdot c.parts{\rightarrow }select( self \in Basic ) \\&\quad \subseteq c.elements \\&\forall \, c : Composite; x \cdot x : c.parts~and~ x : Composite\\&\quad ~implies~ x.elements \subseteq c.elements \end{aligned}$$

The first constraint is of type 1, the second is of type 2. This transformation is also described in [21].

1.3 B.3 Coalescing sets

This transformation finds a minimal partition which extends a given set of sets. It coalesces pairs of sets which share a common element until only pairwise disjoint sets remain.

Figure 17 shows the metamodel for this update-in-place transformation.

Fig. 17 Coalescing metamodel

The constraint is:

$$\begin{aligned}&\forall \, a1, a2 : A ~\cdot \\&\quad a1 \ne a2 ~and~ a1.br \cap a2.br \ne \{\}~ implies\\&\quad \quad a2.br \subseteq a1.br ~and~ a2{\rightarrow }isDeleted() \end{aligned}$$

This is of type 3, and of recursive form, but can be optimised by omitting the succedent test.

A \(Q\) measure is

$$\begin{aligned} card(\{ a1, a2 : A | a1 \ne a2 \wedge a1.br \cap a2.br \ne \{\} \}) \end{aligned}$$

The transformation is terminating, but confluence does not hold, because non-isomorphic termination states can be reached.
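A Java sketch of the coalescing step is given below, representing each \(A\) object simply by its \(br\) set of strings (an illustrative assumption); sets are merged until all are pairwise disjoint.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CoalesceSets {
    // Repeatedly merge any two sets that share an element, until all sets are
    // pairwise disjoint. Each merge deletes one set, so the iteration terminates.
    static List<Set<String>> coalesce(List<Set<String>> sets) {
        List<Set<String>> result = new ArrayList<>();
        for (Set<String> s : sets) result.add(new HashSet<>(s));
        boolean changed = true;
        while (changed) {
            changed = false;
            merge:
            for (int i = 0; i < result.size(); i++)
                for (int j = i + 1; j < result.size(); j++)
                    if (!Collections.disjoint(result.get(i), result.get(j))) {
                        result.get(i).addAll(result.remove(j)); // a2.br is merged into a1.br; a2 is deleted
                        changed = true;
                        break merge;
                    }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Set<String>> in = List.of(Set.of("a", "b"), Set.of("b", "c"), Set.of("d"));
        System.out.println(coalesce(in)); // e.g. [[a, b, c], [d]]
    }
}
```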

1.4 B.4 Process to state

This is described as the duality pattern in [21]. It transforms a process-based dynamic modelling notation (such as UML activity diagrams) to a state-based notation (such as UML state machines).

Figure 18 shows the metamodels.

Fig. 18 Process to state metamodels

The transformation is expressed by several constraints, dealing with different local topologies of process diagrams [21]. The two basic constraints are those mapping arrows to states, and a straight-line join of two arrows at a process to a transition:

$$\begin{aligned} & \forall\, a : Arrow ~\cdot~ \exists\, s : State ~\cdot~ s.name = a.name ~and~ ArrowStateDetails \\ & \forall\, p : Process ~\cdot~ p.outgoing{\rightarrow}size() = 1 ~and~ p.incoming{\rightarrow}size() = 1 ~implies \\ & \quad \exists\, t : Transition ~\cdot~ t.source = State[p.incoming.name] ~and~ t.target = State[p.outgoing.name] ~and~ ProcessTransitionDetails \end{aligned}$$

The terms \(ArrowStateDetails\) and \(ProcessTransitionDetails\) are predicate variables which will be instantiated with specific predicates for particular applications of the transformation.

Appendix C: Write and read frames

The write frame \(wr(P)\) of a predicate is the set of features and classes that it modifies, when interpreted as an action (an action \(stat(P)\) to establish \(P\)). This includes object creation. The read frame \(rd(P)\) is the set of classes and features read in \(P\). The read and write frames can help to distinguish different implementation strategies for conjunctive-implicative constraints. In some cases, a more precise analysis is necessary, where \(wr^*(P)\) and \(rd^*(P)\), which include the sets of objects written and read in \(P\), are used instead.

Table 7 gives the definition of some cases of these sets.

Table 7 Definition of read and write frames

If an association end \(role2\) has a named opposite end \(role1\), then \(role1\) depends on \(role2\) and vice-versa. Deleting an instance of class \(E\) may affect any superclass of \(E\) and any association end incident with \(E\) or with any superclass of \(E\). The read frame of an operation invocation \(e.op(pars)\) is the read frame of \(e\) together with that of the postcondition of \(op\), excluding the formal parameters of \(op\), and of the parameters \(pars\).

In some cases, \(wr(Post) \cap rd(Post)\) may be non-empty, but \(wr^*(Post) \cap rd^*(Post)\) is empty. For example, the constraint

$$\begin{aligned}&\forall \, e1 : Edge{\text{@}}pre; e2 : Edge{\text{@}}pre ~\cdot \\&\quad \quad e1 \ne e2 ~and~ e1.trg = e2.src~ implies\\&\quad \quad \quad \exists _1\,e : Edge \cdot e.src = e1.src ~and~ e.trg = e2.trg \end{aligned}$$

computes the composition of edges in a graph. Here \(trg\) and \(src\) are both read and written in the constraint. However, \(wr^*(Post)\) is \(\{ e \} \times \{ src, trg \}\), where \(e \in \overline{Edge} - \overline{Edge{\text{@}}pre}\), so \(wr^*\) is disjoint for distinct applications of the constraint, and is also disjoint from \(rd^*\) of the constraint, which has object set \(\{ e1, e2 \}\) with \(e1 \in \overline{Edge{\text{@}}pre}\) and \(e2 \in \overline{Edge{\text{@}}pre}\). Therefore approach 1 can be used to implement the constraint, instead of approach 3.


Cite this article

Lano, K., Kolahdouz-Rahimi, S., Poernomo, I. et al. Correct-by-construction synthesis of model transformations using transformation patterns. Softw Syst Model 13, 873–907 (2014). https://doi.org/10.1007/s10270-012-0291-7
