
Exploiting Results of Model-Based Analysis Tools

A chapter in the book Composing Model-Based Analysis Tools (Springer, Cham, 2021).

Abstract

Any analysis produces results that its users exploit to understand and improve the system being analysed. But in what ways can analysis results be exploited? And how is the exploitation of analysis results related to analysis composition? In this chapter, we provide a conceptual model of analysis-result exploitation and a model of the variability and commonalities between different analysis approaches, leading to a feature-based description of results exploitation. We demonstrate different instantiations of our feature model in nine case studies of specific analysis techniques. Through this discussion, we also showcase different forms of analysis composition, leading to different forms of exploitation of analysis results, such as refining the analysis, improving analysis mechanisms, and exploring results. We thus present the fundamental terminology for researchers to discuss the exploitation of analysis results, including under composition, and highlight some of the challenges and opportunities for future research.

This core chapter addresses Challenge 4 introduced in Chap. 3 of this book (exploiting analysis results).
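The chapter's feature-based description of results exploitation is not reproduced on this page. As a purely illustrative sketch, the snippet below shows one way such a feature model could be encoded and checked programmatically. The Feature and FeatureModel helpers and all feature names (ResultExploitation, ResultFormat, RefinedAnalysis, MechanismImprovement, ResultExploration) are assumptions chosen for illustration from the abstract's vocabulary, not the chapter's actual model.

```python
# A minimal, hypothetical sketch of a feature model for analysis-result
# exploitation. All class and feature names here are illustrative
# assumptions; they are not taken from the chapter itself.
from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class Feature:
    name: str
    mandatory: bool = True  # must be selected whenever its parent is selected
    children: List["Feature"] = field(default_factory=list)


@dataclass
class FeatureModel:
    root: Feature

    def is_valid(self, selection: Set[str]) -> bool:
        """A selection is valid if the root is selected and every mandatory
        child of a selected feature is selected as well."""
        def ok(f: Feature) -> bool:
            for c in f.children:
                if c.mandatory and c.name not in selection:
                    return False
                if c.name in selection and not ok(c):
                    return False
            return True
        return self.root.name in selection and ok(self.root)


# Hypothetical instantiation: an analysis exposes results in some format,
# which may optionally be exploited for refined analysis, for improving the
# analysis mechanism, or for interactive exploration.
model = FeatureModel(
    root=Feature("ResultExploitation", children=[
        Feature("ResultFormat"),                      # mandatory
        Feature("RefinedAnalysis", mandatory=False),
        Feature("MechanismImprovement", mandatory=False),
        Feature("ResultExploration", mandatory=False),
    ])
)

print(model.is_valid({"ResultExploitation", "ResultFormat", "ResultExploration"}))  # True
print(model.is_valid({"ResultExploitation", "ResultExploration"}))                  # False: mandatory ResultFormat missing
```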



Author information

Corresponding author: Francisco Durán.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Durán, F., Gogolla, M., Guerra, E., Lara, J.d., Sahraoui, H., Zschaler, S. (2021). Exploiting Results of Model-Based Analysis Tools. In: Heinrich, R., Durán, F., Talcott, C., Zschaler, S. (eds) Composing Model-Based Analysis Tools. Springer, Cham. https://doi.org/10.1007/978-3-030-81915-6_7


  • DOI: https://doi.org/10.1007/978-3-030-81915-6_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-81914-9

  • Online ISBN: 978-3-030-81915-6

  • eBook Packages: Computer Science (R0)
