
ExpRunA: a domain-specific approach for technology-oriented experiments

  • Regular Paper
  • Published in: Software and Systems Modeling

Abstract

Conducting technology-oriented experiments (i.e., experiments in which treatments are applied to objects by a computer-based tool) without proper tool support is often a time-consuming and highly error-prone task. Although many techniques have been proposed to help conduct controlled experiments, none of them simultaneously addresses (1) the executable specification of experiments at a high level of abstraction; (2) automated treatment execution and automated data analysis from the experiment specification; and (3) formal guarantees of the correctness of results with respect to an experiment specification for technology-oriented experiments. To address these issues, we provide a Domain-Specific Modeling approach to create a Web-based tool (ExpRunA) comprising a Domain-Specific Language named ToExpDSL, execution and analysis script generators, a supporting framework, and a running infrastructure. An experimenter uses ToExpDSL to specify an experiment in terms of experimentation concepts. From this specification, applications corresponding to the underlying treatments are executed, execution results are collected and analyzed, and, finally, the analysis results are presented to the experimenter. We establish the consistency of these results with respect to the experiment specification by formalizing and proving key correctness properties of ExpRunA. We empirically evaluated ExpRunA with respect to automation by replicating three previously published experiments, and we evaluated its level of abstraction through a qualitative assessment. Our evaluation shows that ToExpDSL is expressive enough to specify three technology-oriented experiments and that ExpRunA enables sound, automated execution and analysis of technology-oriented experiments from high-level specifications.
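The workflow the abstract describes (specify treatments and objects at a high level, execute every treatment on every object automatically, then analyze the collected measurements) can be illustrated with a minimal sketch. This is a purely hypothetical Python analogue, not ExpRunA's actual API or ToExpDSL syntax; all names here are illustrative assumptions.

```python
# Hypothetical sketch of the specify -> execute -> analyze workflow described
# in the abstract (not ExpRunA's real API; names are illustrative only).
from dataclasses import dataclass, field
from statistics import mean
from typing import Callable, Dict, List

@dataclass
class Experiment:
    """A high-level experiment specification: treatments applied to objects."""
    treatments: Dict[str, Callable[[int], float]]  # name -> tool under study
    objects: List[int]                             # experimental objects
    trials: int = 3                                # repetitions per pair
    results: Dict[str, List[float]] = field(default_factory=dict)

    def run(self) -> None:
        # Automated treatment execution: apply every treatment to every
        # object, repeating each run `trials` times.
        for name, treatment in self.treatments.items():
            self.results[name] = [
                treatment(obj) for obj in self.objects for _ in range(self.trials)
            ]

    def analyze(self) -> Dict[str, float]:
        # Automated analysis: summarize each treatment's measurements.
        return {name: mean(vals) for name, vals in self.results.items()}

# Two toy "treatments" standing in for computer-based tools under comparison.
exp = Experiment(
    treatments={"naive": lambda n: float(n * n), "fast": lambda n: float(n)},
    objects=[1, 2, 3],
)
exp.run()
print(exp.analyze())  # per-treatment means over all trials
```

In ExpRunA the specification is written declaratively in ToExpDSL and the execution/analysis code is generated from it, rather than hand-written as above; the sketch only conveys the division of labor between specification, execution, and analysis.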






Acknowledgements

We would like to thank the following people for fruitful discussions and suggestions on how to improve this work: Andre Lanna, Thiago Castro, Thiago Ramos, Alba Melo, Eduardo Nakano, Rodrigo Ribeiro, Guilherme Travassos, Rodrigo Bonifacio, Stefan Ganser, Pierre-Yves Schobbens, Gilles Perrouin, and the anonymous reviewers. Vander Alves was partially supported by CNPq (grant 310757/2018-5), CAPES (Edital 29/2017-CAPES/WBI), and the Alexander von Humboldt Foundation. Eneias Silva was partially supported by FAPDF (Edital 1/2017). Apel’s work has been funded by the German Research Foundation (AP 206/11).

Author information

Corresponding author

Correspondence to Eneias Silva.

Additional information

Communicated by Dr. Gabor Karsai.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Silva, E., Leite, A., Alves, V. et al. ExpRunA: a domain-specific approach for technology-oriented experiments. Softw Syst Model 19, 493–526 (2020). https://doi.org/10.1007/s10270-019-00749-6
