
An evolutionary testbed for software technology evaluation


Abstract

Empirical evidence and technology evaluation are needed to close the gap between the state of the art and the state of the practice in software engineering. However, evaluating technologies on the basis of empirical evidence presents several difficulties: insufficient specification of context variables, the cost of experimentation, and the risks associated with trying out new technologies. In this paper, we propose the idea of an evolutionary testbed for addressing these problems. We demonstrate the utility of the testbed through empirical studies in which two different research technologies were applied to it, and we report the results of these studies. This work is part of NASA’s High Dependability Computing Project (HDCP), in which we are evaluating a wide range of new technologies for improving the dependability of NASA mission-critical systems.



Author information


Corresponding author

Correspondence to Mikael Lindvall.


About this article

Cite this article

Lindvall, M., Rus, I., Shull, F. et al. An evolutionary testbed for software technology evaluation. Innovations Syst Softw Eng 1, 3–11 (2005). https://doi.org/10.1007/s11334-005-0007-z
