Experiences in developing and applying a software engineering technology testbed

  • Industry Experience Report
  • Published in: Empirical Software Engineering, 2009

Abstract

A major problem in empirical software engineering is determining or ensuring comparability across multiple sources of empirical data. This paper summarizes experiences in developing and applying a software engineering technology testbed (SETT) designed to ensure the comparability of empirical data used to evaluate alternative software engineering technologies, and to accelerate technology maturation and transition into project use. The requirements for such a testbed include not only specifications and code, but also the package of instrumentation, scenario drivers, seeded defects, experimentation guidelines, and comparative effort and defect data needed to support technology evaluation experiments. The requirements and architecture of a particular SETT, built to help NASA evaluate its investments in software dependability research and technology, have been developed and applied to evaluate a wide range of technologies drawn from the fields of software architecture, testing, state-model checking, and operational envelopes. This paper presents the requirements and architecture of this testbed for the first time, and analyzes the results of the technology evaluations from the perspective of how the researchers benefited from using the SETT, whereas the researchers' original publications reported only how their technologies performed. The testbed evaluations showed (1) that certain technologies were complementary and cost-effective to apply in combination; (2) that the testbed was cost-effective for researchers to use within a well-specified domain of applicability; (3) that collaboration between researchers and practitioners in using the testbed produced comparable empirical data and led to actions that accelerated technology maturity and transition into project use, as shown in the AcmeStudio evaluation; and (4) that the testbed's requirements and architecture were suitable for evaluating technologies and accelerating their maturation and transition into project use.
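
To make the abstract's notion of a testbed "package" concrete, the sketch below is a hypothetical illustration only, not taken from the paper or from the SETT's actual artifacts; all class, field, and function names are assumptions. It models one possible representation of the package elements the abstract lists (specifications, code, instrumentation and scenario drivers, seeded defects, experimentation guidelines, and comparative effort and defect data), plus the kind of comparative defect measure that seeded defects are meant to support.

from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch of a technology-evaluation package, loosely modeled on
# the testbed elements named in the abstract. Names are illustrative only.

@dataclass
class SeededDefect:
    defect_id: str
    artifact: str          # e.g. "requirements", "design", "code"
    description: str

@dataclass
class TestbedPackage:
    specifications: List[str]                 # paths to specification documents
    source_tree: str                          # root of the testbed code base
    scenario_drivers: List[str]               # scripts that exercise the system
    seeded_defects: List[SeededDefect]        # known defects for detection studies
    experimentation_guidelines: str           # how to run a comparable experiment
    baseline_effort_hours: Dict[str, float]   # comparative effort data by activity
    baseline_defects_found: Dict[str, int] = field(default_factory=dict)

def detection_rate(package: TestbedPackage, found_ids: List[str]) -> float:
    """Fraction of seeded defects a candidate technology detected."""
    seeded = {d.defect_id for d in package.seeded_defects}
    return len(seeded & set(found_ids)) / len(seeded) if seeded else 0.0

Under these assumptions, two technologies evaluated against the same TestbedPackage yield directly comparable detection rates and effort data, which is the comparability property the abstract emphasizes.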


Acknowledgements

This work was supported by NASA-HDCP contracts to CMU, JPL, and USC. It also benefited from support from the JPL-MDS team and from the NSF HDC programs, including Dr. Roshanak Roshandel, Dr. Steve Fickas and his graduate students, Dr. David Garlan and the AcmeStudio team, Dr. Gupta, Dr. Helmy, Ganesha Bhaskara, and Dr. Carolyn Talcott. In addition, I'd like to acknowledge the USC graduate students who helped in developing SCRover.

Author information

Corresponding author

Correspondence to Alexander Lam.

Additional information

Editorial Responsibility: A. Mockus

Cite this article

Lam, A., Boehm, B. Experiences in developing and applying a software engineering technology testbed. Empir Software Eng 14, 579–601 (2009). https://doi.org/10.1007/s10664-008-9096-2
