
TORC: test plan optimization by requirements clustering

Software Quality Journal

Abstract

Acceptance testing is a time-consuming task for complex software systems that have to fulfill a large number of requirements. To reduce this effort, we have developed a largely automated method for deriving test plans from requirements that are expressed in natural language. It consists of three stages: annotation, clustering, and test plan specification. The general idea is to exploit redundancies and implicit relationships in requirements specifications. Multi-viewpoint techniques based on RM-ODP (Reference Model for Open Distributed Processing) are employed for specifying the requirements. We then use linguistic analysis techniques, requirements clustering algorithms, and pattern-based requirements collection to reduce the total effort of testing against the requirements specification. In particular, we use linguistic analysis to extract and annotate the actor, process, and object of a requirements statement. During clustering, a similarity function is computed as a measure of the overlap between requirements. In the test plan specification stage, our approach provides capabilities for semi-automatically deriving test plans and acceptance criteria from the clustered informal textual requirements. Two patterns are applied to compute a suitable order of test activities. The generated test plans consist of a sequence of test steps and assertions that are executed or checked in the given order. We also present the supporting prototype tool TORC, which is available as open source. For the evaluation of the approach, we conducted a case study in the field of acceptance testing of a national electronic identification system. In summary, we report lessons learned on how linguistic analysis and clustering techniques can help testers understand the relations between requirements and improve test planning.
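
To make the clustering stage concrete, the following is a minimal sketch in Python, not the authors' implementation: each requirement is reduced to its annotated (actor, process, object) triple, a pairwise similarity is computed from the overlap of these annotations, and sufficiently similar requirements are grouped into clusters. The Jaccard-style overlap measure, the 0.5 threshold, and the single-linkage grouping are illustrative assumptions; the paper's actual similarity function and clustering algorithm may differ.

```python
# Illustrative sketch of requirements clustering over (actor, process, object)
# annotations. Not the TORC implementation; similarity measure and threshold
# are assumptions for demonstration only.

from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Requirement:
    rid: str
    actor: str    # extracted by linguistic analysis in the paper's approach
    process: str
    obj: str

def similarity(a: Requirement, b: Requirement) -> float:
    """Jaccard-style overlap of the annotation slots (assumed measure)."""
    slots_a = {a.actor, a.process, a.obj}
    slots_b = {b.actor, b.process, b.obj}
    return len(slots_a & slots_b) / len(slots_a | slots_b)

def cluster(reqs, threshold=0.5):
    """Group requirements transitively: any pair whose similarity reaches
    the threshold ends up in the same cluster (single-linkage style)."""
    parent = {r.rid: r.rid for r in reqs}

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in combinations(reqs, 2):
        if similarity(a, b) >= threshold:
            parent[find(a.rid)] = find(b.rid)

    clusters = {}
    for r in reqs:
        clusters.setdefault(find(r.rid), []).append(r.rid)
    return list(clusters.values())

if __name__ == "__main__":
    reqs = [
        Requirement("R1", "terminal", "read", "passport chip"),
        Requirement("R2", "terminal", "verify", "passport chip"),
        Requirement("R3", "inspector", "approve", "application"),
    ]
    # R1 and R2 share actor and object -> one cluster; R3 stands alone.
    print(cluster(reqs))
```

In the paper's pipeline, the test plan specification stage would then operate on such clusters, applying the two ordering patterns to arrange test steps and assertions into an executable sequence.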


Notes

  1. IBM (www.ibm.com) (formerly Telelogic).

  2. QA Systems (www.qa-systems.de).

  3. Serena Software (www.serena.com).

  4. Sparx Systems (www.sparxsystems.com.au).

  5. http://sourceforge.net/projects/torc-plugin.


Acknowledgments

We thank Dr. Michael Jahnich for contributing to the earlier publications and Peter Winkelhane for contributing to the supporting tool TORC.

Corresponding author

Correspondence to Baris Güldali.


Cite this article

Güldali, B., Funke, H., Sauer, S. et al. TORC: test plan optimization by requirements clustering. Software Qual J 19, 771–799 (2011). https://doi.org/10.1007/s11219-011-9149-4