
A controlled experiment to assess the impact of system architectures on new system requirements

  • RE'09 Special Issue, Requirements Engineering

Abstract

While much research attention has been paid to transitioning from requirements to software architectures, relatively little has been paid to how new requirements are affected by an existing system architecture. Specifically, no scientific studies have been conducted on the “characteristic” differences between newly elicited requirements gathered in the presence or absence of an existing software architecture. This paper describes an exploratory controlled study investigating such requirements characteristics. We identify a multitude of characteristics (e.g., end-user focus, technological focus, and importance) that were affected by the presence or absence of an SA, together with the extent of this effect. Furthermore, we identify the specific aspects of the architecture that had an impact on the characteristics. The study results have implications for RE process engineering, post-requirements analysis, requirements engineering tools, traceability management, and future empirical work in RE, based on several hypotheses that emerged from this study.

Notes

  1. For the rest of the paper, the acronym SA refers to System (or Software) Architecture as a software artefact.

  2. Related workshops, such as STRAW 2001 and 2003 [39] focused mainly on transitioning from RE to SA and not on the role of SA in RE.

  3. The 2-way nested ANOVA testing was done using SPSS 16.0 from SPSS Inc. (http://www.spss.com); a hedged sketch of a comparable analysis appears after these notes.

  4. The ANOVA test can only be conducted on the Likert-based requirements characteristics and not the ordinal-based characteristics (abstraction and type of requirement). These ordinal-based characteristics are analyzed separately in Sect. 4.2.2.5.

  5. Because of the scale types used for these two attributes, the Cohen Effect Size tests cannot be applied.

  6. In total, there were 148 requirements that were enabled by the architecture, 126 that were constrained, and 51 that were influenced. However, since we found no statistically significant differences between enabled, constrained, and influenced requirements, they are simply grouped as affected requirements.

  7. For readability of the template, not all characteristic columns are shown; see Table 1 for the full list.
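
The following is a minimal, non-authoritative sketch of the kind of analysis footnotes 3–5 describe: a two-way nested ANOVA (teams nested within the SA-present/SA-absent condition) on a Likert-based characteristic, followed by a Cohen effect size. It uses Python with pandas and statsmodels rather than SPSS, and the file and column names ("requirement_ratings.csv", "condition", "team", "rating") are assumptions, not the study's artifacts.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical input: one row per rating, with the experimental condition,
# the team (nested within condition), and the Likert rating being analyzed.
df = pd.read_csv("requirement_ratings.csv")

# Two-way nested ANOVA: C(team):C(condition) models teams nested in condition.
model = ols("rating ~ C(condition) + C(team):C(condition)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Cohen's d between the two conditions (Likert-based characteristics only;
# footnote 5 notes effect-size tests do not apply to the ordinal attributes).
g1 = df.loc[df["condition"] == "SA", "rating"]
g2 = df.loc[df["condition"] == "NoSA", "rating"]
pooled_sd = (((len(g1) - 1) * g1.var() + (len(g2) - 1) * g2.var())
             / (len(g1) + len(g2) - 2)) ** 0.5
print("Cohen's d:", (g1.mean() - g2.mean()) / pooled_sd)
```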

References

  1. Arkley P, Riddle S (2005) Overcoming the traceability benefit problem. In: 13th IEEE international conference on requirements engineering (RE’05), Paris, France, pp 385–389

  2. Basili VR, Weiss DM (1984) A methodology for collecting valid software engineering data. IEEE Trans Softw Eng SE-10(6):728–738

  3. Bass L, Clements P, Kazman R (2003) Software architecture in practice, 2nd edn. Addison-Wesley, Reading

  4. Bayer J, Muthig D, Widen T (2000) Customizable domain analysis. In: Proceedings of the first international symposium on generative and component-based software engineering (GCSE ‘99). Lecture notes in computer science, Springer, pp 178–194

  5. Berander P (2004) Using students as subjects in requirements prioritization. In: Proceedings of the 7th international conference on empirical assessment & evaluation in software engineering, pp 95–102

  6. Breitman K, Sampaio do Prado Leite JC (2000) Scenario evolution: a closer view on relationships. In: Fourth international conference on requirements engineering (RE’00), Illinois, US, pp 95–107

  7. Burgess C, Dattani I, Hughes G, May J, Rees K (2001) Using influence diagrams to aid the management of software change. Requir Eng 6(3):173–182

  8. Carver J, Shull F, Basili V (2003) Observational studies to accelerate process experience in classroom studies: an evaluation. In: Proceedings of the 2003 international symposium on empirical software engineering (ISESE ‘03), Rome, Italy, pp 72–79

  9. Castillo E, Gutiérrez M, Hadi A (1997) Learning Bayesian networks. In: Expert systems and probabilistic network models. Monographs in computer science. Springer, New York, pp 481–528

  10. Cook JE, Votta LG, Wolf AL (1998) Cost-effective analysis of in-place software processes. IEEE Trans Softw Eng 24(8):650–663

  11. El Emam K, Madhavji N (1995) Measuring the success of requirements engineering processes. In: Proceedings of the 2nd IEEE international symposium on requirements engineering, pp 204–211

  12. Etien A, Salinesi C (2005) Managing requirements in a co-evolution context. In: 13th IEEE international requirements engineering conference (RE’05), Paris, France, pp 125–134

  13. Ferrari R, Madhavji NH (2008) Software architecting without requirements knowledge and experience: what are the repercussions? J Syst Softw 81(9):1470–1490

  14. Ferreira S, Collofello J, Shunk D, Mackulak G (2009) Understanding the effects of requirements volatility in software engineering by using analytical modeling and software process simulation. J Syst Softw 82(10):1568–1577

  15. Fusaro P, El Emam K, Smith B (1997) Evaluating the interrater agreement of process capability ratings. In: Proceedings of the 4th international software metrics symposium, pp 2–11

  16. Heindl M, Biffl S (2005) A case study on value-based requirements tracing. In: Proceedings of the 10th European software engineering conference held jointly with 13th ACM SIGSOFT international symposium on foundations of software engineering, Lisbon, Portugal, pp 60–69

  17. IEEE SWEBOK (2004) Guide to the software engineering body of knowledge: 2004 Version. IEEE and IEEE Computer Society project. <http://www.swebok.org/>

  18. Höst M, Regnell B, Wohlin C (2000) Using students as subjects—a comparative study of students and professionals in lead-time impact assessment. Empirical Softw Eng 5(3):201–214

  19. Jackson M (1994) The role of architecture in requirements engineering. In: Proceedings of the 1st international conference on requirements engineering (RE ‘94), p 241

  20. John I, Muthig D, Sody P, Tolzmann E (2002) Efficient and systematic software evolution through domain analysis. In: 10th IEEE joint international requirements engineering conference (RE ‘02), Essen, Germany, pp 237–245

  21. Johnson PM, Moore CA, Dane JA, Brewer RS (2000) Empirically guided software effort guesstimation. IEEE Softw 17(6)

  22. Johnson RB, Christensen L (2003) Educational research: quantitative, qualitative and mixed approaches. www.southalabama.edu/coe/bset/johnson/dr_johnson/2lectures.htm. Last accessed June 2009

  23. Kamiya T, Kusumoto S, Inoue K (2002) CCFinder: a multilinguistic token-based code clone detection system for large scale source code. IEEE Trans Softw Eng 28(7):654–670

  24. Kotonya G, Sommerville I (1998) Requirements engineering: processes and techniques. Wiley, Chichester, England

  25. Van Lamsweerde A (2003) From system goals to software architecture. In: Bernardo M, Inverardi P (eds) Formal methods for software architectures. LNCS 2804, Springer, Berlin, pp 25–43

  26. Maiden N, Manning S, Robertson S, Greenwood J (2004) Integrating creativity workshops into structured requirements processes. In: Proceedings of the 5th conference on designing interactive systems: processes, practices, methods, and techniques, Cambridge, MA, US, pp 113–122

  27. Mead N (1994) The role of software architecture in requirements engineering. In: Proceedings of the 1st international conference on requirements engineering, p 242

  28. Miller J, Ferrari R, Madhavji NH (2008) Architectural effects on requirements decisions: an exploratory study. In: 7th working IEEE/IFIP conference on software architecture (WICSA 2008), Vancouver, Canada, pp 231–240

  29. Miller J, Ferrari R, Madhavji NH (2009) Characteristics of new requirements in the presence or absence of an existing system architecture. In: 17th international conference on requirements engineering (RE ‘09), Atlanta, United States, pp 5–14

  30. Nuseibeh B (2001) Weaving together requirements and architectures. IEEE Comput 34(3):115–117

  31. Nuseibeh B, Easterbrook S (2000) Requirements engineering: a roadmap. In: Proceedings of the 22nd international conference on software engineering (ICSE), pp 27–46

  32. Pett MA (1997) Nonparametric statistics for health care research: statistics for small samples and unusual distributions, 2nd edn. SAGE, Beverly Hills

  33. Porter AA, Selby RW (1990) Empirically guided software development using metric-based classification trees. IEEE Softw 7(2):46–54

  34. Rajlich VT, Bennett KH (2000) A staged model for the software life cycle. IEEE Comput 33(7):66–71

  35. Rao PV (1997) Statistical research methods in the life sciences. Brooks/Cole, Belmont

  36. Rolland C, Salinesi C, Etien A (2004) Eliciting gaps in requirements change. Requir Eng 9(1):1–15

  37. Shaw M (2003) Writing good software engineering research papers: minitutorial. In: Proceedings of the 25th international conference on software engineering (ICSE 2003), Portland, USA, Tutorial Session, pp 726–736

  38. Shekaran C (1994) Panel overview: the role of software architecture in requirements engineering. In: Proceedings of 1st international conference on requirements engineering, p 239

  39. Software requirements to architectures workshops (STRAW ‘01 and STRAW ‘03). June 2001, Toronto, Canada; May 2003, Portland, USA

  40. Tichy WF, Lukowicz P, Prechelt L, Heinz EA (1995) Experimental evaluation in computer science: a quantitative study. J Syst Softw 28(1):9–18

  41. Villela K, Doerr J, Gross A (2008) Proactively managing the evolution of embedded system requirements. In: 16th IEEE international requirements engineering conference (RE ‘08), Barcelona, Spain, pp 13–22

  42. Wieringa RJ, Heerkens J (2006) The methodological soundness of requirements engineering papers: a conceptual framework and two case studies. Requir Eng 11:295–307

  43. Yu E (1997) Towards modelling and reasoning support for early-phase requirements engineering. In: Proceedings of the 3rd IEEE international symposium on requirements engineering (RE’97) January 6–8, 1997, Washington, DC, USA, pp 226–235

Acknowledgments

This work was, in part, supported by Natural Science and Engineering Research Council (NSERC) of Canada. We are also grateful to the study participants, and to the researchers who conducted the ratings of the requirements. Lastly, we are thankful to the anonymous reviewers for their helpful suggestions.

Author information

Correspondence to Remo Ferrari.

Additional information

A preliminary version of this paper was published in [29].

Appendix: requirements ratings data collection instrument

The requirements ratings data collection instrument was administered to each requirements rater to collect the requirements rating data (see Sect. 3.6.1). The instrument was operationalized as an MS Excel spreadsheet whose layout follows the structure of Table 8 below.

Table 8 Requirements rating data entry template

Essentially, each requirement occupies two rows of this table. The first row presents the information pertaining to the requirement and is where the raters enter their ratings for the different requirements characteristics. Specifically, for each requirement, four pieces of information are given to the reviewer: a requirement ID, a title, a description, and a rationale. The requirement ID is a numerical value that uniquely identifies the requirement. The title indicates which part of the system the requirement refers to (Tele-Banking, Wireless Banking, Web Banking, or Interac). The description is the requirement itself, and the rationale provides additional reasoning as to why the requirement is necessary. These four pieces of information occupy the first four columns of the table. The next twelve columns (see footnote 7) are where the rater enters a rating for the requirements characteristic given in the column header. The raters filled out this part of the instrument with reference to the list of requirements characteristics, their definitions, and the scales to use for each characteristic (see Table 1). In the second row for a given requirement, the rater can optionally leave comments regarding their ratings for particular characteristic entries.
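
As an illustration of this layout, the following is a minimal, non-authoritative sketch in Python/pandas that generates such a two-rows-per-requirement spreadsheet. The characteristic names shown are placeholders for the twelve characteristics of Table 1, and the requirement content and file name are assumptions.

```python
import pandas as pd

# Illustrative subset of the twelve characteristic columns; see Table 1
# for the actual characteristics and their rating scales.
INFO_COLS = ["Req. ID", "Title", "Description", "Rationale"]
CHARACTERISTICS = ["End-user focus", "Technological focus", "Importance"]

# Hypothetical requirement; in the study these came from the elicitation teams.
requirements = [
    (1, "Tele-Banking", "The system shall ...", "Needed because ..."),
]

rows = []
for req_id, title, desc, rationale in requirements:
    # First row: requirement info plus empty cells for the rater's scores.
    row = dict(zip(INFO_COLS, (req_id, title, desc, rationale)))
    row.update({c: None for c in CHARACTERISTICS})
    rows.append(row)
    # Second row: optional free-text comments on each rating.
    rows.append({c: "" for c in INFO_COLS + CHARACTERISTICS})

template = pd.DataFrame(rows, columns=INFO_COLS + CHARACTERISTICS)
template.to_excel("rating_template.xlsx", index=False)  # needs openpyxl
```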

Note that, to avoid possible researcher bias during the ratings process, the table contains no information that could associate a given requirement with the specific team that elicited it, or reveal whether that team had access to the existing SA during its RE project.

The results of each individual rater’s assessment are merged into another MS Excel sheet, organized according to the structure of Table 2, on which the inter-rater agreement procedure from Sect. 3.6.1 is conducted.
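
The following is a hedged sketch of that merge-and-agreement step for two hypothetical raters, using Cohen’s kappa as a stand-in for the agreement statistic prescribed in Sect. 3.6.1; the file and column names are assumptions, not the study’s artifacts.

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Load each rater's completed template; drop the optional comment rows,
# which carry no requirement ID.
r1 = pd.read_excel("rater1.xlsx").dropna(subset=["Req. ID"])
r2 = pd.read_excel("rater2.xlsx").dropna(subset=["Req. ID"])
merged = r1.merge(r2, on="Req. ID", suffixes=("_r1", "_r2"))

# Agreement per requirements characteristic.
for ch in ["End-user focus", "Technological focus", "Importance"]:
    kappa = cohen_kappa_score(merged[f"{ch}_r1"], merged[f"{ch}_r2"])
    print(f"{ch}: kappa = {kappa:.2f}")
```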

Cite this article

Ferrari, R., Miller, J.A. & Madhavji, N.H. A controlled experiment to assess the impact of system architectures on new system requirements. Requirements Eng 15, 215–233 (2010). https://doi.org/10.1007/s00766-010-0099-3
