Abstract
Scenario-based methods for evaluating software architecture require a large number of stakeholders to be collocated for evaluation meetings. Collocating stakeholders is often expensive. To reduce this expense, we have proposed a framework for supporting the software architecture evaluation process using groupware systems. This paper presents a controlled experiment that we conducted to assess the effectiveness of one of the key activities, developing scenario profiles, in the proposed groupware-supported process of evaluating software architecture. We used a cross-over design involving 32 teams of three 3rd- and 4th-year undergraduate students. We found that the quality of scenario profiles developed by distributed teams using a groupware tool was significantly better than that of scenario profiles developed by face-to-face teams (p < 0.001). However, questionnaires indicated that most participants preferred the face-to-face arrangement (82%), and 60% thought the distributed meetings were less efficient. We conclude that distributed meetings for developing scenario profiles are extremely effective, but that tool support must be of a high standard or participants will not find distributed meetings acceptable.
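As a minimal illustration of the kind of within-team comparison a cross-over design supports, the sketch below computes a paired t-statistic over hypothetical profile-quality scores (the scores, team count, and variable names are made up for illustration and are not the study's data; a full cross-over analysis would also model period and sequence effects, as Senn 2002 discusses):

```python
import math

def paired_t(cond_a, cond_b):
    """Paired t-statistic for within-team differences between two conditions."""
    diffs = [b - a for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample variance of the differences (n - 1 denominator)
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var / n)  # standard error of the mean difference
    return mean / se

# Hypothetical quality scores for 8 teams under each condition
f2f  = [12, 15, 11, 14, 13, 10, 16, 12]   # face-to-face meetings
dist = [16, 18, 14, 17, 15, 13, 19, 15]   # distributed (groupware) meetings

t = paired_t(f2f, dist)  # large positive t favours the distributed condition
```

Comparing `t` against the t-distribution with n − 1 degrees of freedom then yields the p-value reported for such a test.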
Notes
“The software architecture of a program or computing system is the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them.” (Bass et al. 2003).
A few tools have been developed to support distributed inspection, such as IBIS and ISPIS. Although software architecture evaluation is quite different from inspection, tools for distributed inspection may be used to support a distributed software architecture evaluation process. For further discussion of this issue, see our work published in (Ali-Babar and Verner 2005).
References
Ali-Babar M, Verner J (2005) Groupware requirements for supporting software architecture evaluation process. In: Proceedings of the International Workshop on Distributed Software Development, Paris, 29 August 2005
Ali-Babar M, Zhu L, Jeffery R (2004) A Framework for Classifying and Comparing Software Architecture Evaluation Methods. In: Proceedings of the 15th Australian Software Engineering Conference, Melbourne, 13–16 April 2004
Ali-Babar M, Kitchenham B, Gorton I (2006a) Towards a distributed software architecture evaluation process—a preliminary assessment. In: Proceedings of the 28th International Conference on Software Engineering (Emerging Result Track), Shanghai, 20–28 May 2006
Ali-Babar M, Kitchenham B, Zhu L, Gorton I, Jeffery R (2006b) An empirical study of groupware support for distributed software architecture evaluation process. J Syst Softw 79(7):912–925
Basili VR, Selby RW, Hutchens DH (1986) Experimentation in software engineering. IEEE Trans Softw Eng 12(7):733–743
Bass L, Clements P, Kazman R (2003) Software architecture in practice. Addison-Wesley, Reading
Bengtsson P (2002) Architecture-level modifiability analysis. Ph.D. Thesis, Blekinge Institute of Technology
Bengtsson P, Bosch J (2000) An experiment on creating scenario profiles for software change. Ann Softw Eng 9:59–78
Biuk-Aghai RP, Hawryszkiewycz IT (1999) Analysis of virtual workspaces. In: Proceedings of the International Symposium on Database Applications in Non-Traditional Environments, Japan, 28–30 November 1999
Boehm B, Grunbacher P, Briggs RO (2001) Developing groupware for requirements negotiation: lessons learned. IEEE Softw 18(3):46–55
Clements P, Kazman R, Klein M (2002) Evaluating software architectures: methods and case studies. Addison-Wesley, Reading
Damian DE, Eberlein A, Shaw MLG, Gaines BR (2000) Using different communication media in requirements negotiation. IEEE Softw 17(3):28–36
Dobrica L, Niemela E (2002) A survey on software architecture analysis methods. IEEE Trans Softw Eng 28(7):638–653
Ellis CA, Gibbs SJ, Rein GL (1991) Groupware: some issues and experiences. Commun ACM 34(1):38–58
Fjermestad J (2004) An analysis of communication mode in group support systems research. Decis Support Syst 37(2):239–263
Fjermestad J, Hiltz SR (1998–1999) An assessment of group support systems experimental research: methodology and results. J Manage Inf Syst 15(3):7–149
Fjermestad J, Hiltz SR (2000–2001) Group support systems: a descriptive evaluation of case and field studies. J Manage Inf Syst 17(3):115–159
Genuchten MV, Cornelissen W, Dijk CV (1997–1998) Supporting inspection with an electronic meeting system. J Manage Inf Syst 14(3):165–178
Genuchten MV, Van Dijk C, Scholten H, Vogel D (2001) Using group support systems for software inspections. IEEE Softw 18(3):60–65
Halling M, Grunbacher P, Biffl S (2001) Tailoring a COTS group support system for software requirements inspection. In: Proceedings of the 16th International Conference on Automated Software Engineering, San Diego, 26–29 November 2001
Herbsleb JD, Moitra D (2001) Global software development. IEEE Softw 18(2):16–20
Hiltz SR, Turoff M (1978) The network nation: human communication via computer. Addison-Wesley, Reading
Host M, Regnell B, Wohlin C (2000) Using students as subjects—a comparative study of students and professionals in lead-time impact assessment. Empir Softw Eng 5:201–214
Jarvenpaa SL, Rao VS, Huber GP (1988) Computer support for meetings of groups working on unstructured problems: a field experiment. MIS Q 12(4):645–666
Kazman R, Bass L (2002) Making architecture reviews work in the real world. IEEE Softw 19(1):67–73
Kazman R, Bass L, Abowd G, Webb M (1994) SAAM: a method for analyzing the properties of software architectures. In: Proceedings of the 16th International Conference on Software Engineering, Sorrento, May 1994
Kazman R, Abowd G, Bass L, Clements P (1996) Scenario-based analysis of software architecture. IEEE Softw 13(6):47–55
Kazman R, Barbacci M, Klein M, Carriere SJ (1999) Experience with performing architecture tradeoff analysis. In: Proceedings of the 21st International Conference on Software Engineering, Los Angeles, May 1999
Kazman R, Klein M, Clements P (2000) ATAM: method for architecture evaluation. CMU/SEI-2000-TR-004, Software Engineering Institute, Carnegie Mellon University, Pittsburgh
Kiesler S, Siegel J, McGuire TW (1984) Social psychological aspects of computer-mediated communication. Am Psychol 39(10):1123–1134
Kitchenham BA, Pfleeger SL, Pickard LM, Jones PW, Hoaglin DC, El Emam K, Rosenberg J (2002) Preliminary guidelines for empirical research in software engineering. IEEE Trans Softw Eng 28(8):721–734
Kitchenham B, Fay J, Linkman S (2004) The case against cross-over design in software engineering. In: Proceedings of the 11th International Workshop on Software Technology and Engineering Practice, Amsterdam, 19–21 September 2003
Lanubile F, Mallardo T, Calefato F (2003) Tool support for geographically dispersed inspection teams. Softw Process Improv Pract 8(4):217–231
Lassing N, Bengtsson P, Bosch J, Vliet HV (2002) Experience with ALMA: architecture-level modifiability analysis. J Syst Softw 61(1):47–57
Lassing N, Rijsenbrij D, Vliet HV (2003) How well can we predict changes at architecture design time? J Syst Softw 65(2):141–153
Maranzano JF, Rozsypal SA, Zimmerman GH, Warnken GW, Wirth PE, Weiss DM (2005) Architecture reviews: practice and experience. IEEE Softw 22(2):34–43
McGrath JE, Hollingshead AB (1994) Groups interacting with technology. Sage, Newbury Park
Nunamaker J, Vogel D, Heminger A, Martz B (1989) Experiences at IBM with group support systems: a field study. Decis Support Syst 5:183–196
Nunamaker JF, Dennis AR, Valacich JS, Vogel D, George JF (1991) Electronic meeting systems to support group work. Commun ACM 34(7):40–61
Nunamaker JF, Briggs RO, Mittleman DD, Vogel DR, Balthazard PA (1996–1997) Lessons from a dozen years of group support systems research: a discussion of lab and field findings. J Manage Inf Syst 13(3):163–207
Paasivaara M, Lassenius C (2003) Collaboration practices in global inter-organizational software development projects. Softw Process Improv Pract 8(4):183–199
Perry DE, Porter A, Wade MW, Votta LG, Perpich J (2002) Reducing inspection interval in large-scale software development. IEEE Trans Softw Eng 28(7):695–705
Poole MS, Desanctis G (1990) Understanding the use of group decision support systems: the theory of adaptive structuration. In: Fulk J, Steinfield C (eds) Organizations and communication technology. Sage, Newbury, pp 173–193
Porter AA, Johnson PM (1997) Assessing software review meetings: results of a comparative analysis of two experimental studies. IEEE Trans Softw Eng 23(3):129–145
Rosnow RL, Rosenthal R (1997) People studying people: artifacts and ethics in behavioral research. Freeman, San Francisco
Sakthivel S (2005) Virtual workgroups in offshore systems development. Inf Softw Technol 47(5):305–318
Sauer C, Jeffery DR, Land L, Yetton P (2000) The effectiveness of software development technical reviews: a behaviorally motivated program of research. IEEE Trans Softw Eng 26(1):1–14
Senn S (2002) Cross-over trials in clinical research. Wiley, New York
Toothaker LE, Miller L (1996) Introductory statistics for the behavioral sciences. Brooks/Cole, Pacific Grove
Tyran CK, George JF (2002) Improving software inspections with group process support. Commun ACM 45(9):87–92
Tyran CK, Dennis AR, Vogel DR, Nunamaker JF (1992) The application of electronic meeting technology to support strategic management. MIS Q 16:313–334
Valacich JS, Dennis AR, Nunamaker JF (1991) Electronic meeting support: the GroupSystems concepts. Int J Man-Mach Stud 34(2):261–282
Valacich J, Dennis AR, Nunamaker JF (1992) Group size and anonymity effects on computer-mediated idea generation. Small Group Res 23(1):49–73
Wohlin C, Runeson P, Host M, Ohlsson MC, Regnell B, Wesslen A (2000) Experimentation in software engineering: an introduction. Kluwer, Norwell
Zwiki (2004) Zwiki system. http://www.zwiki.org. Cited 30 November 2004.
Acknowledgment
We greatly appreciate the anonymous reviewers’ comments, which helped us improve this paper. We are grateful to the participants of this controlled experiment. Xiaowen Wang helped in preparing the reference scenario profile and in marking the scenario profiles. The first author was working with National ICT Australia when the reported work was performed.
Appendices
Appendix A
Questionnaire to gather self-reported data
Appendix B
Top 15 Reference Profile Scenarios
Appendix C
Experimental Data
Cite this article
Babar, M.A., Kitchenham, B. & Jeffery, R. Comparing distributed and face-to-face meetings for software architecture evaluation: A controlled experiment. Empir Software Eng 13, 39–62 (2008). https://doi.org/10.1007/s10664-007-9052-6