Comparing distributed and face-to-face meetings for software architecture evaluation: A controlled experiment

Empirical Software Engineering

Abstract

Scenario-based methods for evaluating software architecture require a large number of stakeholders to be collocated for evaluation meetings. Collocating stakeholders is often an expensive exercise. To reduce this expense, we have proposed a framework for supporting the software architecture evaluation process using groupware systems. This paper presents a controlled experiment that we conducted to assess the effectiveness of one of the key activities of the proposed groupware-supported evaluation process: developing scenario profiles. We used a cross-over design involving 32 teams of three 3rd- and 4th-year undergraduate students. We found that the quality of scenario profiles developed by distributed teams using a groupware tool was significantly better than the quality of scenario profiles developed by face-to-face teams (p < 0.001). However, questionnaire responses indicated that most participants preferred the face-to-face arrangement (82%), and 60% thought the distributed meetings were less efficient. We conclude that distributed meetings for developing scenario profiles are extremely effective, but that tool support must be of a high standard or participants will not find distributed meetings acceptable.
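
To make the cross-over comparison concrete, the short Python sketch below shows one standard way to analyse an AB/BA cross-over design, in which every team experiences both treatments: a paired test on within-team differences. The scores in the sketch are hypothetical placeholders, not the experimental data reported in this paper (the actual data are listed in Appendix C), and the paired t-test stands in for a full cross-over analysis, which would also examine period and carry-over effects (Senn 2002).

  # Illustrative sketch only: hypothetical scores, not the data from this study.
  from scipy import stats

  # One scenario-profile quality score per team per treatment; each team
  # worked under both arrangements, in counterbalanced order.
  face_to_face = [0.52, 0.48, 0.55, 0.60, 0.45, 0.50, 0.58, 0.47]
  distributed = [0.63, 0.59, 0.66, 0.70, 0.57, 0.61, 0.68, 0.58]

  # Pairing each team with itself removes between-team variability,
  # which is the main statistical advantage of a cross-over design.
  t_stat, p_value = stats.ttest_rel(distributed, face_to_face)
  print(f"t = {t_stat:.2f}, p = {p_value:.4f}")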

Notes

1. “The software architecture of a program or computing system is the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them” (Bass et al. 2003).

2. A few tools have been developed to support distributed inspection, such as IBIS and ISPIS. Although software architecture evaluation is quite different from inspection, tools for distributed inspection may be used to support a distributed software architecture evaluation process. For further discussion of this issue, see our earlier work (Ali-Babar and Verner 2005).

References

  • Ali-Babar M, Verner J (2005) Groupware requirements for supporting software architecture evaluation process. In: Proceedings of the International Workshop on Distributed Software Development, Paris, 29 August 2005

  • Ali-Babar M, Zhu L, Jeffery R (2004) A Framework for Classifying and Comparing Software Architecture Evaluation Methods. In: Proceedings of the 15th Australian Software Engineering Conference, Melbourne, 13–16 April 2004

  • Ali-Babar M, Kitchenham B, Gorton I (2006a) Towards a distributed software architecture evaluation process—a preliminary assessment. In: Proceedings of the 28th International Conference on Software Engineering (Emerging Result Track), Shanghai, 20–28 May 2006

  • Ali-Babar M, Kitchenham B, Zhu L, Gorton I, Jeffery R (2006b) An empirical study of groupware support for distributed software architecture evaluation process. J Syst Softw 79(7):912–925

  • Basili VR, Selby RW, Hutchens DH (1986) Experimentation in software engineering. IEEE Trans Softw Eng 12(7):733–743

  • Bass L, Clements P, Kazman R (2003) Software architecture in practice. Addison-Wesley, Reading

  • Bengtsson P (2002) Architecture-level modifiability analysis. Ph.D. Thesis, Blekinge Institute of Technology

  • Bengtsson P, Bosch J (2000) An experiment on creating scenario profiles for software change. Ann Softw Eng 9:59–78

  • Biuk-Aghai RP, Hawryszkiewycz IT (1999) Analysis of virtual workspaces. In: Proceedings of Database Applications in Non-Traditional Environments, Japan, 28–30 November 1999

  • Boehm B, Grunbacher P, Briggs RO (2001) Developing groupware for requirements negotiation: lessons learned. IEEE Softw 18(3):46–55

  • Clements P, Kazman R, Klein M (2002) Evaluating software architectures: methods and case studies. Addison-Wesley, Reading

  • Damian DE, Eberlein A, Shaw MLG, Gaines BR (2000) Using different communication media in requirements negotiation. IEEE Softw 17(3):28–36

  • Dobrica L, Niemela E (2002) A survey on software architecture analysis methods. IEEE Trans Softw Eng 28(7):638–653

  • Ellis CA, Gibbs SJ, Rein GL (1991) Groupware: some issues and experiences. Commun ACM 34(1):38–58

  • Fjermestad J (2004) An analysis of communication mode in group support systems research. Decis Support Syst 37(2):239–263

  • Fjermestad J, Hiltz SR (1998–1999) An assessment of group support systems experimental research: methodology and results. J Manage Inf Syst 15(3):7–149

  • Fjermestad J, Hiltz SR (2000–2001) Group support systems: a descriptive evaluation of case and field studies. J Manage Inf Syst 17(3):115–159

  • Genuchten MV, Cornelissen W, Dijk CV (1997–1998) Supporting inspection with an electronic meeting system. J Manage Inf Syst 14(3):165–178

  • Genuchten MV, Van Dijk C, Scholten H, Vogel D (2001) Using group support systems for software inspections. IEEE Softw 18(3):60–65

  • Halling M, Grunbacher P, Biffl S (2001) Tailoring a COTS group support system for software requirements inspection. In: Proceedings of the 16th International Conference on Automated Software Engineering, San Diego, 26–29 November 2001

  • Herbsleb JD, Moitra D (2001) Global software development. IEEE Softw 18(2):16–20

  • Hiltz SR, Turoff M (1978) The network nation: human communication via computer. Addison-Wesley, Reading

  • Host M, Regnell B, Wohlin C (2000) Using students as subjects—a comparative study of students and professionals in lead-time impact assessment. Empir Softw Eng 5:201–214

  • Jarvenpaa SL, Rao VS, Huber GP (1988) Computer support for meetings of groups working on unstructured problems: a field experiment. MIS Q 12(4):645–666

  • Kazman R, Bass L (2002) Making architecture reviews work in the real world. IEEE Softw 19(1):67–73

  • Kazman R, Bass L, Abowd G, Webb M (1994) SAAM: a method for analyzing the properties of software architectures. In: Proceedings of the 16th International Conference on Software Engineering, Sorrento, May 1994

  • Kazman R, Abowd G, Bass L, Clements P (1996) Scenario-based analysis of software architecture. IEEE Softw 13(6):47–55

  • Kazman R, Barbacci M, Klein M, Carriere SJ (1999) Experience with performing architecture tradeoff analysis. In: Proceedings of the 21st International Conference on Software Engineering, Los Angeles, May 1999

  • Kazman R, Klein M, Clements P (2000) ATAM: method for architecture evaluation. CMU/SEI-2000-TR-004, Software Engineering Institute, Carnegie Mellon University, Pittsburgh

  • Kiesler S, Siegel J, McGuire TW (1984) Social psychological aspects of computer-mediated communication. Am Psychol 39(10):1123–1134

  • Kitchenham BA, Pfleeger SL, Pickard LM, Jones PW, Hoaglin DC, El Emam K, Rosenberg J (2002) Preliminary guidelines for empirical research in software engineering. IEEE Trans Softw Eng 28(8):721–734

  • Kitchenham B, Fry J, Linkman S (2004) The case against cross-over design in software engineering. In: Proceedings of the 11th International Workshop on Software Technology and Engineering Practice, Amsterdam, 19–21 September 2003

  • Lanubile F, Mallardo T, Calefato F (2003) Tool support for geographically dispersed inspection teams. Softw Process Improv Pract 8(4):217–231

  • Lassing N, Bengtsson P, Bosch J, Vliet HV (2002) Experience with ALMA: architecture-level modifiability analysis. J Syst Softw 61(1):47–57

  • Lassing N, Rijsenbrij D, Vliet HV (2003) How well can we predict changes at architecture design time? J Syst Softw 65(2):141–153

  • Maranzano JF, Rozsypal SA, Zimmerman GH, Warnken GW, Wirth PE, Weiss DM (2005) Architecture reviews: practice and experience. IEEE Softw 22(2):34–43

  • McGrath JE, Hollingshead AB (1994) Groups interacting with technology. Sage, Newbury Park

  • Nunamaker J, Vogel D, Heminger A, Martz B (1989) Experiences at IBM with group support systems: a field study. Decis Support Syst 5:183–196

  • Nunamaker JF, Dennis AR, Valacich JS, Vogel D, George JF (1991) Electronic meeting systems to support group work. Commun ACM 34(7):40–61

  • Nunamaker JF, Briggs RO, Mittleman DD, Vogel DR, Balthazard PA (1996–1997) Lessons from a dozen years of group support systems research: a discussion of lab and field findings. J Manage Inf Syst 13(3):163–207

  • Paasivaara M, Lassenius C (2003) Collaboration practices in global inter-organizational software development projects. Softw Process Improv Pract 8(4):183–199

  • Perry DE, Porter A, Wade MW, Votta LG, Perpich J (2002) Reducing inspection interval in large-scale software development. IEEE Trans Softw Eng 28(7):695–705

  • Poole MS, DeSanctis G (1990) Understanding the use of group decision support systems: the theory of adaptive structuration. In: Fulk J, Steinfield C (eds) Organizations and communication technology. Sage, Newbury Park, pp 173–193

  • Porter AA, Johnson PM (1997) Assessing software review meetings: results of a comparative analysis of two experimental studies. IEEE Trans Softw Eng 23(3):129–145

  • Rosnow RL, Rosenthal R (1997) People studying people: artifacts and ethics in behavioral research. Freeman, San Francisco

  • Sakthivel S (2005) Virtual workgroups in offshore systems development. Inf Softw Technol 47(5):305–318

  • Sauer C, Jeffery DR, Land L, Yetton P (2000) The effectiveness of software development technical reviews: a behaviorally motivated program of research. IEEE Trans Softw Eng 26(1):1–14

  • Senn S (2002) Cross-over trials in clinical research. Wiley, New York

  • Toothaker LE, Miller L (1996) Introductory statistics for the behavioral sciences. Brooks/Cole, Pacific Grove

  • Tyran CK, George JF (2002) Improving software inspections with group process support. Commun ACM 45(9):87–92

  • Tyran CK, Dennis AR, Vogel DR, Nunamaker JF (1992) The application of electronic meeting technology to support strategic management. MIS Q 16:313–334

  • Valacich JS, Dennis AR, Nunamaker JF (1991) Electronic meeting support: the GroupSystems concepts. Int J Man-Mach Stud 34(2):261–282

  • Valacich J, Dennis AR, Nunamaker JF (1992) Group size and anonymity effects on computer-mediated idea generation. Small Group Res 23(1):49–73

  • Wohlin C, Runeson P, Host M, Ohlsson MC, Regnell B, Wesslen A (2000) Experimentation in software engineering: an introduction. Kluwer, Norwell

  • Zwiki (2004) Zwiki system. http://www.zwiki.org. Cited 30 November 2004.

Acknowledgment

We greatly appreciate the anonymous reviewers’ comments, which helped us improve this paper. We are grateful to the participants in this controlled experiment. Xiaowen Wang helped in preparing the reference scenario profiles and marking the scenario profiles. The first author was working with National ICT Australia when the reported work was performed.

Author information

Corresponding author

Correspondence to Muhammad Ali Babar.

Appendices

Appendix A

Questionnaire to gather self-reported data

Appendix B

Top 15 Reference Profile Scenarios

Table 9 Top 15 scenarios in the reference scenario profile for Zwiki system
Table 10 Top 15 scenarios in the reference scenario profile for LiveNet system

Appendix C

Experimental Data

Table 11 Experimental data

About this article

Cite this article

Babar, M.A., Kitchenham, B. & Jeffery, R. Comparing distributed and face-to-face meetings for software architecture evaluation: A controlled experiment. Empir Software Eng 13, 39–62 (2008). https://doi.org/10.1007/s10664-007-9052-6
