DOI: 10.1145/3084226.3084253

Evaluating Software Architecture Evaluation Methods: An Internal Replication

Published: 15 June 2017

ABSTRACT

Context: The size and complexity of software systems, along with the demand for ensuring quality requirements, have fostered interest in software architecture evaluation methods. Although several empirical studies have been reported, the current body of knowledge is still insufficient. To address this concern, we previously presented a family of four controlled experiments comparing a recently proposed method, the Quality-Driven Architecture Derivation and Improvement (QuaDAI) method, with the well-known Architecture Tradeoff Analysis Method (ATAM).

Objective: To provide further evidence on the efficiency and effectiveness of these two software architecture evaluation methods, and on the perceived satisfaction of the participants using them. We report the results of a differentiated internal replication study.

Method: The same materials used in the baseline experiments were employed in this replication, but here the participants were sixteen practitioners. In addition, we used a simpler design to reduce the number of treatment application sequences.

Results: The participants obtained architectures of better quality when applying QuaDAI, and they found this method more useful and more likely to be used than ATAM; however, no differences in efficiency or perceived ease of use were found.

Conclusions: The results are in line with those of the baseline experiments and support the hypothesis that QuaDAI achieves better results than ATAM when performing architectural evaluations; however, further work is needed to improve the methods' usability.


Published in

EASE '17: Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering
June 2017, 405 pages
ISBN: 9781450348041
DOI: 10.1145/3084226
Copyright © 2017 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 71 of 232 submissions, 31%
