Supporting the analyzability of architectural component models - empirical findings and tool support


Abstract

This article discusses the understandability of component models, which are frequently used as central views in architectural descriptions of software systems. We empirically examine how different component-level metrics and the participants' experience and expertise can be used to predict the understandability of those models. In addition, we develop a tool that supports applying the obtained empirical findings in practice. Our results show that the prediction models have a large effect size, meaning that their prediction strength is of high practical significance. The participants' experience plays an important role in the prediction, but the resulting models are not as accurate as those that use the component-level metrics. The developed tool combines the DSL-based architecture abstraction approach with the obtained empirical findings. While the DSL-based architecture abstraction approach enables software architects to keep source code and architecture consistent, the metrics extensions enable them, while working with the DSL, to continuously judge and improve the analyzability of architectural component models based on the understandability of the individual components they create with the DSL. The provided metrics extensions can also help in assessing how much each architectural rule used to specify the DSL affects the understandability of a component, which enables, for instance, finding the rules that contribute most to limited understandability. Finally, our approach supports change impact analysis, i.e., the identification of changes that affect the different analyzability levels of the component models. We studied the applicability of our approach in a case study of an existing open source system.



Notes

  1. Please note that the relationships between the classes capture dependencies arising from method calls, data references, or inheritance. The same dependencies are considered for all sets of metrics.

  2. All versions: https://github.com/soomla/android-store; studied version: https://swa.univie.ac.at/soomla/

  3. https://swa.univie.ac.at/soomla-architectural-components/

  4. In principle, the predictors with the highest VIF values are excluded step by step until the highest remaining VIF value falls below 10. In our case, two predictors have high VIF values that are close to each other (both in the first and the second step of the analysis), so either one of them can be excluded. The linear regression models obtained in all these cases perform almost identically (see Section 4.2). A small R sketch of this pruning loop follows these notes.

  5. Please note that predicting the percentage of correct answers is also possible, but since we focus on estimating time as a measure of understandability effort, we treat the percentage of correct answers as an auxiliary variable that helps in predicting the time variable (see the regression sketch after these notes).

  6. “Cross Validation techniques in R: A brief overview of some methods, packages, and functions for assessing prediction models” (a minimal k-fold sketch follows these notes).

  7. Nested models are those in which all predictors of one model are also contained in the other. Our models use different sets of predictors and are therefore non-nested.

  8. The reason is that Model 2 has fewer predictors, which the AICc criterion favors (an AICc sketch follows these notes).

  9. www.objectaid.com

  10. http://frag.sourceforge.net/

  11. The classes contained in the components are compared using their fully qualified names, which include the names of all packages containing a given class (a name-comparison sketch follows these notes).
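
The following is a minimal R sketch of the stepwise VIF-based exclusion described in Note 4. It is a sketch under assumptions: the data frame d, the response time, and the metric names are hypothetical placeholders rather than the study's actual variables; vif() comes from the car package.

    library(car)  # provides vif()

    # Repeatedly drop the predictor with the highest VIF until the highest
    # remaining VIF falls below the threshold (10, as in Note 4).
    prune_by_vif <- function(data, response, predictors, threshold = 10) {
      repeat {
        model <- lm(reformulate(predictors, response), data = data)
        if (length(predictors) < 2) return(model)  # vif() needs >= 2 terms
        v <- vif(model)
        if (max(v) < threshold) return(model)
        predictors <- setdiff(predictors, names(which.max(v)))
      }
    }

    # e.g. prune_by_vif(d, "time", c("size", "coupling", "complexity"))

When two VIF values are nearly tied, as in Note 4, which predictor gets dropped is effectively an arbitrary choice; the loop above simply takes the maximum.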
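
For Note 5, a brief illustration of using the percentage of correct answers as an auxiliary predictor of time rather than as a response in its own right (column names are again hypothetical):

    # Time is the response; the correctness percentage enters as a predictor.
    m_time <- lm(time ~ correct_pct + size + coupling, data = d)
    summary(m_time)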
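
A hand-rolled k-fold cross-validation sketch in the spirit of the overview cited in Note 6; dedicated packages offer the same functionality, and the data frame and column names below are assumptions:

    set.seed(1)
    k <- 10
    folds <- sample(rep(1:k, length.out = nrow(d)))
    rmse <- sapply(1:k, function(i) {
      fit  <- lm(time ~ size + coupling, data = d[folds != i, ])
      pred <- predict(fit, newdata = d[folds == i, ])
      sqrt(mean((d$time[folds == i] - pred)^2))
    })
    mean(rmse)  # average out-of-sample prediction error across folds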
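
A sketch of the small-sample-corrected AIC (AICc) used to compare the non-nested models of Notes 7 and 8; m1 and m2 stand for two hypothetical fitted lm models:

    # AICc = AIC + 2k(k+1)/(n-k-1), where k counts the estimated parameters
    # (including the error variance) and n is the sample size.
    aicc <- function(m) {
      k <- attr(logLik(m), "df")
      n <- nobs(m)
      AIC(m) + 2 * k * (k + 1) / (n - k - 1)
    }
    c(model1 = aicc(m1), model2 = aicc(m2))  # lower is better

The 2k(k+1)/(n-k-1) penalty grows with the number of predictors, which is why the criterion favors the model with fewer predictors, as Note 8 explains.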
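
Finally, for Note 11, component contents can be compared as sets of fully qualified class names; the FQNs below are made-up examples:

    comp_a <- c("com.soomla.store.StoreController",
                "com.soomla.store.data.StorageManager")
    comp_b <- c("com.soomla.store.StoreController")
    setdiff(comp_a, comp_b)    # classes only in component A
    intersect(comp_a, comp_b)  # classes shared by both components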



Acknowledgements

This work was supported by the Austrian Science Fund (FWF), Project P24345-N23. We thank Dr. Nina Senitschnig from the Department of Statistics and Operations Research for her valuable suggestions and help with the statistical analysis.

Author information

Correspondence to Srdjan Stevanetic.

Communicated by: Richard Paige


Cite this article

Stevanetic, S., Zdun, U. Supporting the analyzability of architectural component models - empirical findings and tool support. Empir Software Eng 23, 3578–3625 (2018). https://doi.org/10.1007/s10664-017-9583-4

Download citation

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s10664-017-9583-4
