Do Critical Components Smell Bad? An Empirical Study with Component-based Software Product Lines

Published: 05 October 2021
DOI: 10.1145/3483899.3483907

ABSTRACT

A component-based software product line (SPL) consists of a set of software products that share common components. For proper SPL product composition, each component has to follow three principles: encapsulating a single feature, restricting data access, and being replaceable. However, developers are known to introduce anomalous structures, i.e., code smells, during the implementation of components. These code smells might violate one or more component principles and hinder SPL product composition. Thus, developers should identify code smells in component-based SPLs, especially those affecting highly interconnected components, which are called critical components. Nevertheless, there is limited evidence of how smelly these critical components tend to be in component-based SPLs. To address this limitation, this paper presents a survey with developers of three SPLs. We ask these developers about their perceptions of what constitutes a critical component. Then, we characterize the critical components of each SPL and identify nine recurring types of code smells. Finally, we quantitatively assess the smelliness of the critical components. Our results suggest that: (i) critical components are ten times more prone to having code smells than non-critical ones; (ii) the most frequent code smell types affecting critical components violate several component principles at once; and (iii) these smell types affect multiple SPL components.
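The abstract reports that critical components are roughly ten times more prone to code smells than non-critical ones. The sketch below is not the authors' analysis; it is a minimal illustration, assuming a hypothetical 2x2 contingency table (critical vs. non-critical components, smelly vs. smell-free) with made-up counts, of how such a ratio can be expressed as an odds ratio via Fisher's exact test with scipy.stats.fisher_exact.

```python
# Minimal sketch (not the authors' script): quantifying how much more prone
# critical components are to code smells, using a 2x2 contingency table.
# All counts below are purely illustrative placeholders.
from scipy.stats import fisher_exact

# Rows: critical vs. non-critical components
# Columns: components with at least one code smell vs. smell-free components
table = [
    [30, 5],   # critical:     30 smelly, 5 smell-free   (hypothetical)
    [20, 40],  # non-critical: 20 smelly, 40 smell-free  (hypothetical)
]

# Fisher's exact test returns the sample odds ratio and a p-value.
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")

# An odds ratio around 10 would correspond to a "ten times more prone" reading.
print(f"odds ratio = {odds_ratio:.2f}, p-value = {p_value:.4f}")
```

Whether the authors derived their figure from an odds ratio, a simple proportion ratio, or another effect-size measure is not stated in the abstract; the sketch only illustrates the general form of the comparison.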


Published in

SBCARS '21: Proceedings of the 15th Brazilian Symposium on Software Components, Architectures, and Reuse
September 2021, 109 pages
ISBN: 9781450384193
DOI: 10.1145/3483899
Copyright © 2021 ACM

Publisher: Association for Computing Machinery, New York, NY, United States