Abstract
Code smells are symptoms of poor design that hamper software evolution and maintenance. Hence, code smells should be detected as early as possible to avoid software quality degradation. However, whether a design or implementation choice is smelly is subjective, varying across projects and developers. In practice, developers may have different perceptions about the presence (or absence) of a smell, which we call developer-sensitive smell detection. Although Machine Learning (ML) techniques are promising for detecting smells, little is known about how accurately these techniques detect developer-sensitive smells. In addition, companies may change developers frequently, and the models should adapt quickly to the preferences of new developers, i.e., using few training instances. We therefore present an investigation of the behavior of ML techniques in detecting developer-sensitive smells. We evaluated seven popular ML techniques based on their accuracy and efficiency in identifying 10 smell types according to the individual perceptions of 63 developers, whose agreement on the presence of smells varied. The results showed that five of the seven techniques had statistically similar behavior and were able to properly detect smells. However, the accuracy of all ML techniques was affected by the developers' level of agreement and by smell type. We also observed that the detection rules generated for individual developers contain more metrics than those reported in related studies. We conclude that code smell detection tools should consider the individual perception of each developer to reach higher accuracy. However, untrained developers or developers with high disagreement can bias smell detection, which can be risky for overall software quality. Moreover, our findings shed light on improving the state of the art and practice in code smell detection, benefiting multiple stakeholders.
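The developer-sensitive setting can be illustrated with a minimal OneR-style rule learner (after Holte 1993, one of the rule-based techniques in this space): given the same classes labeled independently by two developers, the induced one-metric threshold rule differs per developer. All metric values, class data, and developer labels below are hypothetical; this is a sketch, not the study's actual Weka-based pipeline.

```python
# Minimal OneR-style rule learner: induce a per-developer detection rule
# of the form 'metric > threshold => smelly' from labeled classes.
# All data below is hypothetical and only illustrates developer sensitivity.

def one_r(instances):
    """instances: list of (metrics_dict, label), label in {"smelly", "clean"}.
    Returns (metric, threshold, accuracy) for the best single-metric rule."""
    best = (None, None, -1)  # (metric, threshold, number of correct labels)
    for metric in instances[0][0]:
        values = sorted({m[metric] for m, _ in instances})
        # candidate thresholds: midpoints between consecutive observed values
        for lo, hi in zip(values, values[1:]):
            t = (lo + hi) / 2.0
            correct = sum((m[metric] > t) == (label == "smelly")
                          for m, label in instances)
            if correct > best[2]:
                best = (metric, t, correct)
    metric, t, correct = best
    return metric, t, correct / len(instances)

# The same four classes, labeled by two hypothetical developers who
# disagree on the 180-line class:
classes = [
    {"LOC": 520, "WMC": 41, "NOM": 23},
    {"LOC": 180, "WMC": 12, "NOM": 9},
    {"LOC": 900, "WMC": 60, "NOM": 30},
    {"LOC": 60,  "WMC": 4,  "NOM": 3},
]
dev_a = ["smelly", "clean", "smelly", "clean"]
dev_b = ["smelly", "smelly", "smelly", "clean"]

rule_a = one_r(list(zip(classes, dev_a)))
rule_b = one_r(list(zip(classes, dev_b)))
print(rule_a)  # ('LOC', 350.0, 1.0) -- developer A's threshold
print(rule_b)  # ('LOC', 120.0, 1.0) -- developer B's lower threshold
```

Both rules fit their developer perfectly, yet the thresholds differ: a single shared detection rule could not satisfy both developers at once, which is the core motivation for developer-sensitive detection.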
References
Abbes M, Khomh F, Gueheneuc Y G, Antoniol G (2011) An empirical study of the impact of two antipatterns, blob and spaghetti code, on program comprehension. In: 15th European conference on software maintenance and reengineering (CSMR). IEEE, pp 181–190
Amorim L, Costa E, Antunes N, Fonseca B, Ribeiro M (2015) Experience report: evaluating the effectiveness of decision trees for detecting code smells. In: Proceedings of the 2015 IEEE 26th international symposium on software reliability engineering (ISSRE), ISSRE ’15. https://doi.org/10.1109/ISSRE.2015.7381819. IEEE Computer Society, Washington, DC, pp 261–269
Arcelli Fontana F, Mäntylä M V, Zanoni M, Marino A (2016) Comparing and experimenting machine learning techniques for code smell detection. Empir Softw Eng 21(3):1143–1191
Arcoverde R, Guimarães ET, Bertran IM, Garcia A, Cai Y (2013) Prioritization of code anomalies based on architecture sensitiveness. In: 27th Brazilian symposium on software engineering, SBES 2013, Brasilia, Brazil, October 1-4, 2013. https://doi.org/10.1109/SBES.2013.14. IEEE Computer Society, pp 69–78
Azeem M I, Palomba F, Shi L, Wang Q (2019) Machine learning techniques for code smell detection: a systematic literature review and meta-analysis. Inf Softw Technol 108:115–138. https://doi.org/10.1016/j.infsof.2018.12.009
Bertran IM (2011) Detecting architecturally-relevant code smells in evolving software systems. In: Taylor RN, Gall HC, Medvidovic N (eds) Proceedings of the 33rd international conference on software engineering, ICSE 2011, Waikiki, Honolulu, HI, USA, May 21-28, 2011. https://doi.org/10.1145/1985793.1986003. ACM, pp 1090–1093
Bertran IM, Arcoverde R, Garcia A, Chavez C, von Staa A (2012a) On the relevance of code anomalies for identifying architecture degradation symptoms. In: Mens T, Cleve A, Ferenc R (eds) 16th European conference on software maintenance and reengineering, CSMR 2012, Szeged, Hungary, March 27-30, 2012. https://doi.org/10.1109/CSMR.2012.35. IEEE Computer Society, pp 277–286
Bertran IM, Garcia J, Popescu D, Garcia A, Medvidovic N, von Staa A (2012b) Are automatically-detected code anomalies relevant to architectural modularity?: an exploratory analysis of evolving systems. In: Hirschfeld R, Tanter É, Sullivan KJ, Gabriel RP (eds) Proceedings of the 11th International Conference on Aspect-oriented Software Development, AOSD 2012, Potsdam, Germany, March 25-30, 2012. https://doi.org/10.1145/2162049.2162069. ACM, pp 167–178
Bertran IM, Garcia A, Chavez C, von Staa A (2013) Enhancing the detection of code anomalies with architecture-sensitive strategies. In: Cleve A, Ricca F, Cerioli M (eds) 17th European conference on software maintenance and reengineering, CSMR 2013, Genova, Italy, March 5-8, 2013. https://doi.org/10.1109/CSMR.2013.27. IEEE Computer Society, pp 177–186
Bigonha M A, Ferreira K, Souza P, Sousa B, Januário M, Lima D (2019) The usefulness of software metric thresholds for detection of bad smells and fault prediction. Inf Softw Technol 115:79–92
Breiman L, Friedman J H, Olshen R A, Stone C J (1984) Classification and regression trees. Wadsworth and Brooks, Monterey
Cohen W W (1995) Fast effective rule induction. In: Twelfth international conference on machine learning. Morgan Kaufmann, pp 115–123
de Mello RM, Oliveira RF, Garcia A (2017) On the influence of human factors for identifying code smells: a multi-trial empirical study. In: 2017 ACM/IEEE International symposium on empirical software engineering and measurement (ESEM). https://doi.org/10.1109/ESEM.2017.13, pp 68–77
Di Nucci D, Palomba F, Tamburri D A, Serebrenik A, De Lucia A (2018) Detecting code smells using machine learning techniques: Are we there yet?. In: IEEE 25th international conference on software analysis, evolution and reengineering (SANER), pp 612–621. https://doi.org/10.1109/SANER.2018.8330266
Fernandes E, Vale G, da Silva Sousa L, Figueiredo E, Garcia A, Lee J (2017) No code anomaly is an island - anomaly agglomeration as sign of product line instabilities. In: Botterweck G, Werner CML (eds) Mastering scale and complexity in software reuse - 16th international conference on software reuse, ICSR 2017, Salvador, Brazil, May 29-31, 2017, proceedings. Lecture Notes in Computer Science, vol 10221, pp 48–64. https://doi.org/10.1007/978-3-319-56856-0_4
Fleiss J L (1971) Measuring nominal scale agreement among many raters. Psychol Bull 76(5):378
Fontana F A, Mariani E, Mornioli A, Sormani R, Tonello A (2011) An experience report on using code smells detection tools. In: 2011 IEEE Fourth international conference on software testing, verification and validation workshops, pp 450–457. https://doi.org/10.1109/ICSTW.2011.12
Fontana F A, Zanoni M, Marino A, Mäntylä MV (2013) Code smell detection: towards a machine learning-based approach. In: 2013 IEEE International conference on software maintenance, pp 396–399. https://doi.org/10.1109/ICSM.2013.56
Fowler M (1999) Refactoring: improving the design of existing code. Addison-Wesley, Boston
Friedman M (1937) The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J Am Stat Assoc 32(200):675–701
Gopalan R (2012) Automatic detection of code smells in Java source code. Honours dissertation, The University of Western Australia
Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten I H (2009) The weka data mining software: an update. ACM SIGKDD Expl Newsl 11(1):10–18
Ho T K (1995) Random decision forests. In: Proceedings of the third international conference on document analysis and recognition, vol 1. IEEE, pp 278–282
Holte R (1993) Very simple classification rules perform well on most commonly used datasets. Mach Learn 11:63–91
Hozano M, Antunes N, Fonseca B, Costa E (2017a) Evaluating the accuracy of machine learning algorithms on detecting code smells for different developers. In: Proceedings of the 19th international conference on enterprise information systems, pp 474–482
Hozano M, Garcia A, Antunes N, Fonseca B, Costa E (2017b) Smells are sensitive to developers!: on the efficiency of (un)guided customized detection. In: Proceedings of the 25th international conference on program comprehension, ICPC ’17. IEEE Press, Piscataway, pp 110–120. https://doi.org/10.1109/ICPC.2017.32
Hozano M, Garcia A, Fonseca B, Costa E (2018) Are you smelling it? Investigating how similar developers detect code smells. Inf Softw Technol 93(C):130–146. https://doi.org/10.1016/j.infsof.2017.09.002
Khomh F, Vaucher S, Guéhéneuc Y G, Sahraoui H (2009) A bayesian approach for the detection of code and design smells. In: 9th international conference on quality software. QSIC’09. IEEE, pp 305–314
Khomh F, Penta M D, Guéhéneuc Y G, Antoniol G (2011a) An exploratory study of the impact of antipatterns on class change- and fault-proneness. Empir Softw Eng 17(3):243–275. https://doi.org/10.1007/s10664-011-9171-y
Khomh F, Vaucher S, Guéhéneuc Y G, Sahraoui H (2011b) Bdtex: a gqm-based bayesian approach for the detection of antipatterns. J Syst Softw 84(4):559–572. https://doi.org/10.1016/j.jss.2010.11.921
Lantz B (2019) Machine learning with R: expert techniques for predictive modeling. Packt Publishing Ltd
Lanza M, Marinescu R, Ducasse S (2005) Object-oriented metrics in practice. Springer, New York
Maiga A, Ali N, Bhattacharya N, Sabane A, Gueheneuc Y G, Aimeur E (2012) SMURF: a SVM-based incremental anti-pattern detection approach. In: 2012 19th Working conference on reverse engineering, pp 466–475. https://doi.org/10.1109/WCRE.2012.56
Maneerat N, Muenchaisri P (2011) Bad-smell prediction from software design model using machine learning techniques. In: 2011 Eighth international joint conference on computer science and software engineering (JCSSE), pp 331–336. https://doi.org/10.1109/JCSSE.2011.5930143
Mäntylä M V (2005) An experiment on subjective evolvability evaluation of object-oriented software: explaining factors and interrater agreement. In: 2005 International symposium on empirical software engineering, p 10. https://doi.org/10.1109/ISESE.2005.1541837
Mäntylä M V, Lassenius C (2006) Subjective evaluation of software evolvability using code smells: an empirical study. Empir Softw Eng 11. Springer. https://doi.org/10.1007/s10664-006-9002-8
Marinescu R (2004) Detection strategies: metrics-based rules for detecting design flaws. In: Proceedings of the 20th IEEE international conference on software maintenance, ICSM ’04. IEEE Computer Society, Washington, DC, pp 350–359. http://dl.acm.org/citation.cfm?id=1018431.1021443
Mitchell T M (1997) Machine learning. McGraw-Hill series in computer science, McGraw-Hill, Boston. http://opac.inria.fr/record=b1093076
Moha N, Guéhéneuc Y G, Meur A F L, Duchien L, Tiberghien A (2009) From a domain analysis to the specification and detection of code and design smells. Form Asp Comput 22(3):345–361. https://doi.org/10.1007/s00165-009-0115-x
Moha N, Gueheneuc Y G, Duchien L, Le Meur AF (2010) DECOR: a method for the specification and detection of code and design smells. IEEE Trans Softw Eng 36(1):20–36. https://doi.org/10.1109/TSE.2009.50
Munro M (2005) Product metrics for automatic identification of “Bad smell” design problems in java Source-Code. In: 11th IEEE International software metrics symposium (METRICS’05), pp 15–15. https://doi.org/10.1109/METRICS.2005.38
Oizumi WN, Garcia AF, Sousa LS, Cafeo BBP, Zhao Y (2016) Code anomalies flock together: exploring code anomaly agglomerations for locating design problems. In: Dillon LK, Visser W, Williams LA (eds) Proceedings of the 38th international conference on software engineering, ICSE 2016, Austin, TX, USA, May 14-22, 2016. ACM, pp 440–451. https://doi.org/10.1145/2884781.2884868
Oizumi WN, Sousa LS, Oliveira A, Carvalho L, Garcia A, Colanzi TE, Oliveira RF (2019) On the density and diversity of degradation symptoms in refactored classes: a multi-case study. In: Wolter K, Schieferdecker I, Gallina B, Cukier M, Natella R, Ivaki NR, Laranjeiro N (eds) 30th IEEE International symposium on software reliability engineering, ISSRE 2019, Berlin, Germany, October 28-31, 2019. IEEE, pp 346–357. https://doi.org/10.1109/ISSRE.2019.00042
Oliveira D, Assunção W K G, Souza L, Oizumi W, Garcia A, Fonseca B (2020) Applying machine learning to customized smell detection: a multi-project study. In: 34th Brazilian symposium on software engineering, SBES ’20. Association for computing machinery, New York, pp 233–242. https://doi.org/10.1145/3422392.3422427
Oliveira D, Assunção W K G, Garcia A, Fonseca B, Ribeiro M (2022) Supplementary material—developers’ perception matters: Machine learning to detect developer-sensitive smells. https://github.com/smellsensitive/smellsensitive.github.io/raw/main/dataset.rar
Palomba F, Bavota G, Di Penta M, Oliveto R, De Lucia A, Poshyvanyk D (2013) Detecting bad smells in source code using change history information. In: 2013 28th IEEE/ACM international conference on automated software engineering (ASE). IEEE, pp 268–278. https://doi.org/10.1109/ASE.2013.6693086
Palomba F, Bavota G, Di Penta M, Oliveto R, Poshyvanyk D, De Lucia A (2014a) Mining version histories for detecting code smells. IEEE Trans Softw Eng. https://doi.org/10.1109/TSE.2014.2372760
Palomba F, Bavota G, Penta M D, Oliveto R, Lucia A D (2014b) Do they really smell bad? A study on developers’ perception of bad code smells. In: 2014 IEEE International conference on software maintenance and evolution, pp 101–110. https://doi.org/10.1109/ICSME.2014.32
Pecorelli F, Di Nucci D, De Roover C, De Lucia A (2019) On the role of data balancing for machine learning-based code smell detection. In: Proceedings of the 3rd ACM SIGSOFT international workshop on machine learning techniques for software quality evaluation, MaLTeSQuE 2019. Association for Computing Machinery, New York, pp 19–24. https://doi.org/10.1145/3340482.3342744
Pecorelli F, Di Nucci D, De Roover C, De Lucia A (2020) A large empirical assessment of the role of data balancing in machine-learning-based code smell detection. J Syst Softw 169:110693. https://doi.org/10.1016/j.jss.2020.110693. http://www.sciencedirect.com/science/article/pii/S0164121220301448
Platt J (1998) Fast training of support vector machines using sequential minimal optimization. In: Schoelkopf B, Burges C, Smola A (eds) Advances in kernel methods—support vector learning. MIT Press. http://research.microsoft.com/~jplatt/smo.html
Quinlan R (1993) C4.5: programs for machine learning. Morgan Kaufmann Publishers, San Mateo
Rasool G, Arshad Z (2015) A review of code smell mining techniques. J Softw: Evol Process 27(11):867–895
Santos J A M, de Mendonça M G, Silva C V A (2013) An exploratory study to investigate the impact of conceptualization in god class detection. In: Proceedings of the 17th international conference on evaluation and assessment in software engineering, EASE ’13. ACM, New York, pp 48–59. https://doi.org/10.1145/2460999.2461007
Schumacher J, Zazworka N, Shull F, Seaman C, Shaw M (2010) Building empirical support for automated code smell detection. In: Proceedings of the 2010 ACM-IEEE international symposium on empirical software engineering and measurement—ESEM ’10, p 1. https://doi.org/10.1145/1852786.1852797
Silva AL, Garcia A, Cirilo EJR, de Lucena CJP (2013) Are domain-specific detection strategies for code anomalies reusable? An industry multi-project study. In: 27th Brazilian symposium on software engineering, SBES 2013, Brasilia, Brazil, October 1-4, 2013. IEEE Computer Society, pp 79–88. https://doi.org/10.1109/SBES.2013.9
Sousa LS, Oliveira A, Oizumi WN, Barbosa SDJ, Garcia A, Lee J, Kalinowski M, de Mello RM, Fonseca B, Oliveira RF, Lucena C, de Paes RB (2018) Identifying design problems in the source code: a grounded theory. In: Chaudron M, Crnkovic I, Chechik M, Harman M (eds) Proceedings of the 40th international conference on software engineering, ICSE 2018, Gothenburg, Sweden, May 27 - June 03, 2018. ACM, pp 921–931. https://doi.org/10.1145/3180155.3180239
Sousa LS, Oizumi WN, Garcia A, Oliveira A, Cedrim D, Lucena C (2020) When are smells indicators of architectural refactoring opportunities: a study of 50 software projects. In: ICPC ’20: 28th international conference on program comprehension, Seoul, Republic of Korea, July 13-15, 2020. ACM, pp 354–365. https://doi.org/10.1145/3387904.3389276
Spearman C (1904) The proof and measurement of association between two things. Am J Psychol 15(1):72–101
Steinwart I, Christmann A (2008) Support vector machines. Springer Science & Business Media
Surhone LM, Timpledon MT, Marseken SF (2010) Shapiro-Wilk test. VDM Publishing
van Solingen R, Basili V, Caldiera G, Rombach H D (2002) Goal question metric (GQM) approach. Wiley, New York
Vargha A, Delaney H D (2000) A critique and improvement of the cl common language effect size statistics of McGraw and Wong. J Educ Behav Stat 25(2):101–132
Wohlin C, Runeson P, Höst M, Ohlsson M C, Regnell B, Wesslén A (2000) Experimentation in software engineering: an introduction. Kluwer Academic Publishers, Norwell
Yamashita A, Moonen L (2013) Exploring the impact of inter-smell relations on software maintainability: an empirical study. In: Proceedings of the 2013 international conference on software engineering, ICSE ’13. IEEE Press, Piscataway, pp 682–691. http://dl.acm.org/citation.cfm?id=2486788.2486878
Acknowledgements
This study was partially funded by CNPq grants 434969/2018-4, 312149/2016-6, 309844/2018-5, 421306/2018-1, 427787/2018-1, 141276/2020-7 and 408356/2018-9; FAPERJ grants 22520-7/2016, 010002285/2019, 211033/2019, 202621/2019 and PDR-10 Fellowship 202073/2020; FAPPR grant 51435.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of Interest
The authors declare that they have no conflict of interest.
Additional information
Communicated by: Foutse Khomh, Gemma Catolino, Pasquale Salza
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This article belongs to the Topical Collection: Machine Learning Techniques for Software Quality Evaluation (MaLTeSQuE)
Appendix: List of Metrics
| Metric | Description |
|---|---|
| AMW | Average method weight. |
| ATFD | Access to foreign data. |
| BOvR, PRM (similar), SIX (similar) | Base-class overriding ratio; the ratio of overridden methods to all methods of the given class's parent in the inheritance hierarchy. |
| BUR | Ratio of used protected members to all protected members of the given class's parent in the inheritance hierarchy. |
| CBO, CountClassCoupled | Coupling between objects. |
| CFNAMM | Called foreign not accessor or mutator methods. |
| CYCLO, VG, Cyclomatic | McCabe's cyclomatic complexity. |
| FDP | Foreign data providers. |
| LAA | Locality of attribute accesses. |
| LCOM, PercentLackOfCohesion | Lack of cohesion between methods. |
| LMC, chains | Length of the message chain. |
| MLOC | Lines of code of a method. |
| LOC, CountLineCode | Lines of code. |
| LOCactual, CountLine | Total lines of code. |
| LOCprob, CountLineCodeDecl | Number of lines of code for data fields, methods, imported packages, and package declaration. |
| NIM, CountDeclInstanceMethod | Number of instance methods. |
| NOA, NOP, NOF, CountDeclProperty | Number of attributes. |
| NOAM, NACC | Number of accessor methods (getters/setters). |
| NOC, NSC, CountClassDerived | Number of children. |
| NOM, CountDeclMethod | Number of methods, excluding inherited ones. |
| NOMcalls, MC | Number of method calls. |
| NP | Number of parameters. |
| NOPA | Number of public attributes. |
| NOPVA | Number of private attributes. |
| NOV, CountDeclClassVariable | Number of class variables. |
| NProtM, CountDeclMethodProtected | Number of protected members. |
| RFC, CountDeclMethodAll | Number of methods, including inherited ones. |
| WMC, SumCyclomatic | Weighted methods per class. |
| WMCNAMM | Weighted method count of not accessor or mutator methods. |
| WOC | Weight of a class. |
| NMO | Number of overridden methods. |
| ELOC, CountLineCodeExe | Effective lines of code. |
| FANOUT, CountOutput | Max number of references from the subject class to another class in the system. |
| PDM, NFM | Number of forwarding methods. |
| VAVG | Average number of variables. |
| DIT, MaxInheritanceTree | Number of classes above a given class in the inheritance hierarchy. |
| NBD, MaxNesting | Maximum number of nested blocks of a method. |
| TCC | Cohesion between the public methods of a class. |
| IntelligentMethods | Number of intelligent methods. |
| GroupedVariables | Number of grouped variables. |
| Constants | Number of constants. |
| Primitives | Number of primitives. |
| AccessorsRatio | Ratio of accessor methods to other methods. |
| PublicAttributes | Number of public attributes. |
| InnerClass | Number of inner classes. |
| AltAvgLineBlank | Average number of blank lines for all nested functions or methods, including inactive regions. |
| AltAvgLineCode | Average number of lines containing source code for all nested functions or methods, including inactive regions. |
| AltAvgLineComment | Average number of lines containing comments for all nested functions or methods, including inactive regions. |
| AltCountLineBlank | Number of blank lines, including inactive regions. |
| AltCountLineCode | Number of lines containing source code, including inactive regions. |
| AltCountLineComment | Number of lines containing comments, including inactive regions. |
| AvgCyclomatic | Average cyclomatic complexity for all nested functions or methods. |
| AvgCyclomaticModified | Average modified cyclomatic complexity for all nested functions or methods. |
| AvgCyclomaticStrict | Average strict cyclomatic complexity for all nested functions or methods. |
| AvgEssential | Average essential complexity for all nested functions or methods. |
| AvgEssentialStrictModified | Average strict modified essential complexity for all nested functions or methods. |
| AvgLine | Average number of lines for all nested functions or methods. |
| FANIN, CountInput | Max number of references to the subject class from another class in the system. |
| AvgLineBlank | Average number of blank lines for all nested functions or methods. |
| AvgLineCode | Average number of lines containing source code for all nested functions or methods. |
| AvgLineComment | Average number of lines containing comments for all nested functions or methods. |
| CountClassBase | Number of immediate base classes. [aka IFANIN] |
| CountDeclClass | Number of classes. |
| CountDeclClassMethod | Number of class methods. |
| CountDeclExecutableUnit | Number of program units with executable code. |
| CountDeclFile | Number of files. |
| CountDeclFunction | Number of functions. |
| CountDeclInstanceVariable | Number of instance variables. [aka NIV] |
| CountDeclInstanceVariableInternal | Number of internal instance variables. |
| CountDeclInstanceVariablePrivate | Number of private instance variables. |
| CountDeclInstanceVariableProtected | Number of protected instance variables. |
| CountDeclInstanceVariableProtectedInternal | Number of protected internal instance variables. |
| CountDeclInstanceVariablePublic | Number of public instance variables. |
| CountDeclMethodConst | Number of local const methods. |
| CountDeclMethodDefault | Number of local default methods. |
| CountDeclMethodFriend | Number of local friend methods. [aka NFM] |
| CountDeclMethodInternal | Number of local internal methods. |
| CountDeclMethodPrivate | Number of local private methods. [aka NPM] |
| CountDeclMethodProtectedInternal | Number of local protected internal methods. |
| CountDeclMethodPublic | Number of local public methods. [aka NPRM] |
| CountDeclMethodStrictPrivate | Number of local strict private methods. |
| CountDeclMethodStrictPublished | Number of local strict published methods. |
| CountDeclModule | Number of modules. |
| CountDeclProgUnit | Number of non-nested modules, block data units, and subprograms. |
| CountDeclPropertyAuto | Number of auto-implemented properties. |
| CountDeclSubprogram | Number of subprograms. |
| CountLineBlank | Number of blank lines. [aka BLOC] |
| CountLineComment | Number of lines containing comments. [aka CLOC] |
| CountLineInactive | Number of inactive lines. |
| CountLinePreprocessor | Number of preprocessor lines. |
| CountPackageCoupled | Number of other packages coupled to. |
| CountPath | Number of possible paths, not counting abnormal exits or gotos. [aka NPATH] |
| CountPathLog | Log10, truncated to an integer value, of the metric CountPath. |
| CountSemicolon | Number of semicolons. |
| CountStmt | Number of statements. |
| CountStmtDecl | Number of declarative statements. |
| CountStmtEmpty | Number of empty statements. |
| CountStmtExe | Number of executable statements. |
| CyclomaticModified | Modified cyclomatic complexity. |
| CyclomaticStrict | Strict cyclomatic complexity. |
| Essential | Essential complexity. [aka Ev(G)] |
| EssentialStrictModified | Strict modified essential complexity. |
| MaxCyclomatic | Maximum cyclomatic complexity of all nested functions or methods. |
| MaxCyclomaticModified | Maximum modified cyclomatic complexity of nested functions or methods. |
| MaxCyclomaticStrict | Maximum strict cyclomatic complexity of nested functions or methods. |
| MaxEssential | Maximum essential complexity of all nested functions or methods. |
| MaxEssentialKnots | Maximum knots after structured programming constructs have been removed. |
| MaxEssentialStrictModified | Maximum strict modified essential complexity of all nested functions or methods. |
| MaxNesting | Maximum nesting level of control constructs. |
| MinEssentialKnots | Minimum knots after structured programming constructs have been removed. |
| PercentLackOfCohesionModified | 100% minus the average cohesion for class data members, modified for accessor methods. |
| RatioCommentToCode | Ratio of comment lines to code lines. |
| SumCyclomaticModified | Sum of modified cyclomatic complexity of all nested functions or methods. |
| SumCyclomaticStrict | Sum of strict cyclomatic complexity of all nested functions or methods. |
| SumEssential | Sum of essential complexity of all nested functions or methods. |
| SumEssentialStrictModified | Sum of strict modified essential complexity of all nested functions or methods. |
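Metrics such as those above are typically extracted by static analysis tools. As a rough illustration only (not the extraction tooling used in the study), the following sketch computes a simplified CYCLO (McCabe cyclomatic complexity) for Python source with the standard `ast` module, counting 1 plus the number of decision points; real metric suites apply more elaborate, language-specific rules.

```python
import ast

def cyclo(source):
    """Simplified cyclomatic complexity: 1 + number of decision points.
    An illustrative approximation, not a full McCabe implementation."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.BoolOp):
            # 'a and b and c' contributes len(values) - 1 decision points
            decisions += len(node.values) - 1
        elif isinstance(node, (ast.If, ast.IfExp, ast.For,
                               ast.While, ast.ExceptHandler)):
            decisions += 1
    return 1 + decisions

# Hypothetical sample function: two ifs, one 'and', one for-loop
src = """
def classify(x, y):
    if x > 0 and y > 0:
        return "both positive"
    for i in range(x):
        if i % 2:
            return "odd seen"
    return "neither"
"""
print(cyclo(src))  # 5: base 1 + two ifs + one 'and' + one for
```

The same traversal pattern extends naturally to other table entries such as MaxNesting (tracking depth during the walk) or CountStmt (counting statement nodes).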
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Oliveira, D., Assunção, W.K.G., Garcia, A. et al. Developers’ perception matters: machine learning to detect developer-sensitive smells. Empir Software Eng 27, 195 (2022). https://doi.org/10.1007/s10664-022-10234-2