Abstract
Code-smells are generally identified using a set of detection rules. These rules are defined manually to capture the key symptoms that characterize a code-smell, using combinations of mainly quantitative (metric-based), structural, and/or lexical information. In this work, we treat code-smell detection as a multi-objective problem in which examples of code-smells and of well-designed code are used to generate detection rules. To this end, we use multi-objective genetic programming (MOGP) to find the combination of metrics that maximizes the detection of code-smell examples while minimizing the detection of well-designed code examples. We evaluated our proposal on seven large open-source systems and found that most of the five code-smell types considered were detected with an average precision of 87 % and an average recall of 92 %. Statistical analysis of our experiments over 51 runs shows that MOGP performed significantly better than state-of-the-art code-smell detectors.
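The sketch below is a minimal, illustrative reconstruction of the idea described in the abstract, not the authors' implementation. It assumes a simple tree encoding of detection rules (metric/threshold comparisons combined with AND/OR), two objectives (fraction of known smell examples detected, to maximize, and fraction of well-designed examples wrongly flagged, to minimize), and a basic Pareto survival step standing in for the full MOGP machinery. The metric names (LOC, WMC, CBO, LCOM, NOM), thresholds, and genetic operators are assumptions chosen for illustration.

    # Illustrative sketch of MOGP-style code-smell rule generation (assumptions noted above).
    import random

    METRICS = ["LOC", "WMC", "CBO", "LCOM", "NOM"]   # illustrative metric names

    def random_rule(depth=2):
        """Grow a random rule tree; leaves compare a metric against a threshold."""
        if depth <= 0 or random.random() < 0.3:
            return ("GT", random.choice(METRICS), random.uniform(0, 100))
        return (random.choice(["AND", "OR"]),
                random_rule(depth - 1), random_rule(depth - 1))

    def evaluate(rule, cls):
        """Apply a rule tree to a class given as a dict of metric values."""
        if rule[0] == "GT":
            return cls[rule[1]] > rule[2]
        left, right = evaluate(rule[1], cls), evaluate(rule[2], cls)
        return (left and right) if rule[0] == "AND" else (left or right)

    def fitness(rule, smelly, clean):
        """Objective 1: smell examples detected (maximize).
           Objective 2: well-designed examples flagged (minimize)."""
        detected = sum(evaluate(rule, c) for c in smelly) / len(smelly)
        false_alarms = sum(evaluate(rule, c) for c in clean) / len(clean)
        return detected, false_alarms

    def dominates(a, b):
        """Pareto dominance for (detected, false_alarms) fitness pairs."""
        return a[0] >= b[0] and a[1] <= b[1] and a != b

    def mutate(rule):
        """Replace a randomly chosen subtree with a freshly grown one."""
        if rule[0] == "GT" or random.random() < 0.3:
            return random_rule()
        children = list(rule)
        branch = random.choice([1, 2])
        children[branch] = mutate(children[branch])
        return tuple(children)

    def mogp(smelly, clean, pop_size=40, generations=50):
        """Evolve detection rules, keeping the non-dominated (Pareto-best) ones."""
        population = [random_rule() for _ in range(pop_size)]
        for _ in range(generations):
            scored = [(r, fitness(r, smelly, clean)) for r in population]
            front = [r for r, f in scored
                     if not any(dominates(g, f) for _, g in scored)]
            # refill the population by mutating members of the current front
            population = front + [mutate(random.choice(front))
                                  for _ in range(pop_size - len(front))]
        return front

    if __name__ == "__main__":
        # toy data only: "smelly" classes tend to have larger metric values
        smelly = [{m: random.uniform(50, 150) for m in METRICS} for _ in range(20)]
        clean = [{m: random.uniform(0, 60) for m in METRICS} for _ in range(20)]
        for rule in mogp(smelly, clean)[:3]:
            print(rule, fitness(rule, smelly, clean))

In the paper itself, the non-dominated rules are derived from labeled examples drawn from real systems rather than toy data, and the evolutionary search is a full multi-objective GP rather than the simplified survival step shown here.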
Cite this article
Mansoor, U., Kessentini, M., Maxim, B.R. et al. Multi-objective code-smells detection using good and bad design examples. Software Qual J 25, 529–552 (2017). https://doi.org/10.1007/s11219-016-9309-7