EASE: An Effort-aware Extension of Unsupervised Key Class Identification Approaches

Published: 21 April 2024

Abstract

Key class identification approaches aim to identify the most important classes in a software system to help developers, especially newcomers, start the software comprehension process. Many supervised and unsupervised approaches have been proposed so far; however, they have not considered the effort required to comprehend classes. In this article, we identify the challenge of “effort-aware key class identification” and, to partially tackle it, propose EASE, an approach implemented as a modification of existing unsupervised key class identification approaches that takes the effort to comprehend classes into account. First, EASE chooses a set of network metrics that are widely used in existing unsupervised approaches and possess good discriminatory power. Second, EASE normalizes the network metric values of classes to quantify the probability that each class is a key class and uses Cognitive Complexity to estimate the effort required to comprehend classes. Third, EASE proposes a metric, RKCP, to measure the relative key-class proneness of classes and uses it to sort classes in descending order. Finally, an effort threshold is applied, and the top-ranked classes within the threshold are identified as the cost-effective key classes. Empirical results on a set of 18 software systems show that (i) the proposed effort-aware variants perform significantly better in almost all (≈98.33%) cases, (ii) they are superior to most of the baseline approaches with only a few exceptions, and (iii) they scale to large software systems. Based on these findings, we suggest that (i) effort-aware key class identification techniques should be preferred in budget-limited scenarios, and (ii) when using different techniques, the weighting mechanism should be chosen carefully to obtain the best performance.
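
The abstract describes EASE as a four-step pipeline: normalize a network metric into a key-class probability, estimate comprehension effort with Cognitive Complexity, rank classes by the RKCP metric, and cut the ranking at an effort threshold. The Python sketch below only illustrates that flow under stated assumptions: the min-max normalization, the definition of RKCP as normalized metric value divided by Cognitive Complexity, the 20% effort budget, and the names ClassInfo and ease_rank are placeholders chosen for illustration; the paper's actual definitions may differ.

from dataclasses import dataclass

@dataclass
class ClassInfo:
    name: str
    metric: float              # network metric value (e.g., a coupling or centrality measure)
    cognitive_complexity: int  # proxy for the effort to comprehend the class

def ease_rank(classes, effort_ratio=0.2):
    # Step 1: min-max normalize the network metric to [0, 1] as a key-class
    # probability (the normalization scheme here is an assumption).
    lo = min(c.metric for c in classes)
    hi = max(c.metric for c in classes)
    span = (hi - lo) or 1.0
    prob = {c.name: (c.metric - lo) / span for c in classes}

    # Step 2: RKCP = key-class probability per unit of comprehension effort
    # (an illustrative definition of "relative key-class proneness").
    rkcp = {c.name: prob[c.name] / max(c.cognitive_complexity, 1) for c in classes}

    # Step 3: sort classes by RKCP in descending order.
    ranked = sorted(classes, key=lambda c: rkcp[c.name], reverse=True)

    # Step 4: keep top-ranked classes until the effort budget (a fraction of
    # the total Cognitive Complexity) is exhausted.
    budget = effort_ratio * sum(c.cognitive_complexity for c in classes)
    selected, spent = [], 0
    for c in ranked:
        if spent + c.cognitive_complexity > budget:
            break
        selected.append(c)
        spent += c.cognitive_complexity
    return selected

# Usage with made-up values: only "Engine" fits into the 20% effort budget.
classes = [ClassInfo("Parser", 0.9, 120), ClassInfo("Util", 0.2, 15), ClassInfo("Engine", 0.8, 30)]
print([c.name for c in ease_rank(classes)])  # ['Engine']

The cut-off at the first class that would exceed the budget mirrors the abstract's description of keeping only "the top-ranked classes within the threshold".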


Published in

ACM Transactions on Software Engineering and Methodology, Volume 33, Issue 4, May 2024, 940 pages
ISSN: 1049-331X
EISSN: 1557-7392
DOI: 10.1145/3613665
Editor: Mauro Pezzè

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 21 April 2024
      • Online AM: 2 December 2023
      • Accepted: 7 November 2023
      • Revised: 27 September 2023
      • Received: 13 May 2023
