DOI: 10.1145/3180155.3180229

Speedoo: prioritizing performance optimization opportunities

Published: 27 May 2018

Abstract

Performance problems widely exist in modern software systems. Existing performance optimization techniques, including profiling-based and pattern-based techniques, usually fail to consider the architectural impacts among methods, which can easily slow down the overall system performance. This paper contributes a new approach, named Speedoo, to identify groups of methods that should be treated together and deserve high priority for performance optimization. The uniqueness of Speedoo lies in measuring and ranking the performance optimization opportunity of a method based on 1) its architectural impact and 2) its optimization potential. For each highly ranked method, we locate a corresponding optimization space based on 5 performance patterns generalized from empirical observations. The top-ranked optimization spaces are suggested to developers as potential optimization opportunities. Our evaluation on three real-life projects demonstrates that 18.52% to 42.86% of the methods in the top-ranked optimization spaces indeed underwent performance optimization in these projects, outperforming YourKit, one of the state-of-the-art profiling tools, by 2 to 3 times. An important implication of this study is that developers should treat the methods in an optimization space as a group, rather than as individuals, during performance optimization. The proposed approach can provide guidelines and reduce developers' manual effort.
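
To make the ranking step concrete, below is a minimal sketch of the prioritization idea described in the abstract: each method receives an architectural-impact score and an optimization-potential score, and methods are ranked by a combination of the two. The class and record names, the example scores, and the multiplicative combination of the two factors are illustrative assumptions; the paper defines its own metrics, its own ranking scheme, and the grouping of top-ranked methods into optimization spaces.

    import java.util.Comparator;
    import java.util.List;

    // Hypothetical sketch, not the paper's implementation: rank methods by a
    // combination of architectural impact and optimization potential.
    public class SpeedooSketch {

        // Per-method scores; in the paper these would be derived from
        // architecture analysis and performance indicators.
        record MethodStats(String name, double architecturalImpact, double optimizationPotential) {}

        // Combined priority; assumes both scores are normalized to [0, 1]
        // and simply multiplied (an illustrative choice).
        static double priority(MethodStats m) {
            return m.architecturalImpact() * m.optimizationPotential();
        }

        public static void main(String[] args) {
            List<MethodStats> methods = List.of(
                    new MethodStats("Parser.parse", 0.9, 0.7),
                    new MethodStats("Cache.lookup", 0.4, 0.95),
                    new MethodStats("Logger.format", 0.2, 0.3));

            // Print methods from highest to lowest priority. In Speedoo, each
            // top-ranked method would then be expanded into an "optimization
            // space" (a group of related methods) using the 5 performance
            // patterns before being suggested to developers.
            methods.stream()
                    .sorted(Comparator.<MethodStats>comparingDouble(SpeedooSketch::priority).reversed())
                    .forEach(m -> System.out.printf("%-14s priority = %.2f%n", m.name(), priority(m)));
        }
    }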

References

[1]
2017. JProfiler. https://www.ej-technologies.com/products/jprofiler/overview.html. (2017).
[2]
2017. Understand. https://scitools.com/. (2017).
[3]
2017. YourKit. https://www.yourkit.com/. (2017).
[4]
Glenn Ammons, Jong-Deok Choi, Manish Gupta, and Nikhil Swamy. 2004. Finding and Removing Performance Bottlenecks in Large Systems. In ECOOP. 172--196.
[5]
Carliss Y. Baldwin and Kim B. Clark. 1999. Design Rules: The Power of Modularity, Volume 1. MIT Press, Cambridge, MA, USA.
[6]
Thomas Ball and James R. Larus. 1996. Efficient Path Profiling. In MICRO. 46--57.
[7]
S. Baltes, O. Moseler, F. Beck, and S. Diehl. 2015. Navigate, Understand, Communicate: How Developers Locate Performance Bugs. In ESEM. 1--10.
[8]
Suparna Bhattacharya, Mangala Gowri Nanda, K. Gopinath, and Manish Gupta. 2011. Reuse, Recycle to De-bloat Software. In ECOOP. 408--432.
[9]
Marc Brünink and David S. Rosenblum. 2016. Mining Performance Specifications. In FSE. 39--49.
[10]
Yuanfang Cai and Kevin J. Sullivan. 2006. Modularity Analysis of Logical Design Models. In ASE. 91--102.
[11]
Bihuan Chen, Yang Liu, and Wei Le. 2016. Generating Performance Distributions via Probabilistic Symbolic Execution. In ICSE. 49--60.
[12]
Tse-Hsun Chen, Weiyi Shang, Zhen Ming Jiang, Ahmed E. Hassan, Mohamed Nasser, and Parminder Flora. 2014. Detecting Performance Anti-patterns for Applications Developed Using Object-relational Mapping. In ICSE. 1001--1012.
[13]
Norman Cliff. 1993. Dominance statistics: Ordinal analyses to answer ordinal questions. Psychological Bulletin 114, 3 (1993), 494.
[14]
Emilio Coppa, Camil Demetrescu, and Irene Finocchi. 2012. Input-sensitive Profiling. In PLDI. 89--98.
[15]
Luca Della Toffola, Michael Pradel, and Thomas R. Gross. 2015. Performance Problems You Can Fix: A Dynamic Analysis of Memoization Opportunities. In OOPSLA. 607--622.
[16]
Monika Dhok and Murali Krishna Ramanathan. 2016. Directed Test Generation to Detect Loop Inefficiencies. In FSE. 895--907.
[17]
Evelyn Duesterwald and Vasanth Bala. 2000. Software Profiling for Hot Path Prediction: Less is More. In ASPLOS. 202--211.
[18]
Bruno Dufour, Barbara G. Ryder, and Gary Sevitsky. 2008. A Scalable Technique for Characterizing the Usage of Temporaries in Framework-intensive Java Applications. In FSE. 59--70.
[19]
Gordon Fraser and Andrea Arcuri. 2011. EvoSuite: Automatic Test Suite Generation for Object-Oriented Software. In ESEC/FSE. 416--419.
[20]
Simon F. Goldsmith, Alex S. Aiken, and Daniel S. Wilkerson. 2007. Measuring Empirical Computational Complexity. In ESEC/FSE. 395--404.
[21]
Mark Grechanik, Chen Fu, and Qing Xie. 2012. Automatically Finding Performance Problems with Feedback-directed Learning Software Testing. In ICSE. 156--166.
[22]
Shi Han, Yingnong Dang, Song Ge, Dongmei Zhang, and Tao Xie. 2012. Performance Debugging in the Large via Mining Millions of Stack Traces. In ICSE. 145--155.
[23]
Guoliang Jin, Linhai Song, Xiaoming Shi, Joel Scherpelz, and Shan Lu. 2012. Understanding and Detecting Real-world Performance Bugs. In PLDI. 77--88.
[24]
Milan Jovic, Andrea Adamoli, and Matthias Hauswirth. 2011. Catch Me if You Can: Performance Bug Detection in the Wild. In OOPSLA. 155--170.
[25]
Changhee Jung, Silvius Rus, Brian P. Railing, Nathan Clark, and Santosh Pande. 2011. Brainy: Effective Selection of Data Structures. In PLDI. 86--97.
[26]
Charles Killian, Karthik Nagaraj, Salman Pervez, Ryan Braud, James W. Anderson, and Ranjit Jhala. 2010. Finding Latent Performance Bugs in Systems Implementations. In FSE. 17--26.
[27]
James R. Larus. 1999. Whole Program Paths. In PLDI. 259--269.
[28]
Lixia Liu and Silvius Rus. 2009. Perflint: A Context Sensitive Performance Advisor for C++ Programs. In CGO. 265--274.
[29]
Yepang Liu, Chang Xu, and Shing-Chi Cheung. 2014. Characterizing and Detecting Performance Bugs for Smartphone Applications. In ICSE. 1013--1024.
[30]
Rashmi Mudduluru and Murali Krishna Ramanathan. 2016. Efficient Flow Profiling for Detecting Performance Bugs. In ISSTA. 413--424.
[31]
Khanh Nguyen and Guoqing Xu. 2013. Cachetor: Detecting Cacheable Data to Remove Bloat. In ESEC/FSE. 268--278.
[32]
Adrian Nistor, Po-Chun Chang, Cosmin Radoi, and Shan Lu. 2015. CARAMEL: Detecting and Fixing Performance Problems That Have Non-Intrusive Fixes. In ICSE. 902--912.
[33]
Adrian Nistor, Tian Jiang, and Lin Tan. 2013. Discovering, Reporting, and Fixing Performance Bugs. In MSR. 237--246.
[34]
Adrian Nistor, Linhai Song, Darko Marinov, and Shan Lu. 2013. Toddler: Detecting Performance Problems via Similar Memory-access Patterns. In ICSE. 562--571.
[35]
Oswaldo Olivo, Isil Dillig, and Calvin Lin. 2015. Static Detection of Asymptotic Performance Bugs in Collection Traversals. In PLDI. 369--378.
[36]
Michael Pradel, Markus Huggler, and Thomas R. Gross. 2014. Performance Regression Testing of Concurrent Classes. In ISSTA. 13--25.
[37]
Michael Pradel, Parker Schuh, George Necula, and Koushik Sen. 2014. EventBreak: Analyzing the Responsiveness of User Interfaces Through Performance-guided Test Generation. In OOPSLA. 33--47.
[38]
Marija Selakovic, Thomas Glaser, and Michael Pradel. 2017. An Actionable Performance Profiler for Optimizing the Order of Evaluations. In ISSTA. 170--180.
[39]
Marija Selakovic and Michael Pradel. 2016. Performance Issues and Optimizations in JavaScript: An Empirical Study. In ICSE. 61--72.
[40]
Ohad Shacham, Martin Vechev, and Eran Yahav. 2009. Chameleon: Adaptive Selection of Collections. In PLDI. 408--418.
[41]
Du Shen, Qi Luo, Denys Poshyvanyk, and Mark Grechanik. 2015. Automating Performance Bottleneck Detection Using Search-based Application Profiling. In ISSTA. 270--281.
[42]
Connie Smith and Lloyd G. Williams. 2002. New Software Performance AntiPatterns: More Ways to Shoot Yourself in the Foot. In CMG. 667--674.
[43]
Linhai Song and Shan Lu. 2014. Statistical Debugging for Real-world Performance Problems. In OOPSLA. 561--578.
[44]
Linhai Song and Shan Lu. 2017. Performance Diagnosis for Inefficient Loops. In ICSE. 370--380.
[45]
Alexander Wert, Jens Happe, and Lucia Happe. 2013. Supporting Swift Reaction: Automatically Uncovering Performance Problems by Systematic Experiments. In ICSE. 552--561.
[46]
Frank Wilcoxon. 1992. Individual Comparisons by Ranking Methods. In Breakthroughs in Statistics: Methodology and Distribution. 196--202.
[47]
Sunny Wong, Yuanfang Cai, Giuseppe Valetto, Georgi Simeonov, and Kanwarpreet Sethi. 2009. Design Rule Hierarchies and Parallelism in Software Development Tasks. In ASE. 197--208.
[48]
Lu Xiao, Yuanfang Cai, and Rick Kazman. 2014. Titan: A Toolset That Connects Software Architecture With Quality Analysis. In FSE. 763--766.
[49]
Xusheng Xiao, Shi Han, Dongmei Zhang, and Tao Xie. 2013. Context-sensitive Delta Inference for Identifying Workload-dependent Performance Bottlenecks. In ISSTA. 90--100.
[50]
Guoqing Xu. 2012. Finding Reusable Data Structures. In OOPSLA. 1017--1034.
[51]
Guoqing Xu, Matthew Arnold, Nick Mitchell, Atanas Rountev, and Gary Sevitsky. 2009. Go with the Flow: Profiling Copies to Find Runtime Bloat. In PLDI. 419--430.
[52]
Guoqing Xu, Nick Mitchell, Matthew Arnold, Atanas Rountev, Edith Schonberg, and Gary Sevitsky. 2010. Finding Low-utility Data Structures. In PLDI. 174--186.
[53]
Guoqing Xu and Atanas Rountev. 2010. Detecting Inefficiently-used Containers to Avoid Bloat. In PLDI. 160--173.
[54]
Guoqing Xu, Dacong Yan, and Atanas Rountev. 2012. Static Detection of Loop-invariant Data Structures. In ECOOP. 738--763.
[55]
Dacong Yan, Guoqing Xu, and Atanas Rountev. 2012. Uncovering Performance Problems in Java Applications with Reference Propagation Profiling. In ICSE. 134--144.
[56]
Yibiao Yang, Mark Harman, Jens Krinke, Syed Islam, David Binkley, Yuming Zhou, and Baowen Xu. 2016. An Empirical Study on Dependence Clusters for Effort-Aware Fault-Proneness Prediction. In ASE. 296--307.
[57]
Tingting Yu and Michael Pradel. 2016. SyncProf: Detecting, Localizing, and Optimizing Synchronization Bottlenecks. In ISSTA. 389--400.
[58]
Xiao Yu, Shi Han, Dongmei Zhang, and Tao Xie. 2014. Comprehending Performance from Real-world Execution Traces: A Device-driver Case. In ASPLOS. 193--206.
[59]
Shahed Zaman, Bram Adams, and Ahmed E. Hassan. 2012. A Qualitative Study on Performance Bugs. In MSR. 199--208.
[60]
Dmitrijs Zaparanuks and Matthias Hauswirth. 2012. Algorithmic Profiling. In PLDI. 67--76.




    Published In

    ICSE '18: Proceedings of the 40th International Conference on Software Engineering
    May 2018
    1307 pages
    ISBN: 9781450356381
    DOI: 10.1145/3180155
    • Conference Chair: Michel Chaudron
    • General Chair: Ivica Crnkovic
    • Program Chairs: Marsha Chechik, Mark Harman


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. architecture
    2. metrics
    3. performance

    Qualifiers

    • Research-article

    Conference

    ICSE '18

    Acceptance Rates

    Overall Acceptance Rate 276 of 1,856 submissions, 15%




