Abstract
Software today is expected to do more than ever before, while there is constant pressure to run the same programs on smaller and cheaper machines. To meet these demands, application performance has become an essential concern in software development. Unfortunately, many applications still suffer from performance issues: coding or design errors that lead to performance degradation. Finding such issues is challenging: there is limited knowledge about how performance issues are discovered and fixed in practice, and current profilers report only where resources are spent, not where they are wasted. In this chapter, we investigate actionable performance analyses that help developers optimize their software by applying relatively simple code changes. We focus on optimizations that are effective, exploitable, recurring, and out of reach for compilers. These properties mean that the proposed optimizations lead to significant performance improvements, are easy to understand and apply, occur across multiple projects, and cannot be performed automatically because a compiler cannot guarantee that they always preserve the original program semantics. We implement our actionable analyses in practical tools and demonstrate their potential for improving software performance through relatively simple code optimizations.
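One concrete instance of such a simple, compiler-unreachable optimization is reordering the operands of a short-circuit condition so that the cheaper, more often decisive operand is evaluated first. The following JavaScript sketch is only an illustration of the idea; the function names (isVisibleSlow, isVisibleFast, matchesSelector) are hypothetical and merely stand in for the kind of code an actionable analysis might flag. A compiler cannot apply this change in general, because it cannot prove that the swapped operands are free of side effects.

```javascript
// Hypothetical example of reordering the evaluations in a condition.
// Both operands here are side-effect free, so swapping them preserves
// semantics, but a JavaScript compiler cannot prove this in general.

// Before: the expensive check runs on every call, even though the cheap
// flag is usually false and would short-circuit the condition.
function isVisibleSlow(element, enabled) {
  return matchesSelector(element) && enabled; // expensive check first
}

// After: evaluating the cheap, usually-false operand first avoids most
// calls to the expensive check.
function isVisibleFast(element, enabled) {
  return enabled && matchesSelector(element); // cheap check first
}

// Hypothetical stand-in for an expensive, side-effect-free predicate.
function matchesSelector(element) {
  return /^item-\d+$/.test(element.id);
}
```

For an input such as { id: "item-42" } with enabled === false, the reordered version returns immediately without ever running the regular-expression test, which is where the performance gain comes from.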