DOI: 10.1145/3578245.3584851

Towards Solving the Challenge of Minimal Overhead Monitoring

Published: 15 April 2023

Abstract

Examining performance changes or the performance behavior of software requires measuring its performance. This is done via probes, i.e., pieces of code that obtain and process measurement data and that are inserted into the examined application. Executing these probes in a single method creates overhead, which distorts the performance measurements of calling methods and slows down the measurement process. An important challenge for performance measurement is therefore the reduction of this measurement overhead.
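
To make the notion of a probe concrete, the following minimal sketch (in Java, since Kieker targets the JVM) shows a hand-written, source-level probe inlined into a monitored method. The class name, method, record layout, and queue are illustrative assumptions, not Kieker's actual probe API:

    // Minimal sketch of an inlined source-level probe (illustrative
    // assumptions only, not Kieker's actual probe API).
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class MonitoredService {

        // Queue decoupling the probe from a (hypothetical) background
        // writer thread that drains and persists the records.
        static final BlockingQueue<long[]> RECORDS = new LinkedBlockingQueue<>();

        public int businessMethod(int x) {
            long start = System.nanoTime();   // probe: entry timestamp
            try {
                return x * x;                 // original method body
            } finally {
                long end = System.nanoTime(); // probe: exit timestamp
                RECORDS.offer(new long[] { start, end });
            }
        }
    }

Even in this small sketch, every invocation pays for two timestamps and one queue operation, which is exactly the per-call overhead the paper sets out to reduce.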
To address this challenge, we analyze the sources of measurement overhead and derive the following four optimization options: (1) source instrumentation instead of AspectJ instrumentation, (2) reduction of measurement data, (3) replacement of the queue, and (4) aggregation of measurement data. We evaluate the effect of these optimization options using the MooBench benchmark and show that they reduce the monitoring overhead of the monitoring framework Kieker. For MooBench, the average execution duration could be reduced from 4.77 µs to 0.39 µs per method invocation.
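
As an illustration of options (2) and (4), a probe can fold each measured duration into per-method running totals instead of enqueueing one record per invocation. The sketch below only conveys the idea; all identifiers are hypothetical and do not reflect Kieker's implementation:

    // Sketch of in-probe aggregation: one summary per method instead of
    // one record per invocation. All identifiers are illustrative.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.LongAdder;

    public class AggregatingProbe {

        // Invocation count and total duration per monitored method.
        private static final Map<String, LongAdder[]> STATS = new ConcurrentHashMap<>();

        public static void record(String method, long durationNanos) {
            LongAdder[] s = STATS.computeIfAbsent(method,
                    k -> new LongAdder[] { new LongAdder(), new LongAdder() });
            s[0].increment();        // invocation count
            s[1].add(durationNanos); // total duration; mean = total / count
        }
    }

With such aggregation, only the summary values need to be written out periodically, which shrinks both the volume of measurement data and the pressure on the queue, at the cost of per-invocation detail.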


Cited By

  • (2024) Flexible Non-intrusive Dynamic Instrumentation for WebAssembly. In Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3, 398-415. https://doi.org/10.1145/3620666.3651338
  • (2023) Demystify the Fuzzing Methods: A Comprehensive Survey. ACM Computing Surveys 56(3), 1-38. https://doi.org/10.1145/3623375


Published In

ICPE '23 Companion: Companion of the 2023 ACM/SPEC International Conference on Performance Engineering
April 2023, 421 pages
ISBN: 9798400700729
DOI: 10.1145/3578245

Publication rights licensed to ACM. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.


Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. benchmarking
  2. monitoring overhead
  3. performance measurement
  4. software performance engineering

Qualifiers

  • Research-article

Funding Sources

  • Bundesministerium für Bildung und Forschung

Conference

ICPE '23

Acceptance Rates

Overall Acceptance Rate 252 of 851 submissions, 30%



