Abstract
A challenging aspect of optimizing legacy distributed systems with strict real-time requirements is evaluating the performance of a system running in a production environment without disrupting its regular operation. The challenge is even greater when the System Under Evaluation (SUE) runs in a resource-sharing environment and is thus affected by the resource usage of other software running alongside it. Current performance evaluation methods that address this challenge rely on data collected by Application Performance Monitoring (APM) tools, which are not always available in existing systems and are hard to introduce once the system is already in production. In this paper, we improve the initial, proof-of-concept implementation of our RAST (Regression Analysis, Simulation, and load Testing) approach, which evaluates the response time of a distributed system using the system's available request logs. In particular, we substantially improve the machine-learning-based prediction model. Our use case is a commercial alarm system in production use, developed and maintained by the GSelectronic company in Germany. We experimentally demonstrate that our improvements significantly enhance RAST's capability to predict the system's performance and to verify the strict requirements on its response time. We make our model and software freely available to enable the reproduction of our experiments.
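As a minimal illustration of the regression idea underlying RAST (the log format, feature choice, and closed-form single-feature model below are purely hypothetical sketches, not the paper's actual pipeline), one could fit a linear model that maps a feature derived from request logs, e.g. the number of concurrent requests, to the observed response time:

```python
# Sketch of the regression step: fit a linear model that predicts response
# time from a single feature extracted from request logs. Feature and data
# are illustrative assumptions, not taken from the paper.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form, one feature)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical training data: (concurrent requests, response time in ms),
# as could be extracted from a system's request logs.
samples = [(1, 12.0), (2, 19.5), (4, 41.0), (8, 82.5), (16, 160.0)]
xs, ys = zip(*samples)
slope, intercept = fit_linear(xs, ys)

def predict_response_time(concurrent_requests):
    """Predicted response time (ms) under the fitted linear model."""
    return slope * concurrent_requests + intercept
```

In practice one would use a library such as scikit-learn (cited by the paper) with richer features and proper train/test splitting; the closed-form fit above only conveys the shape of the approach.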
This work was supported by the DFG project PPP-DL at the University of Muenster.
Notes
1. Response time is the time interval between a sent request and the received response to it; it usually includes the network latency and the request's processing time.
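The definition above can be sketched as a small log-processing routine that pairs each request's send and receive timestamps (the log format and request ids here are hypothetical; real systems log in their own formats):

```python
from datetime import datetime

# Response time = interval between sending a request and receiving its
# response; computed here by pairing SEND/RECV log entries per request id.
log = [
    ("42", "SEND", "2023-05-01 10:00:00.100"),
    ("42", "RECV", "2023-05-01 10:00:00.350"),
    ("43", "SEND", "2023-05-01 10:00:01.000"),
    ("43", "RECV", "2023-05-01 10:00:01.180"),
]

def response_times_ms(entries):
    """Map each request id to its response time in milliseconds."""
    sent = {}
    times = {}
    for req_id, event, ts in entries:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")
        if event == "SEND":
            sent[req_id] = t
        elif event == "RECV" and req_id in sent:
            times[req_id] = (t - sent[req_id]).total_seconds() * 1000.0
    return times

print(response_times_ms(log))  # request 42 took 250 ms, request 43 took 180 ms
```

Note that a response time measured this way includes both network latency and server-side processing, matching the definition used in the paper.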
© 2024 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Cite this paper
Tomak, J., Liermann, A., Gorlatch, S. (2024). Performance Evaluation of a Legacy Real-Time System: An Improved RAST Approach. In: Guisado-Lizar, JL., Riscos-Núñez, A., Morón-Fernández, MJ., Wainer, G. (eds) Simulation Tools and Techniques. SIMUtools 2023. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 519. Springer, Cham. https://doi.org/10.1007/978-3-031-57523-5_2
Print ISBN: 978-3-031-57522-8
Online ISBN: 978-3-031-57523-5