Abstract
Before software systems are shipped, they are tuned to optimize their field performance. This process, called performance tuning, seeks the best settings for a set of tunable parameters such as buffer space, disk file allocation, main memory partitioning, I/O priority, and the process scheduling quantum. Examples of performance measures to be optimized are query or transaction loss, throughput rate, and response time. Improperly tuned systems can cause field problems even when the product contains no software faults, so it is important that software systems be tuned for optimal performance before they are delivered. Optimal performance tuning is nevertheless quite complex, for several reasons: exponentially many alternative settings, unknown functional relationships between parameters and performance measures, stochastically fluctuating system performance, and expensive empirical experiments. Consequently, tuning is typically practiced as an art and depends heavily on the intuition of experts. In this paper, we examine a tuning method that is repeatable and produces consistently superior results across many different applications. This method, based upon Robust Experimental Design, has revolutionized design optimization in hardware systems. The methodology consists of conducting a few carefully chosen experiments and applying the associated analysis techniques to extract the maximum possible information for performance optimization. Specifically, we give some background on statistical experimental design and demonstrate it on an actual software system providing network database services that had experienced occasional query losses. Focusing on nine carefully chosen parameters, we conducted 12 experiments. This is far fewer experiments, and consequently far less costly in time and effort, than traditional methods would require to collect the same amount of information.
The selection of the experiments incorporated ideas from accelerated life testing and from Robust Experimental Design. Based on the analysis of these data, new settings for the parameters of the software system were implemented. All tests conducted with the new settings have shown that the query loss problem has been brought completely under control.
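To make the economy of the approach concrete, the sketch below constructs a classical 12-run Plackett–Burman screening design, the standard way to study up to 11 two-level factors in only 12 runs; nine of its columns would cover the nine tuning parameters studied here. This is an illustrative reconstruction of the general technique, not the authors' actual design matrix or parameter settings.

```python
# Sketch: a 12-run Plackett-Burman design for up to 11 two-level factors.
# A case study with nine parameters would use 9 of the 11 columns;
# each row is one experiment, +1/-1 coding the high/low parameter setting.
# (Hypothetical reconstruction -- not the paper's actual design matrix.)

# Standard PB(12) generator row (Plackett & Burman, 1946).
SEED = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]

def plackett_burman_12():
    """Return the 12x11 design: 11 cyclic shifts of SEED plus an all-low run."""
    rows = [SEED[-shift:] + SEED[:-shift] for shift in range(11)]
    rows.append([-1] * 11)          # final run: every factor at its low level
    return rows

def main_effects(design, responses):
    """Estimate each factor's main effect: mean(high runs) - mean(low runs)."""
    n_factors = len(design[0])
    effects = []
    for j in range(n_factors):
        high = [y for row, y in zip(design, responses) if row[j] == +1]
        low = [y for row, y in zip(design, responses) if row[j] == -1]
        effects.append(sum(high) / len(high) - sum(low) / len(low))
    return effects

design = plackett_burman_12()
# Every column is balanced (six +1s, six -1s) and any two columns are
# orthogonal, so all 11 main effects can be estimated independently.
```

The design's orthogonality is what makes 12 runs informative: with a one-factor-at-a-time approach, screening nine parameters at two levels each while averaging out noise would demand far more experiments.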
Dalal, S.R., Hamada, M.S. & Wang, T. How to improve performance of software systems: A methodology and a case study for tuning performance. Annals of Software Engineering 8, 53–84 (1999). https://doi.org/10.1023/A:1018910926921