Abstract
Reaching exascale imposes a high level of automation on HPC supercomputers. In this paper, a self-optimization strategy is proposed to improve application IO performance using statistical and machine-learning-based methods.
The proposed method takes advantage of the collected IO data through an off-line analysis to infer the most relevant parameterization of an IO accelerator to be used for the next launch of a similar job. This is thus a continuous improvement process that converges toward an optimal parameterization over successive iterations.
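To make this loop concrete, the following is a minimal, self-contained sketch of such a continuous-improvement process, assuming a hypothetical prefetch_size accelerator parameter, a synthetic run_job cost, and a placeholder infer_parameterization step (the actual inference is the regression-based numerical optimization described below):

```python
# Sketch of the continuous-improvement loop (illustrative, not the authors'
# code): each run is launched with the parameterization inferred from all
# previous runs, and its measured execution time feeds the next iteration.
import random

def run_job(params):
    # Stand-in for launching the real application with a given
    # IO-accelerator parameterization; returns a synthetic execution time.
    return (params["prefetch_size"] - 128) ** 2 / 100.0 + random.uniform(0, 1)

def infer_parameterization(history):
    # Placeholder inference step: perturb the best parameterization seen so
    # far (the paper uses regression-based numerical optimization instead).
    if not history:
        return {"prefetch_size": 64}
    best, _ = min(history, key=lambda h: h[1])
    return {"prefetch_size": best["prefetch_size"] + random.randint(-16, 16)}

history = []                          # (parameterization, execution_time) pairs
for _ in range(20):                   # the experiments report < 20 runs suffice
    params = infer_parameterization(history)
    history.append((params, run_job(params)))
print(min(history, key=lambda h: h[1]))
```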
The inference process uses a numerical optimization method to propose the parameterization that minimizes the execution time of the considered application. A regression method is used to model the objective function to be optimized from a sparse set of data collected from past runs.
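As an illustration of this surrogate-based step (a sketch under assumed libraries and synthetic data, not the authors' implementation), one can fit a Gaussian process regression model on past (parameterization, execution time) pairs and minimize the resulting surrogate with a derivative-free method such as Nelder-Mead:

```python
# Fit a regression surrogate on sparse past runs, then minimize the
# surrogate's predicted runtime to propose the next parameterization.
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical history: each row is an IO-accelerator parameterization,
# y holds the measured execution time of the corresponding run.
X = np.array([[1.0, 64.0], [4.0, 128.0], [2.0, 256.0], [8.0, 32.0]])
y = np.array([120.5, 95.2, 88.7, 110.3])

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=50.0),
                                     normalize_y=True)
surrogate.fit(X, y)

def predicted_runtime(params):
    # Objective function for the optimizer: the surrogate's prediction.
    return surrogate.predict(params.reshape(1, -1))[0]

# Start the search from the best parameterization observed so far.
result = minimize(predicted_runtime, x0=X[np.argmin(y)],
                  method="Nelder-Mead")
print("next parameterization to try:", result.x)
```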
Experiments on different artificial parametric spaces show that the proposed method needs fewer than 20 runs to converge toward an optimal parameterization of the IO accelerator.