Self-optimization Strategy for IO Accelerator Parameterization

  • Conference paper
  • In: High Performance Computing (ISC High Performance 2018)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11203)


Abstract

Reaching exascale imposes a high level of automation on HPC supercomputers. In this paper, a self-optimization strategy is proposed to improve application IO performance using statistical and machine-learning-based methods.

The proposed method takes advantage of the collected IO data, through an off-line analysis, to infer the most relevant parameterization of an IO accelerator to be used for the next launch of a similar job. This is thus a continuous improvement process that converges toward an optimal parameterization over successive iterations.

The inference process uses a numerical optimization method to propose the parameterization that minimizes the execution time of the considered application. A regression method is used to model the objective function to be optimized from a sparse set of data collected from past runs.
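The abstract does not name the specific regression model or optimization method. As a minimal sketch of this inference step, assuming Gaussian process regression as the surrogate for execution time and a bound-constrained local minimizer (both are assumptions, not the paper's confirmed choices; `propose_parameterization` is a hypothetical helper name), the proposal of the next parameterization could look like this:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def propose_parameterization(past_params, past_times, bounds):
    """Fit a surrogate model of execution time on past runs and
    return the parameterization minimizing the predicted time."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.asarray(past_params), np.asarray(past_times))

    # The surrogate may have several local minima: restart the local
    # search from every past point and keep the best proposal.
    best_x, best_t = None, np.inf
    for x0 in past_params:
        res = minimize(lambda x: gp.predict(x.reshape(1, -1))[0],
                       x0=np.asarray(x0, dtype=float),
                       bounds=bounds, method="L-BFGS-B")
        if res.fun < best_t:
            best_x, best_t = res.x, res.fun
    return best_x
```

The sparse set of past runs is all the model sees, so early proposals mostly exploit the surrogate's interpolation; each new run enriches the data set and refines the next fit.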

Experiments on different artificial parametric spaces show that the proposed method requires fewer than 20 runs to converge toward an optimal parameterization of the IO accelerator.
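Such a convergence experiment on an artificial parametric space can be emulated by looping the propose-and-run cycle over a synthetic objective, reusing `propose_parameterization` from the sketch above (the quadratic test function and its optimum at (3, 7) are purely illustrative, not the paper's benchmarks):

```python
def fake_run(x):
    # Synthetic "execution time" standing in for a real job run;
    # minimal at x = (3, 7).
    return (x[0] - 3.0) ** 2 + (x[1] - 7.0) ** 2 + 10.0

bounds = [(0.0, 10.0), (0.0, 10.0)]
rng = np.random.default_rng(0)
params = [rng.uniform(0.0, 10.0, size=2) for _ in range(3)]  # bootstrap runs
times = [fake_run(x) for x in params]

for it in range(20):  # iterate the continuous improvement process
    x_next = propose_parameterization(params, times, bounds)
    params.append(x_next)
    times.append(fake_run(x_next))
    print(f"run {len(params)}: params={np.round(x_next, 2)}, time={times[-1]:.2f}")
```

Convergence is then read off as the number of runs needed before the proposed parameterization stops improving the measured time.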



Author information

Corresponding author: Gaël Goret


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Vincent, L., Nabe, M., Goret, G. (2018). Self-optimization Strategy for IO Accelerator Parameterization. In: Yokota, R., Weiland, M., Shalf, J., Alam, S. (eds) High Performance Computing. ISC High Performance 2018. Lecture Notes in Computer Science, vol 11203. Springer, Cham. https://doi.org/10.1007/978-3-030-02465-9_11


  • DOI: https://doi.org/10.1007/978-3-030-02465-9_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-02464-2

  • Online ISBN: 978-3-030-02465-9

  • eBook Packages: Computer Science, Computer Science (R0)
