Part of the book series: Proceedings of the International Neural Networks Society (INNS, volume 2)

Abstract

Bayesian Optimization is well known as an effective algorithm for optimizing expensive black-box functions, and its popularity has surged in recent years alongside the rise of machine learning, owing to its role as the most important algorithm for hyperparameter optimization. Many practitioners use it, yet few fully comprehend it, because behind this powerful technique lies a plethora of complex mathematical concepts with which most computer scientists and machine learning practitioners are barely familiar. Even its simplest and most traditional building block, the Gaussian Process, alone involves enough advanced multivariate probability to fill hundreds of pages. This work reviews the algorithm and its traditional components, such as the Gaussian Process and the Upper Confidence Bound, in an alternative way, presenting fresh intuition while filtering out the mathematical complications. The paper serves as a functional reference for applied computer scientists who seek a quick understanding of the subject in order to apply the tool more effectively.
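To make the ingredients named above concrete, the following is a minimal sketch (not taken from the paper) of the Bayesian optimization loop on a one-dimensional toy problem: a zero-mean Gaussian Process surrogate with an RBF kernel supplies a posterior mean and uncertainty, and the Upper Confidence Bound acquisition selects the next query point. The toy objective, kernel length scale, candidate grid, and exploration weight kappa are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.3):
    """Squared-exponential covariance between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-5):
    """Posterior mean and std. dev. of a zero-mean GP at x_query."""
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_obs, x_query)
    Kss = rbf_kernel(x_query, x_query)
    alpha = np.linalg.solve(K, y_obs)    # K^{-1} y
    v = np.linalg.solve(K, Ks)           # K^{-1} K_*
    mu = Ks.T @ alpha
    var = np.diag(Kss - Ks.T @ v)
    return mu, np.sqrt(np.maximum(var, 0.0))

def objective(x):
    # Hypothetical expensive black-box function we want to maximize.
    return np.sin(3.0 * x) - x ** 2 + 0.7 * x

rng = np.random.default_rng(0)
x_obs = rng.uniform(-1.0, 2.0, size=3)   # small initial design
y_obs = objective(x_obs)
x_grid = np.linspace(-1.0, 2.0, 500)     # candidate query points
kappa = 2.0                              # UCB exploration weight

for _ in range(10):                      # evaluation budget
    mu, sigma = gp_posterior(x_obs, y_obs, x_grid)
    ucb = mu + kappa * sigma             # Upper Confidence Bound
    x_next = x_grid[np.argmax(ucb)]      # most promising point
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

best = np.argmax(y_obs)
print(f"best x = {x_obs[best]:.3f}, f(x) = {y_obs[best]:.3f}")
```

Larger values of kappa push the loop toward exploring regions where the surrogate is uncertain; smaller values exploit the current posterior mean.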

Supported by the Frankfurt University of Applied Sciences.

Copyright information

© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Thuan, L.G., Logofatu, D. (2020). A Comparative Study on Bayesian Optimization. In: Iliadis, L., Angelov, P., Jayne, C., Pimenidis, E. (eds) Proceedings of the 21st EANN (Engineering Applications of Neural Networks) 2020 Conference. EANN 2020. Proceedings of the International Neural Networks Society, vol 2. Springer, Cham. https://doi.org/10.1007/978-3-030-48791-1_46

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-48791-1_46

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-48790-4

  • Online ISBN: 978-3-030-48791-1

  • eBook Packages: Computer Science, Computer Science (R0)
