Black-Box Optimizer with Stochastic Implicit Natural Gradient

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12977)

Abstract

Black-box optimization is important for many computationally intensive applications, including reinforcement learning (RL), robot control, etc. This paper presents a novel theoretical framework for black-box optimization in which stochastic updates are performed with an implicit natural gradient of an exponential-family search distribution. Theoretically, we prove a convergence rate for our framework with full matrix update for convex functions under a Gaussian distribution. Our methods are very simple and have fewer hyper-parameters than CMA-ES [12]. Empirically, our method with full matrix update achieves performance competitive with CMA-ES, one of the state-of-the-art methods, on benchmark test problems. Moreover, our methods achieve high optimization precision on some challenging test functions (e.g., the \(l_1\)-norm ellipsoid test problem and the Levy test problem), whereas methods with an explicit natural gradient, i.e., IGO [21] with full matrix update, cannot. This demonstrates the effectiveness of our methods.
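
To give a concrete sense of the derivative-free sample-and-update loop that this family of methods follows, below is a minimal, hypothetical sketch of a black-box optimizer with a Gaussian search distribution and natural-gradient-style (separable, NES-flavoured) updates. It is not the paper's implicit natural gradient update or its full matrix variant; the function names, learning rates, and toy objective are assumptions chosen only for illustration.

```python
# Minimal, hypothetical sketch of black-box optimization with a Gaussian
# search distribution and natural-gradient-style updates (separable,
# NES-flavoured). This is NOT the paper's implicit natural gradient update
# or its full matrix variant; names, step sizes, and the toy objective are
# illustrative assumptions only.
import numpy as np


def sphere(x):
    """Toy black-box objective: only function values are used, no gradients."""
    return float(np.sum(x ** 2))


def gaussian_black_box_search(f, dim, iters=300, pop=20,
                              lr_mu=1.0, lr_sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.normal(size=dim)      # mean of the Gaussian search distribution
    sigma = np.ones(dim)           # per-coordinate standard deviations

    for _ in range(iters):
        eps = rng.normal(size=(pop, dim))   # standard-normal perturbations
        xs = mu + sigma * eps               # candidate solutions
        fs = np.array([f(x) for x in xs])   # black-box evaluations only

        # Normalise fitness so the update is invariant to shifts/scales of f.
        z = (fs - fs.mean()) / (fs.std() + 1e-8)

        # Monte-Carlo search-gradient estimates w.r.t. the mean and log-std.
        g_mu = (z[:, None] * eps).mean(axis=0)
        g_log_sigma = (z[:, None] * (eps ** 2 - 1.0)).mean(axis=0)

        # Descent steps (minimisation): move the distribution away from bad samples.
        mu = mu - lr_mu * sigma * g_mu
        sigma = sigma * np.exp(-0.5 * lr_sigma * g_log_sigma)

    return mu


if __name__ == "__main__":
    x_best = gaussian_black_box_search(sphere, dim=10)
    print("final mean:", x_best, "objective:", sphere(x_best))
```

The method proposed in the paper differs in that its update is derived implicitly in the natural-parameter space of an exponential-family distribution and supports a full covariance matrix; the sketch above only illustrates the generic sample, evaluate, and update loop shared by optimizers of this kind.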

References

  1. Akimoto, Y., Nagata, Y., Ono, I., Kobayashi, S.: Bidirectional relation between CMA evolution strategies and natural evolution strategies. In: Schaefer, R., Cotta, C., Kołodziej, J., Rudolph, G. (eds.) PPSN 2010. LNCS, vol. 6238, pp. 154–163. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15844-5_16

  2. Amari, S.I.: Natural gradient works efficiently in learning. Neural Comput. 10(2), 251–276 (1998)

  3. Amari, S.: Information Geometry and Its Applications. AMS, vol. 194. Springer, Tokyo (2016). https://doi.org/10.1007/978-4-431-55978-8

  4. Azoury, K.S., Warmuth, M.K.: Relative loss bounds for on-line density estimation with the exponential family of distributions. Mach. Learn. 43(3), 211–246 (2001)

  5. Back, T., Hoffmeister, F., Schwefel, H.P.: A survey of evolution strategies. In: Proceedings of the Fourth International Conference on Genetic Algorithms, vol. 2. Morgan Kaufmann Publishers, San Mateo (1991)

  6. Balasubramanian, K., Ghadimi, S.: Zeroth-order (non)-convex stochastic optimization via conditional gradient and gradient updates. In: Advances in Neural Information Processing Systems, pp. 3455–3464 (2018)

  7. Barsce, J.C., Palombarini, J.A., Martínez, E.C.: Towards autonomous reinforcement learning: automatic setting of hyper-parameters using Bayesian optimization. In: 2017 XLIII Latin American Computer Conference (CLEI), pp. 1–9. IEEE (2017)

  8. Bull, A.D.: Convergence rates of efficient global optimization algorithms. J. Mach. Learn. Res. (JMLR) 12, 2879–2904 (2011)

  9. Choromanski, K., Pacchiano, A., Parker-Holder, J., Tang, Y.: From complexity to simplicity: adaptive ES-active subspaces for blackbox optimization. arXiv:1903.04268 (2019)

  10. Choromanski, K., Rowland, M., Sindhwani, V., Turner, R.E., Weller, A.: Structured evolution with compact architectures for scalable policy optimization. In: ICML, pp. 969–977 (2018)

  11. Domke, J.: Provable smoothness guarantees for black-box variational inference. arXiv preprint arXiv:1901.08431 (2019)

  12. Hansen, N.: The CMA evolution strategy: a comparing review. In: Lozano, J.A., Larrañaga, P., Inza, I., Bengoetxea, E. (eds.) Towards a New Evolutionary Computation. STUDFUZZ, vol. 192, pp. 75–102. Springer, Heidelberg (2006). https://doi.org/10.1007/3-540-32494-1_4

  13. Khan, M.E., Lin, W.: Conjugate-computation variational inference: converting variational inference in non-conjugate models to inferences in conjugate models. arXiv preprint arXiv:1703.04265 (2017)

  14. Khan, M.E., Nielsen, D.: Fast yet simple natural-gradient descent for variational inference in complex models. In: 2018 International Symposium on Information Theory and Its Applications (ISITA), pp. 31–35. IEEE (2018)

  15. Khan, M.E., Nielsen, D., Tangkaratt, V., Lin, W., Gal, Y., Srivastava, A.: Fast and scalable Bayesian deep learning by weight-perturbation in Adam. In: ICML (2018)

  16. Liu, G., et al.: Trust region evolution strategies. In: AAAI (2019)

  17. Lizotte, D.J., Wang, T., Bowling, M.H., Schuurmans, D.: Automatic gait optimization with Gaussian process regression. In: IJCAI, vol. 7, pp. 944–949 (2007)

  18. Lyu, Y.: Spherical structured feature maps for kernel approximation. In: Proceedings of the 34th International Conference on Machine Learning (ICML), pp. 2256–2264 (2017)

  19. Lyu, Y., Yuan, Y., Tsang, I.W.: Efficient batch black-box optimization with deterministic regret bounds. arXiv preprint arXiv:1905.10041 (2019)

  20. Negoescu, D.M., Frazier, P.I., Powell, W.B.: The knowledge-gradient algorithm for sequencing experiments in drug discovery. INFORMS J. Comput. 23(3), 346–363 (2011)

  21. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: a unifying picture via invariance principles. J. Mach. Learn. Res. (JMLR) 18(1), 564–628 (2017)

  22. Raskutti, G., Mukherjee, S.: The information geometry of mirror descent. IEEE Trans. Inf. Theory 61(3), 1451–1457 (2015)

  23. Salimans, T., Ho, J., Chen, X., Sidor, S., Sutskever, I.: Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864 (2017)

  24. Snoek, J., Larochelle, H., Adams, R.P.: Practical Bayesian optimization of machine learning algorithms. In: NeurIPS, pp. 2951–2959 (2012)

  25. Srinivas, M., Patnaik, L.M.: Genetic algorithms: a survey. Computer 27(6), 17–26 (1994)

  26. Srinivas, N., Krause, A., Kakade, S.M., Seeger, M.: Gaussian process optimization in the bandit setting: no regret and experimental design. In: ICML (2010)

  27. Wang, G.G., Shan, S.: Review of metamodeling techniques in support of engineering design optimization. J. Mech. Des. 129(4), 370–380 (2007)

  28. Wierstra, D., Schaul, T., Glasmachers, T., Sun, Y., Peters, J., Schmidhuber, J.: Natural evolution strategies. J. Mach. Learn. Res. (JMLR) 15(1), 949–980 (2014)

  29. Xu, Z.: Deterministic sampling of sparse trigonometric polynomials. J. Complex. 27(2), 133–140 (2011)


Acknowledgement

We would like to thank all anonymous reviewers and the area chair for their valuable comments and suggestions. Yueming Lyu was supported by the UTS President Scholarship. Ivor Tsang was supported by the Australian Research Council grants (DP180100106 and DP200101328).

Author information

Corresponding author

Correspondence to Yueming Lyu.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1137 KB)

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Lyu, Y., Tsang, I.W. (2021). Black-Box Optimizer with Stochastic Implicit Natural Gradient. In: Oliver, N., Pérez-Cruz, F., Kramer, S., Read, J., Lozano, J.A. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2021. Lecture Notes in Computer Science, vol. 12977. Springer, Cham. https://doi.org/10.1007/978-3-030-86523-8_14

  • DOI: https://doi.org/10.1007/978-3-030-86523-8_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86522-1

  • Online ISBN: 978-3-030-86523-8

  • eBook Packages: Computer Science, Computer Science (R0)
