Convergence Rate Evaluation of Derivative-Free Optimization Techniques

  • Conference paper
Machine Learning, Optimization, and Big Data (MOD 2016)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10122)

Abstract

This paper presents a convergence rate comparison of five derivative-free numerical optimization techniques across a set of 50 benchmark objective functions. The results suggest that Adaptive Memory Programming for constrained Global Optimization and a variant of Simulated Annealing are two of the fastest-converging techniques in this set. Finally, a mechanism is provided for expanding the set of optimization algorithms evaluated.
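Although the paper's harness is not reproduced on this page, the comparison it describes lends itself to a simple pluggable design in which each optimizer exposes a common interface and reports a best-so-far convergence curve. The following Python sketch illustrates one such design under stated assumptions: SciPy's dual_annealing and differential_evolution stand in for the five techniques evaluated, and the sphere function stands in for the 50 benchmark objectives; none of the names below come from the paper's code.

    # Illustrative harness: each optimizer is a callable with a common
    # (func, bounds, budget) signature, so expanding the set of algorithms
    # means adding one entry to the OPTIMIZERS dict.
    import numpy as np
    from scipy.optimize import dual_annealing, differential_evolution

    def sphere(x):
        # Toy stand-in for one of the paper's 50 benchmark objectives.
        return float(np.sum(np.asarray(x) ** 2))

    BOUNDS = [(-5.0, 5.0)] * 4

    def traced(func, trace):
        # Wrap an objective so every evaluation is recorded.
        def wrapped(x):
            v = func(x)
            trace.append(v)
            return v
        return wrapped

    def run_dual_annealing(func, bounds, budget):
        trace = []
        dual_annealing(traced(func, trace), bounds, maxfun=budget, seed=0)
        return np.minimum.accumulate(trace)  # best-so-far convergence curve

    def run_diff_evolution(func, bounds, budget):
        trace = []
        differential_evolution(traced(func, trace), bounds, seed=0, tol=0,
                               maxiter=budget // (15 * len(bounds)))
        return np.minimum.accumulate(trace)

    OPTIMIZERS = {
        "dual_annealing": run_dual_annealing,          # a Simulated Annealing variant
        "differential_evolution": run_diff_evolution,
    }

    if __name__ == "__main__":
        for name, run in OPTIMIZERS.items():
            curve = run(sphere, BOUNDS, budget=2000)
            print(f"{name}: best of {len(curve)} evals = {curve[-1]:.3e}")

Under this design, a faster-converging method is one whose best-so-far curve reaches a target accuracy in fewer function evaluations, in the spirit of the convergence-rate comparison described above.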

Notes

  1. The computation of \(n(\{i \in T_\alpha \text{ s.t. } v \le i\})\) in the upper product sum can be done in \(O(\log n(T_\alpha))\) time, via a binary search, if each \(T_\alpha\) is sorted. \(n_{avg}\) represents the average number of trials over all \(\alpha \in A\).
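As an illustration of the footnote's claim: with each \(T_\alpha\) stored as a sorted list, the count \(n(\{i \in T_\alpha \text{ s.t. } v \le i\})\) reduces to a single binary search. A minimal sketch in Python (the names count_at_least and trials are illustrative, not from the paper's code):

    # Sketch of the footnote's O(log n(T_a)) counting trick:
    # in an ascending list of trial values, the number of elements >= v
    # is found with one binary search instead of a linear scan.
    from bisect import bisect_left

    def count_at_least(sorted_trials, v):
        """Return n({i in T_a : v <= i}) for an ascending list T_a."""
        return len(sorted_trials) - bisect_left(sorted_trials, v)

    trials = sorted([0.3, 1.2, 2.5, 2.5, 4.0])
    print(count_at_least(trials, 2.5))  # -> 3 (counts 2.5, 2.5, 4.0)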

References

  1. Kirkpatrick, S.: Optimization by simulated annealing: quantitative studies. J. Stat. Phys. 34(5–6), 975–986 (1984)

  2. Lasdon, L., Duarte, A., Glover, F., Laguna, M., Martí, R.: Adaptive memory programming for constrained global optimization. Comput. Oper. Res. 37(8), 1500–1509 (2010)

  3. Yang, X.S., Deb, S.: Cuckoo search via Lévy flights. In: World Congress on Nature and Biologically Inspired Computing, NaBIC 2009, pp. 210–214. IEEE (2009)

  4. Civicioglu, P.: Backtracking search optimization algorithm for numerical optimization problems. Appl. Math. Comput. 219(15), 8121–8144 (2013)

  5. Karaboga, D., Gorkemli, B.: A quick artificial bee colony (qABC) algorithm and its performance on optimization problems. Appl. Soft Comput. 23, 227–238 (2014)

  6. Civicioglu, P., Besdok, E.: A conceptual comparison of the cuckoo-search, particle swarm optimization, differential evolution and artificial bee colony algorithms. Artif. Intell. Rev. 39(4), 315–346 (2013)

  7. Moré, J.J., Wild, S.M.: Benchmarking derivative-free optimization algorithms. SIAM J. Optim. 20(1), 172–191 (2009)

  8. Floudas, C.A., Pardalos, P.M.: Encyclopedia of Optimization. Springer Science and Business Media, Heidelberg (2009)

  9. Gavana, A.: Global optimization benchmarks and AMPGO (2014). Accessed Apr 2016

  10. Karaboga, D., Basturk, B.: A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J. Glob. Optim. 39(3), 459–471 (2007)

  11. Rios, L.M., Sahinidis, N.V.: Derivative-free optimization: a review of algorithms and comparison of software implementations. J. Glob. Optim. 56(3), 1247–1293 (2013)

  12. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201–213 (2002)

  13. Alizamir, S., Pardalos, P.M., Rebennack, S.: Improving the neighborhood selection strategy in simulated annealing using the optimal stopping problem. INTECH Open Access Publisher (2008)

Acknowledgments

This research project was funded by the Roanoke College Mathematics, Computer Science, and Physics Department. The Python code for AMPGO and the benchmarking library are a result of the freely available work done by Andrea Gavana [9].

Author information

Correspondence to Thomas Lux.

Appendix

Table 1. Mathematical formulations of the functions used to compare the optimization algorithms. For the definitions of the Needle Eye, Penalty02, Rana, and Zero Sum functions, see the code provided in the electronic supplementary materials.
Table 2. Objective functions used for evaluating the five optimization algorithms.

Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Lux, T. (2016). Convergence Rate Evaluation of Derivative-Free Optimization Techniques. In: Pardalos, P., Conca, P., Giuffrida, G., Nicosia, G. (eds) Machine Learning, Optimization, and Big Data. MOD 2016. Lecture Notes in Computer Science, vol 10122. Springer, Cham. https://doi.org/10.1007/978-3-319-51469-7_21

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-51469-7_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-51468-0

  • Online ISBN: 978-3-319-51469-7

  • eBook Packages: Computer Science, Computer Science (R0)
