Algorithmic Parameter Optimization of the DFO Method with the OPAL Framework

Chapter in Software Automatic Tuning (Springer, New York, 2011)

Abstract

We introduce the OPAL framework, in which the identification of good algorithmic parameters is cast as a black-box optimization problem whose variables are the algorithmic parameters. In addition to the target algorithm, the user of the framework must supply or select two components. The first is a set of metrics defining the notions of acceptable parameter values and of performance of the algorithm. The second is a collection of representative sets of valid input data for the target algorithm. OPAL may be applied to virtually any context in which parameter tuning leads to increased performance. The black-box optimization problem is solved by a direct-search method that provides local optimality guarantees and offers a certain flexibility. We illustrate its use on a parameter-tuning application to the DFO method from the field of derivative-free optimization.
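The abstract's formulation can be made concrete with a small sketch. The following is a minimal, self-contained Python illustration (OPAL itself is implemented in Python), not OPAL's actual API: `run_target_algorithm` and `performance_measure` are hypothetical stand-ins for runs of the target algorithm over a collection of test problems, and an elementary compass search replaces the more sophisticated direct-search method with local optimality guarantees that the abstract refers to.

```python
# Sketch: parameter tuning posed as black-box optimization.
# All names below are illustrative, not OPAL's API.

def run_target_algorithm(params, problem):
    """Hypothetical: run the target algorithm (e.g., DFO) with the given
    parameter values on one test problem and report a cost such as CPU
    time. Here we fake the cost with a smooth function of the parameters."""
    trust_radius, decrease_factor = params
    # Pretend each problem has a preferred setting; deviation costs time.
    best_radius, best_factor = problem
    return (trust_radius - best_radius) ** 2 + (decrease_factor - best_factor) ** 2

def performance_measure(params, problems):
    """Aggregate performance metric over the whole problem collection."""
    return sum(run_target_algorithm(params, p) for p in problems)

def compass_search(f, x0, step=0.5, tol=1e-4, max_iter=1000):
    """Elementary compass (coordinate) direct search: poll the 2n axis
    directions, move on improvement, otherwise shrink the step size."""
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                f_trial = f(trial)
                if f_trial < fx:
                    x, fx, improved = trial, f_trial, True
        if not improved:
            step *= 0.5          # refine the poll size, as in direct search
            if step < tol:
                break
    return x, fx

if __name__ == "__main__":
    # A small collection of representative "valid input data" (test problems).
    problems = [(1.0, 0.4), (1.2, 0.5), (0.8, 0.6)]
    best, cost = compass_search(lambda p: performance_measure(p, problems),
                                x0=[2.0, 0.9])
    print("tuned parameters:", best, "aggregate cost:", cost)
```

In the real framework, the performance measure would invoke the target algorithm on each test problem and aggregate a user-chosen metric (CPU time, function evaluations, etc.), with the user's acceptability metrics acting as constraints on the parameter values.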

Author information

Corresponding author

Correspondence to Dominique Orban.

Copyright information

© 2011 Springer New York

About this chapter

Cite this chapter

Audet, C., Dang, CK., Orban, D. (2011). Algorithmic Parameter Optimization of the DFO Method with the OPAL Framework. In: Naono, K., Teranishi, K., Cavazos, J., Suda, R. (eds) Software Automatic Tuning. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-6935-4_15

  • DOI: https://doi.org/10.1007/978-1-4419-6935-4_15

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4419-6934-7

  • Online ISBN: 978-1-4419-6935-4
