Abstract
We introduce the OPAL framework, in which the identification of good algorithmic parameters is cast as a black-box optimization problem whose variables are the parameters of the target algorithm. Besides the target algorithm itself, the user of the framework must supply or select two components. The first is a set of metrics defining which parameter values are acceptable and how the performance of the algorithm is measured. The second is a collection of representative sets of valid input data for the target algorithm. OPAL may be applied to virtually any context in which parameter tuning leads to increased performance. The black-box optimization problem is solved by a direct-search method that provides local optimality guarantees and offers a degree of flexibility. We illustrate the framework by tuning the parameters of the DFO method from the field of derivative-free optimization.
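To make the setting concrete, the sketch below casts parameter tuning as a black-box optimization problem and solves it with a simple compass (coordinate) search, a basic member of the direct-search family. This is not the OPAL/NOMAD implementation described in the chapter; the target algorithm (gradient descent with momentum on a toy objective) and the performance metric (iteration count to convergence) are illustrative stand-ins.

```python
# Sketch: algorithmic parameter tuning as black-box optimization.
# The "black box" maps parameter values to a performance measurement.

def target_algorithm(step, momentum):
    """Hypothetical target: gradient descent with momentum on f(x) = x^2.
    The performance metric is the number of iterations needed to reach
    |x| < 1e-6, with a large penalty on failure to converge."""
    x, v = 0.1, 0.0
    for it in range(10_000):
        if abs(x) < 1e-6:
            return it
        g = 2.0 * x                 # gradient of x^2
        v = momentum * v - step * g
        x += v
    return 10_000                   # failure penalty

def compass_search(metric, p0, mesh=0.1, min_mesh=1e-3):
    """Minimal direct search: poll +/- mesh along each coordinate,
    move to any improving neighbour, otherwise halve the mesh."""
    p, best = list(p0), metric(*p0)
    while mesh >= min_mesh:
        improved = False
        for i in range(len(p)):
            for d in (+mesh, -mesh):
                q = list(p)
                q[i] += d
                val = metric(*q)
                if val < best:
                    p, best, improved = q, val, True
        if not improved:
            mesh /= 2.0
    return p, best

params, score = compass_search(target_algorithm, [0.1, 0.0])
```

The MADS algorithm used by OPAL generalizes this loop with a richer set of poll directions and convergence theory for nonsmooth problems, but the structure is the same: each trial point triggers a full run of the target algorithm on the test data, and only the resulting metric value is visible to the optimizer.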
Copyright information
© 2011 Springer New York
Cite this chapter
Audet, C., Dang, CK., Orban, D. (2011). Algorithmic Parameter Optimization of the DFO Method with the OPAL Framework. In: Naono, K., Teranishi, K., Cavazos, J., Suda, R. (eds) Software Automatic Tuning. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-6935-4_15
Print ISBN: 978-1-4419-6934-7
Online ISBN: 978-1-4419-6935-4