An incremental particle swarm for large-scale continuous optimization problems: an example of tuning-in-the-loop (re)design of optimization algorithms

Abstract

The development cycle of high-performance optimization algorithms requires the algorithm designer to make several design decisions. These decisions range from implementation details to the setting of parameter values for testing intermediate designs. Proper parameter setting can be crucial for the effective assessment of algorithmic components because a bad parameter setting can make a good algorithmic component perform poorly. This situation may lead the designer to discard promising components that just happened to be tested with bad parameter settings. Automatic parameter tuning techniques are being used by practitioners to obtain peak performance from already designed algorithms. However, automatic parameter tuning also plays a crucial role during the development cycle of optimization algorithms. In this paper, we present a case study of a tuning-in-the-loop approach for redesigning a particle swarm-based optimization algorithm for tackling large-scale continuous optimization problems. Rather than just presenting the final algorithm, we describe the whole redesign process. Finally, we study the scalability behavior of the final algorithm in the context of this special issue.

Notes

  1. The terms “large-scale” and “high-dimensional” are used interchangeably in this paper.

  2. In this paper, we focus on the minimization case.

  3. For conciseness, we present here only the most relevant results. The complete set of results can be found on this paper’s companion website (http://iridia.ulb.ac.be/supp/IridiaSupp2010-011).

  4. We remind the reader that the complete set of results can be found at http://iridia.ulb.ac.be/supp/IridiaSupp2010-011.

References

  • Auger A, Hansen N (2005) A restart CMA evolution strategy with increasing population size. In: Proceedings of the IEEE congress on evolutionary computation (CEC 2005). IEEE Press, Piscataway, pp 1769–1776

  • Auger A, Hansen N, Zerpa JMP, Ros R, Schoenauer M (2009) Experimental comparisons of derivative free optimization algorithms. In: Vahrenhold J (ed) LNCS 5526. Proceedings of the symposium on experimental algorithmics (SEA 2009). Springer, Heidelberg, pp 3–15

  • Balaprakash P, Birattari M, Stützle T (2007) Improvement strategies for the F-Race algorithm: sampling design and iterative refinement. In: Bartz-Beielstein T et al (eds) LNCS 4771. Proceedings of the international workshop on hybrid metaheuristics (HM 2007). Springer, Heidelberg, pp 108–122

  • Bartz-Beielstein T (2006) Experimental research in evolutionary computation—the new experimentalism. Springer, Berlin

  • Birattari M (2009) Tuning metaheuristics: a machine learning perspective. Springer, Berlin

  • Birattari M, Stützle T, Paquete L, Varrentrapp K (2002) A racing algorithm for configuring metaheuristics. In: Langdon WB et al (eds) GECCO 2002: Proceedings of the genetic and evolutionary computation conference. Morgan Kaufmann, San Francisco, pp 11–18

  • Birattari M, Yuan Z, Balaprakash P, Stützle T (2010) F-Race and iterated F-race: an overview. In: Bartz-Beielstein T et al (eds) Experimental methods for the analysis of optimization algorithms. Springer, Berlin, pp 311–336

  • Chiarandini M, Birattari M, Socha K, Rossi-Doria O (2006) An effective hybrid algorithm for university course timetabling. J Sched 9(5):403–432

  • Clerc M, Kennedy J (2002) The particle swarm–explosion, stability, and convergence in a multidimensional complex space. IEEE Trans Evol Comput 6(1):58–73

  • Conn AR, Gould NIM, Toint PL (2000) Trust-region methods. MPS-SIAM series on optimization. MPS-SIAM, Philadelphia

  • Eshelman LJ, Schaffer JD (1993) Real-coded genetic algorithms and interval-schemata. In: Whitley DL (ed) Foundation of genetic algorithms 2. Morgan Kaufmann, San Mateo, pp 187–202

  • Hansen N (2010) The CMA evolution strategy. http://www.lri.fr/hansen/cmaesintro.html. Last accessed July 2010

  • Hansen N, Ostermeier A, Gawelczyk A (1995) On the adaptation of arbitrary normal mutation distributions in evolution strategies: the generating set adaptation. In: Eshelman L (ed) Proceedings of the sixth international conference on genetic algorithms. Morgan Kaufmann, San Francisco, pp 57–64

  • Herrera F, Lozano M, Molina D (2010) Test suite for the special issue of soft computing on scalability of evolutionary algorithms and other metaheuristics for large-scale continuous optimization problems. http://sci2s.ugr.es/eamhco/updated-functions1-19.pdf. Last accessed July 2010

  • Hoos HH, Stützle T (2005) Stochastic local search: foundations and applications. Morgan Kaufmann, San Francisco

  • Hutter F, Hoos HH, Leyton-Brown K, Murphy KP (2009) An experimental investigation of model-based parameter optimisation: SPO and beyond. In: Rothlauf F (ed) GECCO 2009: Proceedings of the genetic and evolutionary computation conference. ACM Press, New York, pp 271–278

  • Johnson SG (2010) The NLopt nonlinear-optimization package. http://ab-initio.mit.edu/nlopt. Last accessed July 2010

  • Kennedy J, Eberhart R (2001) Swarm intelligence. Morgan Kaufmann, San Francisco

  • KhudaBukhsh AR, Xu L, Hoos HH, Leyton-Brown K (2009) SATenstein: automatically building local search SAT solvers from components. In: Boutilier C et al (eds) Proceedings of the international joint conference on artificial intelligence (IJCAI 2009), pp 517–524

  • López-Ibáñez M, Stützle T (2010) Automatic configuration of multi-objective ACO algorithms. In: Dorigo M et al (eds) LNCS 6234. Proceedings of the international conference on swarm intelligence (ANTS 2010). Springer, Heidelberg, pp 95–106

  • Lozano M, Herrera F (2010) Call for papers: Special issue of soft computing: a fusion of foundations, methodologies and applications on scalability of evolutionary algorithms and other metaheuristics for large scale continuous optimization problems. http://sci2s.ugr.es/eamhco/CFP.php. Last accessed July 2010

  • Montes de Oca MA, Aydın D, Stützle T. An incremental particle swarm for large-scale optimization problems: Complete data. http://iridia.ulb.ac.be/supp/IridiaSupp2010-011

  • Montes de Oca MA, Stützle T, Van den Enden K, Dorigo M (2010) Incremental social learning in particle swarms. IEEE Trans Syst Man Cybern B Cybern (in press)

  • Montes de Oca MA, Van den Enden K, Stützle T (2008) Incremental particle swarm-guided local search for continuous optimization. In: Blesa MJ et al (eds) LNCS 5296. Proceedings of the international workshop on hybrid metaheuristics (HM 2008). Springer, Heidelberg, pp 72–86

  • Moré J, Wild S (2009) Benchmarking derivative-free optimization algorithms. SIAM J Optim 20(1):172–191

  • Nannen V, Eiben AE (2007) Relevance estimation and value calibration of evolutionary algorithm parameters. In: Proceedings of the international joint conference on artificial intelligence (IJCAI 2007), pp 975–980

  • Powell MJD (1964) An efficient method for finding the minimum of a function of several variables without calculating derivatives. The Comput J 7(2):155–162

  • Powell MJD (2006) The NEWUOA software for unconstrained optimization. Large-scale nonlinear optimization. Nonconvex optimization and its applications, vol 83. Springer, Berlin, pp 255–297

  • Powell MJD (2009) The BOBYQA algorithm for bound constrained optimization without derivatives. Technical Report NA2009/06, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge

  • Press WH, Teukolsky SA, Vetterling WT, Flannery BP (1992) Numerical Recipes in C. The Art of Scientific Computing, 2nd edn. Cambridge University Press, New York

  • Smit SK, Eiben AE (2009) Comparing parameter tuning methods for evolutionary algorithms. In: Proceedings of the IEEE congress on evolutionary computation (CEC 2009). IEEE Press, Piscataway, pp 399–406

  • Storn R, Price K (1997) Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11(4):341–359

  • Stützle T, Birattari M, Hoos HH (eds) (2007) Engineering stochastic local search algorithms. Designing, implementing and analyzing effective heuristics. International workshop, SLS 2007. LNCS 4638. Springer, Heidelberg

  • Stützle T, Birattari M, Hoos HH (eds) (2009) Engineering stochastic local search algorithms. Designing, implementing and analyzing effective heuristics. Second international workshop, SLS 2009. LNCS 5752. Springer, Heidelberg

  • Yuan Z, Montes de Oca MA, Birattari M, Stützle T (2010) Modern continuous optimization algorithms for tuning real and integer algorithm parameters. In: Dorigo M et al (eds) LNCS 6234. Proceedings of the international conference on swarm intelligence (ANTS 2010). Springer, Heidelberg, pp 204–215

Acknowledgments

The work described in this paper was supported by the META-X project, an Action de Recherche Concertée funded by the Scientific Research Directorate of the French Community of Belgium. Thomas Stützle acknowledges support from the F.R.S.-FNRS of the French Community of Belgium, of which he is a Research Associate. The authors thank Manuel López-Ibáñez for adapting the code of iterated F-race to deal with the tuning task studied in this paper.

Author information

Correspondence to Marco A. Montes de Oca.

Appendix

1.1 Benchmark functions

A set of 19 scalable benchmark functions, proposed by Herrera et al. (2010), was used in this paper. Their mathematical definitions are given in Table 14. The source code implementing these functions was also provided by the same authors (available from Lozano and Herrera 2010).

Table 14 Benchmark functions

For functions F1–F11 (and some of the hybrid functions, F12–F19), candidate solutions x are transformed as z = x − o before evaluation. This transformation shifts the optimal solution from the origin of the coordinate system to o, with o ∈ [X_min, X_max]^n. For function F3, the transformation is z = x − o + 1. Hybrid functions combine two basic functions; the combination procedure is described in Herrera et al. (2010). The parameter m_ns controls the number of solution components that are taken from a nonseparable function (functions F3, F5, F9, and F10). The higher m_ns, the larger the number of components evaluated with a nonseparable function.
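The following sketch illustrates the shift transformation just described. It is only an illustration: the wrapper, the sample sphere function, and the variable names are ours and are not the benchmark suite's actual implementation.

```python
import numpy as np

def shifted(f, o, offset=0.0):
    """Wrap a basic benchmark function f so that its optimum is moved to o."""
    def wrapped(x):
        # z = x - o (plus an optional constant, e.g. +1 for an F3-like shift)
        z = np.asarray(x, dtype=float) - np.asarray(o, dtype=float) + offset
        return f(z)
    return wrapped

def sphere(z):
    # Illustrative separable basic function with its minimum (0) at the origin.
    return float(np.dot(z, z))

n, x_min, x_max = 10, -100.0, 100.0
o = np.random.uniform(x_min, x_max, size=n)   # shift vector o in [X_min, X_max]^n

f1_like = shifted(sphere, o)                  # optimum moved to x = o
f3_like = shifted(sphere, o, offset=1.0)      # z = x - o + 1, as for F3

print(f1_like(o))         # ~0.0
print(f3_like(o - 1.0))   # ~0.0
```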

1.2 Ranges and setting of free and fixed algorithm parameters

During tuning, iterated F-race must be given the list of free parameters together with their ranges or domains. The free parameters and the ranges or domains used with iterated F-race are listed in Table 15. A description of their meaning and effect is given in the main text.

Table 15 Free parameters in IPSOLS (all versions)

All other parameter settings of IPSOLS, for both the tuned and nontuned versions, remained fixed. They are listed, together with their values, in Table 16.

Table 16 Fixed parameters in IPSOLS

1.3 Iterated F-race parameter setting

Iterated F-race (Balaprakash et al. 2007; Birattari et al. 2010) has a number of parameters that need to be set before it can be used. The parameter settings used in our work are shown in Table 17.

Table 17 Iterated F-race parameter settings

In iterated F-race, the number of iterations L is equal to 2 + round(log2(d)), where d is the number of parameters to tune. Each iteration has its own maximum number of evaluations. This number, denoted by B_l, is equal to (B − B_used)/(L − l + 1), where l is the iteration counter, B is the overall maximum number of evaluations, and B_used is the number of evaluations used up to iteration l − 1. The number of candidate configurations tested during iteration l is equal to ⌊B_l/μ_l⌋. For more information on the parameters of iterated F-race and their effect, please see Balaprakash et al. (2007) and Birattari et al. (2010).
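As a worked illustration of this budget schedule, the following sketch computes L, B_l, and the number of candidates tested per iteration. It is only a sketch: the rule used here for μ_l (μ_l = μ + min(5, l)) is an assumption about a typical iterated F-race setting rather than a value taken from this paper; see Balaprakash et al. (2007) and Birattari et al. (2010) for the exact definition.

```python
import math

def budget_schedule(d, B, mu=5):
    """Sketch of the iterated F-race budget schedule described above.

    d:  number of parameters to tune
    B:  overall maximum number of evaluations
    mu: base value for mu_l; mu_l = mu + min(5, l) is an assumption here.
    """
    L = 2 + round(math.log2(d))              # number of iterations
    B_used = 0
    schedule = []
    for l in range(1, L + 1):
        B_l = (B - B_used) // (L - l + 1)    # evaluation budget of iteration l
        mu_l = mu + min(5, l)                # assumed rule for mu_l
        N_l = B_l // mu_l                    # candidate configurations tested
        schedule.append((l, B_l, N_l))
        B_used += B_l
    return schedule

# Example: 8 free parameters and an overall budget of 5000 evaluations.
for l, B_l, N_l in budget_schedule(d=8, B=5000):
    print(f"iteration {l}: budget {B_l}, candidates {N_l}")
```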

About this article

Cite this article

Montes de Oca, M.A., Aydın, D. & Stützle, T. An incremental particle swarm for large-scale continuous optimization problems: an example of tuning-in-the-loop (re)design of optimization algorithms. Soft Comput 15, 2233–2255 (2011). https://doi.org/10.1007/s00500-010-0649-0
