Quantitative Performance Assessment of Multiobjective Optimizers: The Average Runtime Attainment Function

  • Conference paper
  • In: Evolutionary Multi-Criterion Optimization (EMO 2017)
  • Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10173)

Abstract

Numerical benchmarking of multiobjective optimization algorithms is an important task needed to understand and recommend algorithms. So far, two main approaches to assessing algorithm performance have been pursued: using set quality indicators, and the (empirical) attainment function and its higher-order moments as a generalization of empirical cumulative distributions of function values. Both approaches have their advantages but rely on the choice of a quality indicator and/or take into account only the location of the resulting solution sets and not when certain regions of the objective space are attained. In this paper, we propose the average runtime attainment function as a quantitative measure of the performance of a multiobjective algorithm. It estimates, for any point in the objective space, the expected runtime to find a solution that weakly dominates this point. After defining the average runtime attainment function and detailing the relation to the (empirical) attainment function, we illustrate how the average runtime attainment function plot displays algorithm performance (and differences in performance) for some algorithms that have been previously run on the biobjective bbob-biobj test suite of the COCO platform.
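
As a minimal illustration of the estimate the abstract describes (a sketch under our own assumptions about data layout and names, not the COCO implementation): given, per instance, the chronological record of evaluated solutions, the empirical average runtime at a target point z sums the evaluations spent over all runs, counting the full budget for runs that never attain z, and divides by the number of runs that do attain z (minimization in both objectives is assumed).

    import math

    def art_at_point(runs, z):
        # Empirical average runtime (aRT) to attain the target z = (z1, z2).
        # `runs`: one entry per instance; each entry is a chronological list
        # of (evals, f1, f2) triples recording at which evaluation count a
        # solution with objective vector (f1, f2) was found.
        total_evals, successes = 0, 0
        for run in runs:
            # first evaluation count at which a solution weakly dominates z
            hit = next((evals for evals, f1, f2 in run
                        if f1 <= z[0] and f2 <= z[1]), None)
            if hit is not None:
                total_evals += hit
                successes += 1
            else:
                # unsuccessful run: its entire budget counts (last record)
                total_evals += run[-1][0]
        return total_evals / successes if successes else math.inf

For example, with runs = [[(1, 4.0, 3.0), (10, 2.0, 2.5)], [(1, 5.0, 1.0), (8, 3.0, 0.5)]], art_at_point(runs, (2.5, 3.0)) returns 18.0: the first run attains the point after 10 evaluations, the second never does and contributes its budget of 8. Evaluating such a function on a grid of points z yields the aRTA surface displayed in the plots; dividing two algorithms' values point by point gives the aRTA ratio displays (see note 2 below).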

Notes

  1. See https://numbbo.github.io/workshops/BBOB-2016/.

  2. We opt for displaying ratios here instead of differences as the ratio scale is more natural for statements on runtimes and also has stronger theoretical properties than the interval scale [13].

  3. https://github.com/numbbo/coco/tree/master/code-postprocessing/aRTAplots.

  4. Note that such a normalization allows objective values to be larger than 1 and that our plots clip the display to objective values smaller than 10 (see the normalization sketch after these notes).

  5. Note that with the logscale parameter in the provided source code, the log scale can easily be turned on and off.

  6. A single function/dimension combination with 10 instances produces up to 930 MB of data.

  7. All experiments were performed on an Intel Core i7-5600U CPU Windows 7 laptop with 8 GB of RAM.

  8. Note that it is not necessarily the case that the instance with the smallest (largest) number of solutions recorded results in the smallest (largest) set of downsampled points (the downsampling sketch after these notes illustrates why).
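
Regarding the normalization in note 4: in the bbob-biobj suite, objective vectors are normalized so that the ideal point maps to (0, 0) and the nadir point to (1, 1), which is why values larger than 1 can occur. A minimal sketch of this convention and of the display clipping (function names are ours):

    def normalize(f, ideal, nadir):
        # Map the ideal point to (0, 0) and the nadir point to (1, 1);
        # values above 1 occur whenever a solution is worse than the
        # nadir in some objective.
        return tuple((fi - lo) / (hi - lo)
                     for fi, lo, hi in zip(f, ideal, nadir))

    def in_display_range(f_norm, upper=10.0):
        # The plots clip the display to normalized values smaller than 10.
        return all(fi < upper for fi in f_norm)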

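Regarding note 8: the paper's exact downsampling procedure is not reproduced here, but a grid-based variant (purely an illustrative assumption) shows why the instance with the fewest recorded solutions need not yield the fewest downsampled points: the downsampled count depends on how many grid cells the solutions cover, not on how many solutions there are.

    def downsample_on_grid(points, cells=100, upper=10.0):
        # Keep one representative per occupied grid cell of the
        # (normalized) objective space. Many clustered solutions can
        # collapse into fewer cells than a handful of spread-out ones,
        # which is the effect described in note 8.
        kept = {}
        for f1, f2 in points:
            cell = (int(f1 / upper * cells), int(f2 / upper * cells))
            kept.setdefault(cell, (f1, f2))  # first point per cell wins
        return list(kept.values())
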
References

  1. Auger, A., Brockhoff, D., Hansen, N., Tušar, D., Tušar, T., Wagner, T.: Benchmarking MATLAB’s Gamultiobj (NSGA-II) on the bi-objective BBOB-2016 test suite. In: GECCO (Companion) Workshop on Black-Box Optimization Benchmarking (BBOB 2016), pp. 1233–1239. ACM (2016)

  2. Auger, A., Brockhoff, D., Hansen, N., Tušar, D., Tušar, T., Wagner, T.: Benchmarking the pure random search on the bi-objective BBOB-2016 testbed. In: GECCO (Companion) Workshop on Black-Box Optimization Benchmarking (BBOB 2016), pp. 1217–1223. ACM (2016)

  3. Auger, A., Brockhoff, D., Hansen, N., Tušar, D., Tušar, T., Wagner, T.: The impact of variation operators on the performance of SMS-EMOA on the bi-objective BBOB-2016 test suite. In: GECCO (Companion) Workshop on Black-Box Optimization Benchmarking (BBOB 2016), pp. 1225–1232. ACM (2016)

  4. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)

  5. Fonseca, C.M., Fleming, P.J.: On the performance assessment and comparison of stochastic multiobjective optimizers. In: Voigt, H.-M., Ebeling, W., Rechenberg, I., Schwefel, H.-P. (eds.) PPSN 1996. LNCS, vol. 1141, pp. 584–593. Springer, Heidelberg (1996). doi:10.1007/3-540-61723-X_1022

  6. Grunert da Fonseca, V., Fonseca, C.M., Hall, A.O.: Inferential performance assessment of stochastic optimisers and the attainment function. In: Zitzler, E., Thiele, L., Deb, K., Coello Coello, C.A., Corne, D. (eds.) EMO 2001. LNCS, vol. 1993, pp. 213–225. Springer, Heidelberg (2001). doi:10.1007/3-540-44719-9_15

  7. Hansen, N., Auger, A., Brockhoff, D., Tušar, D., Tušar, T.: COCO: performance assessment. CoRR abs/1605.03560 (2016). http://arxiv.org/abs/1605.03560

  8. Hansen, N., Auger, A., Mersmann, O., Tušar, T., Brockhoff, D.: COCO: a platform for comparing continuous optimizers in a black-box setting. CoRR abs/1603.08785 (2016). http://arxiv.org/abs/1603.08785

  9. Hoos, H., Stützle, T.: Evaluating Las Vegas algorithms: pitfalls and remedies. In: Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, pp. 238–245. Morgan Kaufmann Publishers Inc. (1998)

  10. Hoos, H.H., Stützle, T.: Stochastic Local Search: Foundations and Applications. Elsevier, San Francisco (2004)

  11. López-Ibáñez, M., Paquete, L., Stützle, T.: Exploratory analysis of stochastic local search algorithms in biobjective optimization. In: Bartz-Beielstein, T., Chiarandini, M., Paquete, L., Preuss, M. (eds.) Experimental Methods for the Analysis of Optimization Algorithms, pp. 209–222. Springer, Heidelberg (2010). Chap. 9

  12. Moré, J., Wild, S.: Benchmarking derivative-free optimization algorithms. SIAM J. Optim. 20(1), 172–191 (2009). Preprint available as Mathematics and Computer Science Division, Argonne National Laboratory, Preprint ANL/MCS-P1471-1207, May 2008

  13. Stevens, S.S.: On the theory of scales of measurement. Science 103(2684), 677–680 (1946)

  14. Tušar, T., Brockhoff, D., Hansen, N., Auger, A.: COCO: the bi-objective black box optimization benchmarking (bbob-biobj) test suite. CoRR abs/1604.00359 (2016). http://arxiv.org/abs/1604.00359

  15. Wong, C., Al-Dujaili, A., Sundaram, S.: Hypervolume-based DIRECT for multi-objective optimisation. In: GECCO (Companion) Workshop on Black-Box Optimization Benchmarking (BBOB 2016), pp. 1201–1208. ACM (2016)

Acknowledgments

The authors acknowledge the support of the French National Research Agency (ANR) within the Modèles Numérique project “NumBBO – Analysis, Improvement and Evaluation of Numerical Blackbox Optimizers” (ANR-12-MONU-0009). In addition, this work is part of a project that has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No. 692286. This work was also partially funded by the Slovenian Research Agency under research program P2-0209. Finally, we thank the anonymous reviewers for their valuable comments.

Author information

Corresponding author: Dimo Brockhoff.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Brockhoff, D., Auger, A., Hansen, N., Tušar, T. (2017). Quantitative Performance Assessment of Multiobjective Optimizers: The Average Runtime Attainment Function. In: Trautmann, H., et al. Evolutionary Multi-Criterion Optimization. EMO 2017. Lecture Notes in Computer Science, vol 10173. Springer, Cham. https://doi.org/10.1007/978-3-319-54157-0_8

  • DOI: https://doi.org/10.1007/978-3-319-54157-0_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-54156-3

  • Online ISBN: 978-3-319-54157-0

  • eBook Packages: Computer Science, Computer Science (R0)
